Classifying Convergence Complexity of Nash Equilibria in Graphical Games Using Distributed Computing Theory
Juho Hirvonen, Laura Schmid, Krishnendu Chatterjee, Stefan Schmid
JUHO HIRVONEN, Aalto University, Finland
LAURA SCHMID, IST Austria, Austria
KRISHNENDU CHATTERJEE, IST Austria, Austria
STEFAN SCHMID, University of Vienna, Austria
Graphical games are a useful framework for modeling the interactions of (selfish) agents who are connected via an underlying topology and whose behaviors influence each other. They have wide applications ranging from computer science to economics and biology. Yet, even though a player's payoff only depends on the actions of their direct neighbors in graphical games, computing the Nash equilibria and making statements about the convergence time of "natural" local dynamics in particular can be highly challenging. In this work, we present a novel approach for classifying the complexity of Nash equilibria in graphical games by establishing a connection to local graph algorithms, a subfield of distributed computing. In particular, we make the observation that the equilibria of graphical games are equivalent to locally verifiable labelings (LVL) in graphs; vertex labelings which are verifiable with a constant-round local algorithm. This connection allows us to derive novel lower bounds on the convergence time to equilibrium of best-response dynamics in graphical games. Since we establish that distributed convergence can sometimes be provably slow, we also introduce and give bounds on an intuitive notion of "time-constrained" inefficiency of best responses. We exemplify how our results can be used in the implementation of mechanisms that ensure convergence of best responses to a Nash equilibrium. Our results thus also give insight into the convergence of strategy-proof algorithms for graphical games, which is still not well understood.
Modeling the interactions of multiple selfish agents, whose decisions and behavior influence each other and are in some way dependent on an underlying topology, is an important aspect of solving problems from a wide range of diverse fields. Game theory offers a natural approach to this issue in the form of multiplayer network games [14, 30]. Be it resource allocation [5], routing [44], provision of public goods [30], virus inoculation [4], voting [46], or the spread of trends through social networks [1], instances of network games are everywhere to be found. They all have in common that interactions and players' decisions are in some way governed by the underlying graph and that players' payoffs usually depend on the network that links them.

Previous literature suggests that general network games, such as the rich class of congestion games [44], can be difficult to analyze, especially if treating them as a single monolithic group. Considering their heterogeneity and complexity, a considerable amount of research has thus been dedicated to more restricted versions of such games. One approach is to consider simplifications of specific network games such as singleton congestion games [11]. A different and less limiting approach is to focus on "succinctly representable" network games: each player's utility is solely determined by their own and their direct neighbors' actions. These so-called graphical games, introduced by Kearns et al. [33], capture the locality of effects on players, and are useful for settings where there are only a few strong direct influences on every agent, i.e., when the underlying graph has a low degree [14]. These games have strong connections to Bayesian probabilistic inference networks ("probabilistic graphical models"). Graphical games have been subject to a wide range of research in their own right; see [31] for a comprehensive survey.
Computing the Nash equilibria of graphical games or proving results about their properties or convergence is however challenging in general. Previous work has dealt with aspects such as deciding whether a given strategy profile is a Nash equilibrium or whether equilibria exist, and classifying the complexity of these tasks [45]. The (approximate) computation of equilibria, often for various specific types of graphs, has also been a focus of a considerable number of studies. Previous work uses a variety of approaches for this. Kearns et al. give a polynomial dynamic programming algorithm for computing approximate equilibria in trees, inspired by belief propagation. This algorithm can also be extended to a distributed message-passing scheme for more general topologies [40]; however, it is important to note that the result is not guaranteed to be efficient, nor is it strategy-proof: indeed, players can have incentives to deviate from the algorithm if they are actually attempting to reach the Nash equilibrium they compute, as the computation does not reflect natural and rational game dynamics. Follow-up work gave some intractability results for this algorithm when it is used to calculate exact equilibria [25].

Other works try to generalize beyond specific graph structures: Daskalakis and Papadimitriou [24] relate graphical games to random Markov nets. This reduction can then be used in the computation of pure Nash equilibria. The authors further show polynomial complexity of finding pure equilibria on graphs with bounded treewidth, and give exponential algorithms for computing approximate mixed equilibria. Other papers, such as [48], use multi-agent algorithms like hill-climbing or constraint satisfaction approaches to calculate approximate equilibria. These heuristics show good performance, but have no worst-case guarantees.
Correlated equilibria in graphical games and related intractability results have also been of interest, for example in [42]. Furthermore, Jackson and Yariv [29] investigate best-response dynamics and diffusion of behavior in a dynamic form of graphical games. They present comparative statics results and investigate (Bayesian) equilibrium stability when behaviors can propagate.

Despite this rich and multifaceted literature, we still lack a systematic understanding and classification of graphical games in terms of (distributed) computational complexity; in particular, the convergence time of strategy-proof algorithms and local dynamics such as best response to Nash equilibria is still not properly understood, except for special cases such as [34].

This paper presents a novel approach for shedding light on the distributed complexity of Nash equilibria and the dynamics of the convergence behavior in graphical games, using the perspective of distributed computing. In particular, we establish a connection to distributed local graph algorithms [43], which is not only natural and intuitive, but also allows us to leverage some of the analytical techniques and powerful results developed in this field over the last decades. More specifically, we show that the equilibria of graphical games are equivalent to locally verifiable labelings (LVL) in graphs [35]: LVLs are solutions to distributed graph problems equivalent to the game. In a nutshell, a labeling is locally verifiable if and only if there exists a constant-round distributed algorithm that can check if the labeling is correct around each node. For verification, each node thus simply checks the strategy that is assigned to itself and its neighbors, and can hence locally check if this action is in equilibrium. This close connection has only been rarely explored, for example in work that considers extensive game formulations of distributed algorithms [23]. We leverage it to derive several more general results on graphical games.
First, we prove that best-response dynamics can converge only as fast as a distributed algorithm can compute solutions to an equivalent graph problem. With this, we can classify the convergence properties and efficiency of local dynamics in network games by classifying the corresponding LVL problem. Since we observe an inherently slow distributed convergence time in many scenarios, we then give a more fine-grained view on the costs of distributed computations, and introduce a new notion of a time-constrained inefficiency of best-response dynamics, a natural measure of efficiency when convergence to Nash equilibria can be slow. We exemplify our results with two simple and fundamental graphical games that have been well studied in the literature: the one-shot public goods game [14] and a simple minority game [18] (also called "social game" or "restaurant game" in previous literature, e.g. [13]). For these games, we present convergence properties, lower bounds, and efficiency results, which all serve to highlight the key novelty of our approach.

We further show that in cases where best responses do not converge to the equilibrium despite the provable existence of an efficiently computable one, our techniques can be used to point to mechanism implementations where this can be guaranteed. Since best-response dynamics are rational with respect to the restricted local knowledge of nodes, our work also connects to the open question of strategy-proof algorithms for distributed Nash equilibria computation that is posited in [31].

In the following, we provide the necessary preliminaries for our results by giving definitions of game-theoretic concepts and revisiting important cornerstones of distributed complexity theory and distributed graph problems.
We first define a multiplayer game in its general form as consisting of n players i ∈ I, each equipped with a pure strategy space or action space A_i. The Cartesian product A_1 × A_2 × ... × A_n of the action spaces of individual agents is denoted by A. Furthermore, every player has a utility function u_i(a) for each profile of strategies or actions a = (a_1, a_2, ..., a_n), i.e., a mapping u_i : A → ℝ. We use a_{−i} to denote the joint strategy profile of all players except for player i.

Throughout the paper, we will focus on pure strategies, and later give an outlook on mixed strategy extensions in the conclusion section. A mixed strategy σ_i is a probability distribution over pure strategies or actions, with the corresponding strategy space Σ_i for each player. We can then define joint mixed strategy profiles as elements of the product space Σ = ×_i Σ_i.

We can now define the central concept of a (pure) Nash equilibrium. A strategy or action profile a* is a Nash equilibrium if for all players i it holds that

    u_i(a*_i, a*_{−i}) ≥ u_i(a_i, a*_{−i})  for all a_i ∈ A_i.    (1)

In other words, no player in a Nash equilibrium can gain a higher utility by unilaterally deviating. We say that the strategy a*_i is a best response to the rest of the strategy profile a*_{−i}. In the case of mixed strategies, a Nash equilibrium is characterized by the relation

    u_i(σ*_i, σ*_{−i}) ≥ u_i(a_i, σ*_{−i})  for all a_i ∈ A_i,    (2)

i.e., it suffices to check pure strategy deviations to confirm that a mixed strategy profile is an equilibrium.

We can now define graphical games, given by a triple (A, u, N). As introduced in [33], graphical games are a concisely representable form of multiplayer games on networks. A network or a graph N = (V, E) consists of a set of n nodes V and a set E of edges between pairs of nodes. The nodes of the network N then represent the agents, or players, in the graphical game.
We define the local neighborhood of a node v as B(v) = { j ∈ V : (v, j) ∈ E } ∪ {v} ⊆ {1, .., n}, i.e., v ∈ B(v) as well. As in normal-form multiplayer games, each agent v is equipped with an action space A_v, and A denotes the product of these action spaces. A player's utility function u_v now depends only on the strategy profile restricted to their local neighborhood B(v), i.e., the partial strategy profile a_{B(v)}. We denote the product of the individual agents' utility functions by u. Throughout this work, we will assume that the network N has a constant maximum degree Δ.

In this work, we consider the question of convergence to Nash equilibria via best-response dynamics, a specific example of local dynamics. We assume that players can update their strategy between rounds of the game. To this end, they use the rule to update their strategy with the best response to their neighborhood's strategy profile. That is, the action of a node v in round t is a best response to the partial strategy profile a_{B(v)}(t − 1). There is some fixed order on the actions that is used to break ties. We assume that nodes only have local information and cannot look beyond their neighborhood. Their restricted knowledge makes such local dynamics rational. In this work, we will subsequently only consider best responses. However, our results also hold for more general local dynamics.

In order to have a reasonable definition of a running time for local dynamics, we define a model of fair best responses. The play consists of fair rounds. During each round the adversary schedules all agents to act exactly once and one at a time. The convergence time of best-response dynamics on a fixed network is the maximum number of fair rounds until all players have reached a Nash equilibrium, over all possible orders of play.
For random initial strategy profiles we say that best-response dynamics converge if they converge with high probability over the initial strategy assignment.

We next present some preliminaries on distributed graph algorithms and complexity. In particular, this paper will establish a connection of graphical games to the
LOCAL model of computation [38, 43]: in this model, we are given a fixed network N = (V, E) (a graph) connecting n = |V(N)| nodes. The nodes collaboratively aim to solve a given task, but can only communicate with their neighbors in the graph. The computation proceeds in synchronous rounds, and in each round each node in the graph can send a message to each of its neighbors (there is no bound on the message size), receive a message from each neighbor, and perform local computations (there is no bound on the complexity of these local computations). Initially the nodes do not know anything about the input network. Each node is responsible for computing its own part of the output. The (distributed) complexity of a local algorithm is measured in the number of communication rounds until all nodes have computed their outputs.

To identify the nodes, each node gets an O(log n)-bit unique name as an input. We will also consider the randomized LOCAL model, where instead each node has access to its own private source of random bits. The randomized LOCAL model is at least as strong as the deterministic LOCAL model, as randomness can be used to generate unique identifiers with high probability.

The crucial property of the LOCAL model is that in t communication rounds each node can gather exactly its t-hop neighborhood in the network. Information that is outside this radius cannot affect the actions of the node, since it has not had time to travel to it. Since communication is not limited, the nodes can gather all information about their T-hop neighborhood in T rounds. This implies that a distributed algorithm with complexity T can, without loss of generality, be thought of as a function that maps the input-labelled T-hop neighborhoods to the possible outputs. The T-neighborhood of a node v is denoted by B(v, T). The radius-1 neighborhood is denoted simply by B(v).

We are particularly interested in a class of LOCAL problems called locally verifiable labelling (LVL) problems.
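The claim that a T-round algorithm is, without loss of generality, a function of the radius-T ball can be made concrete with a small sketch (our own illustration; the function and variable names are ours, not the paper's):

```python
from collections import deque

def ball(neighbors, v, t):
    """B(v, t): all nodes within t hops of v, with their distances.
    In the LOCAL model, this is exactly what v can learn in t rounds."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == t:  # information further away cannot reach v in time
            continue
        for w in neighbors[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

# A t-round algorithm is then just a function applied to the (labelled) ball:
# output_v = f(ball(neighbors, v, t)), computed at every node simultaneously.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a path on 4 nodes
```

For example, on the path above, `ball(path, 0, 2)` contains exactly the nodes 0, 1, and 2; node 3 is outside the radius and cannot influence node 0's output in two rounds.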
An LVL consists of an alphabet Σ and a set of configurations C. The alphabet Σ is simply some possibly infinite set of labels. Each configuration C ∈ C is a subgraph of N centered on some node v with radius at most k for some constant k (the verification radius). Each node of C is labelled with some element σ ∈ Σ. In this work we only consider LVLs with radius k = 1. A mapping f : V → Σ of the graph N is a solution to P if and only if each f-labelled k-neighborhood of N is a configuration in C. Locally verifiable labellings are a generalization of locally checkable labellings (LCLs) [39], a family of problems that has been studied extensively in recent years (for example [6, 8, 15, 16, 19–21]).

An important result in LOCAL algorithms theory, which will also be relevant for our work, is related to a "complexity gap": on general bounded-degree graphs, i.e., if the maximum degree of the graph N is bounded by a constant Δ, the deterministic distributed complexity of an LVL is either O(log* n) or Ω(log n), and the randomized complexity is either O(log* n) or Ω(log log n) [19]. Here, log* n is the iterated logarithm (pronounced "log-star"), a function which grows significantly more slowly than the logarithm: e.g., log* of the number of atoms in the observable universe is 5. Formally, log* n is defined as:

    log* x := 0 for all x ≤ 1,    log* x := 1 + log*(log x) for all x > 1.

The complexity gap on bounded-degree networks is also the best possible: there exists an LVL such that its deterministic complexity is Θ(log n) and the randomized complexity is Θ(log log n) [16, 19, 28]. This also proves that the deterministic and randomised complexities of an LVL can be exponentially separated. On other graph families, the complexity gap can be even larger. For example, on paths and cycles LVLs have complexity either O(log* n) or Θ(n) [19], and on grids and toruses on n nodes, either O(log* n) or Ω(√n) [17]. Throughout this paper, we will call a distributed algorithm with complexity O(log* n) efficient.

Our work is motivated by our observation that graphical games and local algorithms are fundamentally connected.
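As a brief aside, the iterated logarithm defined above can be computed directly; a minimal sketch (base-2 logarithm assumed, which does not affect the asymptotic statements):

```python
import math

def log_star(x):
    """Iterated logarithm: log* x = 0 if x <= 1, else 1 + log*(log x)."""
    rounds = 0
    while x > 1:
        x = math.log2(x)
        rounds += 1
    return rounds
```

Even for an astronomically large input such as 2^65536, this loop terminates after only five iterations, which is why O(log* n)-round algorithms are regarded as essentially constant-time in practice.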
Inparticular, we observe that all Nash equilibria (and in fact all equilibria that are based on local information only) areLVLs. If we assume that agents playing a game converge to an equilibrium they are implicitly solving the correspondingcomputational task.The LOCAL model has two particular properties. First, it is possible to prove unconditional impossibility results inthe LOCAL model. Existing results cover many LVLs that are potentially interesting from the perspective of gametheoretical applications (see e.g. [7, 9, 16, 17, 19, 21, 38]). Second, it is a strong distributed model in the sense thatalgorithms in the LOCAL model are only limited by information propagation, and not e.g. by failures or bandwidthlimitations. This means that any impossibility results proven in the LOCAL model apply to a wide range of more realisticmodels. In particular, they apply to any model of games where the play of the agents is constrained by the availableinformation.An interesting property of LVLs in the LOCAL model is the complexity gap: they can either be computed efficientlyin 𝑂 ( log ∗ 𝑛 ) rounds, or require Ω ( log 𝑛 ) rounds in the deterministic LOCAL model and Ω ( log log 𝑛 ) rounds in therandomized LOCAL model [19]. If we can show that the Nash equilibria of a game are not efficiently solvable then thisimplies that in that game cannot best responses cannot converge fast.To formalize this connection, we must transfer existing impossibility results for computational tasks in the distributedsetting to a model of games. To do this, we first establish an equivalence between the Nash equilibria of graphical gamesand locally verifiable labellings. Then, to transfer impossibility results to our model of fair sequential best responses,we show that the LOCAL model can simulate best responses. This implies that if all Nash equilibria of a game are hardto compute as LVLs, then the best-response dynamics cannot converge to those equilibria fast.Theorem 3.1. 
Let G = (A, u, N) be a graphical game. The Nash equilibria of G uniquely define a locally verifiable labelling P.

Proof. This follows from the locality of the utility functions of graphical games. Let G = (A, u, N) be a game on a network, and let a be some strategy profile of G. Observe that a is a Nash equilibrium if and only if for each agent v ∈ V(N) their current strategy a_v maximizes their utility over all choices, and this only depends on the strategies a_u for all neighbors u ∈ B(v). That is, we can define the set of Nash equilibria of a game G as an LVL P(G) as follows: let S consist of every radius-1 subgraph S of N. Let C consist of every copy of each S ∈ S labelled with the actions of G such that the strategy of the center node is a best response to the strategies of its neighbors with respect to u. The alphabet Σ consists of all possible actions of A. □

It should be stressed that the correspondence does not use any properties specific to pure Nash equilibria. For example, mixed Nash equilibria of graphical games also define LVLs.

Next we show how best responses can be simulated in the LOCAL model. Let G = (A, u, N) be a graphical game. We construct a corresponding instance in the LOCAL model by taking the network N. Then, if we consider the deterministic LOCAL model, an adversary assigns O(log n)-bit names to the nodes. In the randomized LOCAL model, each node gets a uniformly random infinite string of bits as input instead. We show that if the best responses converge in T(n) rounds, then this can be turned into a distributed algorithm that computes the corresponding Nash equilibrium in O(log* n + T(n)) rounds.

Theorem 3.2 (best responses correspondence). Fix a function T.
Consider a graphical game G.
(1) If the best-response dynamics converge on G in T(n) rounds from a constant initial strategy profile, then there exists a deterministic distributed algorithm in the LOCAL model that solves the LVL corresponding to the Nash equilibria of G in O(log* n + T(n)) rounds.
(2) If the best-response dynamics converge with high probability on G in T(n) rounds from a random initial strategy profile, then there exists a randomized distributed algorithm in the LOCAL model that solves the LVL corresponding to the Nash equilibria of G in O(log* n + T(n)) rounds with high probability.

The proof of Theorem 3.2 is given in Appendix A. The theorem implies that the convergence time of best responses is bounded by impossibility results from distributed computing.

Corollary 3.3.
Assume that the Nash equilibria of a graphical game G, as LVLs, have deterministic complexity Ω(T(n)) and randomized complexity Ω(T′(n)), for any T(n), T′(n) = Ω(log* n). Then best-response dynamics for G require Ω(T(n)) and Ω(T′(n)) rounds to converge from a constant and a random initial strategy profile, respectively.

Proof. This follows from Theorem 3.2: if best-response dynamics converge faster, then this can be turned into a fast distributed algorithm, a contradiction. □

The complexity gap of LVLs in the LOCAL model, in the context of our result, implies that if none of the Nash equilibria of a game are efficiently computable, then best responses converge significantly more slowly.

Corollary 3.4.
Let G be a game such that none of its Nash equilibria can be solved in time O(log* n) as LVLs. Then the best-response dynamics require Ω(log n) rounds to converge from a constant initial strategy profile, and Ω(log log n) rounds to converge from a random initial strategy profile.

In the next sections we look at examples of graphical games, their Nash equilibria, and the corresponding LVL problems. We illustrate the correspondence in Figure 1.
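Before turning to the examples, the correspondence of Theorem 3.1 can be made concrete with a small sketch (our own illustration; all function names and the cost value are our assumptions): the radius-1 verifier induced by a game accepts a labelling if and only if every node's strategy is a best response within its neighborhood.

```python
def accepts_locally(actions, utility, profile):
    """Radius-1 LVL verifier induced by a graphical game (cf. Theorem 3.1):
    accept a labelling iff every node's label is a best response to the
    labels in its neighborhood; this is a constant-round local check."""
    for v in profile:
        payoff = lambda a: utility(v, {**profile, v: a})
        if payoff(profile[v]) < max(payoff(a) for a in actions[v]):
            return False  # node v could deviate profitably: reject
    return True

# Example: a best-shot-style provision game on the path 0-1-2 with cost 0.5:
# producing ('P') pays 0.5, free-riding ('F') pays 1 next to a producer, else 0.
nbr = {0: [1], 1: [0, 2], 2: [1]}
u = lambda v, p: 0.5 if p[v] == 'P' else (1.0 if any(p[w] == 'P' for w in nbr[v]) else 0.0)
acts = {v: ('P', 'F') for v in nbr}
```

On this instance, the profile where only the middle node produces is accepted, while profiles with two adjacent producers or no producer at all are rejected, since some node could deviate profitably.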
This game deals with the provision of public goods such as finding a cure for a disease or filling an important supply, assuming that only the maximal contribution counts towards the provision level instead of the sum of all players' contributions [14, 22].

Fig. 1. We illustrate the correspondence between Nash equilibria of graphical games and LVLs for the games described in Sections 3.1 and 3.2. a, In the best-shot public goods game, nodes can choose to produce a good (P) or to forego producing (F). The Nash equilibrium of this game is a solution to the maximal independent set problem. A node in this state cannot do any better given that the others play their equilibrium strategy, and they can check this simply by looking at their neighbors. This also means that they can check whether their computed solution to the maximal independent set is correct. b, The minority game gives players a choice between two actions, here −1 and +1. Their goal is to choose the opposite of the majority of their neighbors. The Nash equilibrium is a locally optimal cut: players have at least as many neighbors playing the opposite strategy as the same strategy. This is also an LVL: every agent can again locally check that they are in equilibrium, i.e., the correctness of the computed solution.

In contrast to public goods games often studied as social dilemmas, i.e., where the social optimum is reached by all players cooperating (producing) despite the incentive to do nothing, the problem here is not only one of free-riding, but also one of coordination. Player groups have to figure out which of them should optimally be the one to provide the good, in order to avoid redundant costs. Here we consider the simplest version of the game where players only have two choices: to provide a good, or not to provide it. More formally, each agent has two possible actions, i.e., A_i = {P, F}. Players' utilities u_i are as follows: if a focal agent plays F and one of their neighbors plays P, the utility is u_i = 1. If the agent plays P, u_i = 1 − c (for a cost 0 < c < 1), but if they and all their neighbors play F, the utility is 0. Simply put, providing the public good is costly, and it is preferable for a player to have a neighbor do so; however, they are still better off providing it themselves than if nobody in their neighborhood does so.

The correspondence of this game with distributed graph problems is a prominent one: as shown in [14], the Nash equilibria of the best-shot public goods game correspond to maximal independent sets of agents playing P. The set is independent (i.e., no two agents with P are adjacent), as two adjacent agents with the strategy P would have an incentive to choose F. On the other hand, the set is maximal (i.e., each agent plays P or has a neighbor that plays P), as otherwise such an agent would have an incentive to play P.

Maximal independent set is an efficiently solvable LVL: it can be computed in O(log* n) rounds [41], and this is known to be the best possible complexity [38]. Correspondingly, best responses converge in two fair rounds for the best-shot public goods game. We can compare this with the complexity analysis of best-shot public goods games in [34], and point out that our approach considers a natural concept of distributed complexity, whereas previous work usually takes a different, more centralized view.

Theorem 3.5.
Fair best responses in the best-shot public goods game converge in two rounds from any initial configuration.
Proof. Assume that the system starts with some arbitrary strategy profile. We claim that after the first round the set of agents playing P is independent, and after the second round it is maximal. Assume that after the first round neighboring agents u and v play P. Then the one that played last would have seen that the other plays P, and their best response is F. Therefore the set of agents playing P is independent. In the second round, no agent will switch from P to F, as all of their neighbors play F. If an agent plays F and all their neighbors play F, it will switch to P. After two rounds the agents playing P form a maximal independent set, which is a Nash equilibrium. □

In this elementary anti-coordination game (also called social game) [13, 18], players attempt to do the opposite of what their neighbors are doing. That is, they attempt to anti-coordinate with what the majority of their surrounding co-players do. For example, this could describe a situation where agents try to choose a restaurant to go to that is not overly crowded. We formalize this with a game where players again have two possible actions, i.e., A_i = {−1, +1}. A focal player's utility u_i is defined as

    u_i = 1 + | #{ j ∈ N(i) : a_j ≠ a_i } − #{ j ∈ N(i) : a_j = a_i } |,

where N(i) denotes the neighbors of i, i.e., the absolute value of the difference between the number of neighbors with a different label and the same label, plus 1.

The Nash equilibria correspond to strategy profiles where each agent has at least as many neighbors playing the opposite strategy as the same strategy. In this game, we again have a correspondence with a prominent graph problem: in distributed computing, the corresponding LVL is known as locally optimal cut. It is known to be a hard problem [9]: on 3-regular graphs it requires Ω(log n) deterministic time and Ω(log log n) randomized time.

Theorem 3.6. The convergence time of best responses for the minority game is Ω(log n) from a constant initial state and Ω(log log n) from a random initial state.
Proof. Balliu et al. [9] have shown that finding a locally optimal cut requires Ω(log n) deterministic time and Ω(log log n) randomized time in the LOCAL model. This, together with Corollary 3.3, implies the theorem. □

The impossibility result of Balliu et al. holds even if the algorithm is promised that the network is a 3-regular tree or a 3-regular graph of high girth. Therefore the result also applies to best responses in these graph families.
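Both example games can be exercised end to end with a small simulation (our own sketch; the cost c = 0.5, the cycle topology, and the randomized stand-in for the adversarial schedule are assumptions for illustration). The code runs fair best responses for the best-shot game and checks the locally-optimal-cut characterization for the minority game:

```python
import random

def run_fair_rounds(neighbors, actions, utility, profile, max_rounds=50, seed=1):
    """One sequential pass per fair round; stop when nobody deviates.
    Returns (profile, index of the first round with no change)."""
    rng = random.Random(seed)
    nodes = list(profile)
    for rnd in range(max_rounds):
        rng.shuffle(nodes)  # stand-in for one adversarial order per round
        changed = False
        for v in nodes:     # each agent acts exactly once, one at a time
            best = max(actions[v], key=lambda a: utility(v, {**profile, v: a}))
            if best != profile[v]:
                profile[v], changed = best, True
        if not changed:
            return profile, rnd
    return profile, None

n = 8
nbr = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # a cycle on 8 nodes

# Best-shot public goods game with c = 0.5: u(P) = 1 - c; u(F) = 1 with a
# producing neighbor and 0 otherwise.
u_goods = lambda v, p: 0.5 if p[v] == 'P' else (1.0 if any(p[w] == 'P' for w in nbr[v]) else 0.0)
final, rounds = run_fair_rounds(nbr, {v: ('P', 'F') for v in range(n)},
                                u_goods, {v: 'F' for v in range(n)})
producers = {v for v in final if final[v] == 'P'}

# Minority game: a profile is a Nash equilibrium iff it is a locally optimal
# cut, i.e., every node has at least as many disagreeing as agreeing neighbors.
def locally_optimal_cut(prof):
    return all(sum(prof[w] != prof[v] for w in nbr[v])
               >= sum(prof[w] == prof[v] for w in nbr[v]) for v in prof)
```

On this instance the producers form a maximal independent set within two fair rounds, matching Theorem 3.5, while for the minority game the alternating ±1 labelling is an equilibrium and the constant labelling is not.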
In this section we study the efficiency of best responses with respect to computational constraints. Traditionally, for example in the context of
Price of Anarchy [36], one compares the total welfare under the best strategy profile to the total welfare under the worst Nash equilibrium. However, it might be that the best solution is hard to compute in a distributed fashion. In fact, it might be that the worst Nash equilibrium is also hard to compute. We will define a notion of computational inefficiency of best responses with respect to a time bound T. We show that there exist games such that we can bound the inefficiency of best responses away from the Price of Anarchy. This illustrates that the Price of Anarchy does not always fairly reflect the quality of solutions computed by best responses when time constraints are taken into account.

As we showed in Theorem 3.6, best responses provably might not converge efficiently to an equilibrium. The technique we present in this section allows us to study the evolution of total welfare produced by best responses even before convergence. We do this by bounding the total welfare produced by any fast distributed algorithm.

To measure the performance of best responses, we compare the total welfare produced by T fair rounds of best responses to the best solution that can be computed in T rounds in the randomized LOCAL model. We assume that the system starts from a random initial strategy profile. For a fixed game G = (A, u, N), let OPT(N, T) denote the best solution, in terms of total welfare, that can be computed on N in the LOCAL model in T rounds. Let BR(N, T) denote the random variable that represents the solution computed on N by T rounds of best responses starting from a random initial strategy.
We consider random initial strategy profiles, as a constant (or worst-case) strategy profile can guide best responses to perform poorly, depending on the game. For a game G = (A, u, N) and a time bound T, we define the T-inefficiency of best responses as

    IoBR(G, T) = u(OPT(N, T)) / E[u(BR(N, T))].

To estimate the quantity IoBR(G, T) we will bound OPT(N, T) from above using computational arguments, and bound BR(N, T) from below using both arguments about the behavior of best-response dynamics and computational arguments. Note that when we consider the best strategy profile that is computable in T communication rounds, we consider distributed algorithms for optimization problems. That is, the algorithms might not compute a solution corresponding to some Nash equilibrium, but more generally any strategy assignment that tries to optimize the total welfare of all agents. In the next two sections we show how the inefficiency of best responses can be estimated using tools from distributed computing.

We begin by analysing the inefficiency of best responses in the best-shot public goods game. We show that the Price of Anarchy can be bounded away from the inefficiency of best responses. We prove that there exists an infinite family of best-shot public goods game instances such that even though good and bad solutions exist, no distributed algorithm can compute them efficiently. Therefore best responses cannot produce these solutions either. To construct these instances, we argue that there exist graphs which have good and bad solutions, and graphs that look locally the same as the first class of graphs but do not have any good or bad solutions. We can then argue, using standard indistinguishability arguments from distributed computing, that fast algorithms perform poorly. The following theorem states the outcome of our analysis for the best-shot public goods game.

Theorem 4.1.
Fix a function 𝑇 = 𝑜(log_𝑑 𝑛). For every 𝑑 ≥ 3 and sufficiently large 𝑛₀, there exists an instance 𝐺 = (A, 𝑢, 𝑁) of the best-shot public goods game such that 𝑁 is a 𝑑-regular network of size 𝑛 ≥ 𝑛₀ and the following hold.
(1) PoA(𝐺) = (1 − 𝑐/(𝑑 + 1)) / (1 − 𝑐/2).
(2) IoBR(𝐺, 𝑇) ≤ (𝑑 − 𝑐 ln 𝑑) / (𝑑 − 𝑐(2 + 𝜀) ln 𝑑) for any 𝜀 > 0.

According to Theorem 3.5, best responses converge in two fair rounds. Therefore, to analyze the 𝑇-inefficiency of best responses, we can bound the total welfare of the worst Nash equilibrium that can be computed in 𝑇 communication rounds. In the best-shot public goods game the total welfare is maximized by minimizing the number of producing agents while ensuring that each non-producing agent is adjacent to a producing agent. Such sets are known as dominating sets. Not all dominating sets are maximal independent sets, but all maximal independent sets are dominating sets. Thus we can bound the size of any maximal independent set from below by the size of the minimum dominating set.

The following lemma bounds the best solutions that distributed algorithms can compute on certain networks that have good solutions.

Lemma 4.2.
There is no randomized algorithm in the LOCAL model that finds an independent set of size > ((2 + 𝜀) ln 𝑑/𝑑)𝑛 or a dominating set of size < ((1 + 𝜀) ln 𝑑/𝑑)𝑛 in expectation in 𝑜(log_𝑑 𝑛) rounds on the networks from Lemma B.2.

Lemma B.2 is stated in Appendix B.2. The proof of Lemma 4.2 is a standard indistinguishability argument from distributed computing. There exist regular high-girth networks with good and bad solutions (Lemma B.2), and regular high-girth networks with no good or bad solutions (Lemma B.1). Since the networks are locally indistinguishable (i.e., from the perspective of any node, they look the same up to any distance 𝑇(𝑛) = 𝑜(log_𝑑 𝑛)), any distributed algorithm must behave the same way in both networks. Since the size of the solutions in the latter networks is bounded, solutions that are larger or smaller, respectively, cannot be found in the networks that do have such solutions. The proof is given in Appendix B. Using Lemma 4.2 we can prove Theorem 4.1.

Proof of Theorem 4.1. We bound the Price of Anarchy, the best 𝑇-time computable solution, and the performance of best responses on the networks given by Lemma B.2.
(1) Best solution and worst equilibrium. The best solution (also a Nash equilibrium) on 𝑁 contains a 1/(𝑑 + 1)-fraction of the nodes in the producing set, giving a total welfare equal to (1 − 𝑐/(𝑑 + 1))𝑛. The bipartition gives the worst Nash equilibrium: half of the nodes are in the independent set, giving a total welfare of (1 − 𝑐/2)𝑛. The Price of Anarchy is thus exactly (1 − 𝑐/(𝑑 + 1))/(1 − 𝑐/2).
(2) Best 𝑇(𝑛)-time computable solution. The best solution that can be computed in 𝑇(𝑛) = 𝑜(log_𝑑 𝑛) rounds on 𝑁, by Lemma 4.2, has at least ((1 + 𝜀) ln 𝑑/𝑑)𝑛 nodes in it (as it corresponds to a dominating set). This bounds its total welfare by (1 − 𝑐(1 + 𝜀) ln 𝑑/𝑑)𝑛.
(3) Total welfare of best responses.
Since best responses converge to an equilibrium and a distributed algorithm can simulate best responses (Lemma A.1), the worst solution that best responses could compute, in expectation, is bounded by the largest maximal independent set a distributed algorithm can compute in 𝑂(𝑇(𝑛)) rounds. Since maximal independent sets are in particular independent sets, by Lemma 4.2 the worst solution best responses can compute in expectation has at most ((2 + 𝜀) ln 𝑑/𝑑)𝑛 nodes in the producing set. The total welfare is at least (1 − 𝑐(2 + 𝜀) ln 𝑑/𝑑)𝑛.

From the last two bounds we get that on 𝑁, the 𝑇-inefficiency of best responses is at most (𝑑 − 𝑐 ln 𝑑)/(𝑑 − 𝑐(2 + 𝜀) ln 𝑑) for any 𝑇 = 𝑜(log_𝑑 𝑛). □

Similarly to the previous section, we can show that there exist instances of the minority game on which best responses perform better than the Price of Anarchy would indicate. The proof again uses an indistinguishability argument to show that on certain networks fast distributed algorithms cannot find good solutions even though they do exist. In addition, we analyse best responses in the minority game and note that they only improve the total welfare. We prove the following theorem.

Theorem 4.3.
Fix a function 𝑇 = 𝑜(log_𝑑 𝑛). For every even 𝑑 ≥ 4 and large enough 𝑛₀, there exists a 𝑑-regular instance of the minority game on 𝑛 ≥ 𝑛₀ nodes such that the following hold.
(1) PoA(𝐺) = 2.
(2) IoBR(𝐺, 𝑇) ≤ 1 + 2/√(𝑑 − 1).

To prove Theorem 4.3 we construct two networks such that both look locally the same but one has a large cut and the other does not. No distributed algorithm can find a large cut in the first network. This will imply that the 𝑇-inefficiency of best responses is bounded away from the Price of Anarchy.

Lemma 4.4. There is no randomized LOCAL algorithm that finds a cut with more than (1/2 + 1/√(𝑑 − 1))|𝐸| edges in expectation in 𝑇(𝑛) = 𝑜(log_𝑑 𝑛) rounds on the bipartite networks from Lemma B.3.

Lemma B.3 is presented in Appendix B. The proof is similar to the proof of Lemma 4.2. Since the networks 𝑁 from Lemma B.3 and 𝑁′ from Lemma B.4 (also in Appendix B) look locally the same to any distributed algorithm with running time 𝑇 = 𝑜(log_𝑑 𝑛), we can argue that the expected size of the cut computed on any 𝑁 is at most as large as the optimum solution on 𝑁′. To estimate the worst-case behavior of best responses, we note that in the minority game best responses are monotone, i.e. they never decrease the total utility.

Lemma 4.5. The strategy profile computed by best responses from a random initial strategy profile has at least |𝐸|/2 cut edges in expectation.

Proof. First, note that a random strategy profile cuts exactly half of the edges in expectation: each edge has probability exactly 1/2 of being a cut edge. Now consider a best-response move by some agent 𝑣. Since 𝑣 is switching, it has at least one more cut edge in the new strategy profile. Since this change only affects the edges incident to 𝑣, the total number of cut edges also increases by at least one. □

We are now ready to prove Theorem 4.3, estimating total welfare in the minority game.

Proof of Theorem 4.3. We again bound the three quantities.
(1) Price of Anarchy.
On the network 𝑁, the best solution cuts all edges. The total welfare is 2|𝐸| = 𝑑𝑛. On the other hand, every Nash equilibrium cuts at least half of the edges: every agent has at least half of their neighbors on the other side of the cut. For even 𝑑 the total welfare in an equilibrium is therefore at least 𝑑𝑛/2. The Price of Anarchy is 2 for even 𝑑.
(2) Best 𝑇(𝑛)-time computable solution. By Lemma 4.4, the largest cut that can be computed in 𝑜(log_𝑑 𝑛) rounds on network 𝑁 has at most (1/2 + 1/√(𝑑 − 1))|𝐸| edges in it. The total welfare is at most (𝑑/2 + 𝑑/√(𝑑 − 1))𝑛 in expectation.
(3) Performance of best responses. By Lemma 4.5, best responses compute a cut of size at least |𝐸|/2 in expectation, for a total welfare of at least 𝑑𝑛/2.
From the last two bounds, the 𝑇(𝑛)-inefficiency of best responses is at most 1 + 2/√(𝑑 − 1) for any 𝑇(𝑛) = 𝑜(log_𝑑 𝑛). □

In this section we show that every graphical game 𝐺 with an efficiently solvable Nash equilibrium has a special property. It is possible to construct a related game 𝐺′ that we call a simulation game: in 𝐺′ the best responses converge in one fair round and the equilibrium is equivalent to an equilibrium of 𝐺. The game is constructed so that the best responses simulate a distributed algorithm for computing a Nash equilibrium of the original game. As the game 𝐺′ simulates a distributed algorithm, the simulation inherits other properties of the algorithm as well. In particular, if there is an algorithm that computes a Nash equilibrium from some subset of equilibria with desirable properties, then best responses also converge to a Nash equilibrium from the same subset in the simulation game. The simulation game can be constructed locally by a distributed algorithm. No similar game construction exists for graphical games that do not have efficiently solvable Nash equilibria.
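As a brief aside before constructing simulation games, the monotonicity of best responses in the minority game (Lemma 4.5 above) is easy to observe empirically. The following sketch is our own toy model, assuming an agent's utility is the number of neighbors choosing the opposite action (so that total welfare is twice the cut size); it runs sequential best responses and records the cut after every single move:

```python
import random

def minority_br(adj, a, v):
    # Switch iff it strictly increases the number of opposite-action neighbors.
    same = sum(1 for u in adj[v] if a[u] == a[v])
    return 1 - a[v] if 2 * same > len(adj[v]) else a[v]

def cut_size(adj, a):
    return sum(1 for v in adj for u in adj[v] if u > v and a[u] != a[v])

def fair_rounds(adj, a, rounds, seed=0):
    # Sequential best responses; record the cut size after every move.
    rng = random.Random(seed)
    history = [cut_size(adj, a)]
    for _ in range(rounds):
        order = list(adj)
        rng.shuffle(order)
        for v in order:
            a[v] = minority_br(adj, a, v)
            history.append(cut_size(adj, a))
    return history

# On an 8-cycle the cut, and hence the total welfare, never decreases.
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
a = {i: 0 for i in range(8)}
history = fair_rounds(adj, a, rounds=5)
```

Every switching move flips an agent with a strict same-action majority, so each entry of the recorded history is at least as large as the previous one, matching the argument in the proof of Lemma 4.5.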
To define simulation games, we need to consider algorithms in a specific normal form, the existence of which is implied by the speedup result of Chang et al. [19]. Lemma 5.3 states that 𝑂(log* 𝑛)-time algorithms can be decomposed into two phases. In the first phase the algorithm computes a distance-(2𝑡 + 1) coloring for some constant parameter 𝑡 that depends on the problem. Then a 𝑡-round algorithm is applied with the coloring as an input. We will construct games where the best responses construct these colorings and then choose the output of the algorithm on that particular coloring.

Now consider a game 𝐺 = (A, 𝑢, 𝑁) that has an efficiently computable Nash equilibrium. Let F be a distributed algorithm in normal form that computes some Nash equilibrium (that is, LVL 𝑃) of 𝐺 with the smallest possible constant running time 𝑡. Define a 𝑡-simulation game 𝐺′ = (A′, 𝑢′, 𝑁′) of 𝐺 as follows.
(1) The set of agents is the same as in 𝐺. In the network 𝑁′, connect two nodes 𝑢 and 𝑣 if and only if their distance in 𝑁 is at most 4𝑡 + 2.
(2) The actions 𝐴′_𝑣 of each agent 𝑣 encode the possible locally correct simulations of F. The action set is defined in two parts, 𝐴′_𝑣 = 𝑅_𝑣 × Σ. The first part 𝑅_𝑣 consists of all possible labellings of the 𝑡-neighborhood of 𝑣 in 𝑁 with distinct colors from {1, 2, . . . , Δ^(2𝑡+1) + 1}. The second part Σ consists of the possible output labels of 𝑃. Include the pair (𝑟, 𝜎) in 𝐴′_𝑣 if and only if F would output 𝜎 on 𝑣 given 𝑟 as the input coloring of 𝐵(𝑣, 𝑡), where 𝐵(𝑣, 𝑡) denotes the 𝑡-hop neighborhood of 𝑣 in 𝑁. In addition there is the empty action.
(3) The utility functions 𝑢′_𝑣 encode the correct simulations. The utility 𝑢′_𝑣(𝑠) = 1 if and only if the following two conditions hold. First, the coloring 𝑟_𝑣 ∈ 𝑅_𝑣 is compatible with the coloring 𝑟_𝑢 ∈ 𝑅_𝑢 of each neighbor 𝑢 with a non-empty strategy. That is, for each 𝑤 ∈ 𝐵_𝑁(𝑣, 𝑡) ∩ 𝐵_𝑁(𝑢, 𝑡), we have that 𝑟_𝑣(𝑤) = 𝑟_𝑢(𝑤). Second, the colorings form a proper distance-(2𝑡 + 1) coloring of 𝑁.
That is, if we map all the compatible colorings 𝑟_𝑣 for all 𝑣 onto 𝑁, then any two nodes in 𝑁 at distance at most 2𝑡 + 1 must receive different colors. Since agents are connected in 𝑁′ if they are within distance 4𝑡 + 2 in 𝑁, it is possible to encode this in 𝑢′: the color assigned to any node at distance at most 𝑡 from some agent 𝑣 must differ from the colors of all other nodes within distance 2𝑡 + 1 of it, and all such constraints are visible to some agent.

Consider two games 𝐺 = (A, 𝑢, 𝑁) and 𝐺′ = (A′, 𝑢′, 𝑁′). We say that 𝐺′ is 𝑘-constructible given 𝐺 if there exists a 𝑘-round distributed algorithm that on 𝑁, given 𝐺 as input, computes 𝐺′ in the following sense. For each 𝑣 ∈ 𝑁′, the algorithm can output the set of neighbors of 𝑣 in 𝑁′. In addition, for each 𝑣 it outputs the action set 𝐴′_𝑣 of 𝑣 and the utility function 𝑢′_𝑣 of 𝑣. We say that 𝐺′ corresponds to 𝐺 if there exists a mapping 𝜑_𝑣 : 𝐴′_𝑣 → 𝐴_𝑣 for each 𝑣 such that if 𝑎′ = (𝑎′_1, . . . , 𝑎′_𝑛) is a Nash equilibrium of 𝐺′, then 𝑎 = (𝜑_1(𝑎′_1), . . . , 𝜑_𝑛(𝑎′_𝑛)) is a Nash equilibrium of 𝐺.

Theorem 5.1. If a game 𝐺 has a Nash equilibrium that is solvable in time 𝑂(log* 𝑛) as an LVL, then there is a 𝑡-simulation game 𝐺′ = (A′, 𝑢′, 𝑁′) of 𝐺 = (A, 𝑢, 𝑁) with the following properties:
(P1) Best responses converge in one fair round from the empty initial strategy profile in 𝐺′.
(P2) 𝐺′ corresponds to 𝐺.
(P3) 𝐺′ is (4𝑡 + 2)-constructible given 𝐺.

Properties (P1) and (P2) imply that every game with an efficiently solvable Nash equilibrium has a 𝑡-simulation game that converges to an equivalent Nash equilibrium in one fair round. Property (P3) is in contrast with the following theorem, which states that similar constructions do not exist for games that do not have any efficiently solvable Nash equilibria.

Theorem 5.2.
Let 𝐺 = (A, 𝑢, 𝑁) be a game such that the Nash equilibria of 𝐺 as an LVL require Ω(𝑇(𝑛)) time to compute in the deterministic LOCAL model, for any 𝑇(𝑛) = Ω(log* 𝑛). Then there is no 𝑘-constructible corresponding game 𝐺′ given 𝐺, for any constant 𝑘, such that best responses converge in 𝑜(𝑇(𝑛)) rounds.

The proof of Theorem 5.2 follows from an application of our simulation theorem, Theorem 3.2. If such games existed, a distributed algorithm could construct and simulate them, yielding a fast distributed algorithm for computing a Nash equilibrium of the original game. The requirement of constructibility is important: without this property the game 𝐺′ could simply encode the structure of the Nash equilibria of 𝐺 in its actions or its utility functions.

Before proving Theorem 5.1, we need the following technical lemma. It establishes that 𝑂(log* 𝑛)-time solvable LVLs can also be solved by algorithms in the required normal form.

Lemma 5.3. Assume that LVL 𝑃 can be solved in 𝑂(log* 𝑛) rounds by a deterministic distributed algorithm. Then there exists an algorithm F in the following normal form: the algorithm runs in 𝑡 = 𝑂(1) rounds (for some 𝑡 dependent on 𝑃), it takes a distance-(2𝑡 + 1) coloring with 𝑐 = Δ^(2𝑡+1) + 1 colors as an input, and outputs the solution to 𝑃.

The proof follows from the speedup theorem of Chang et al. [19]. A similar normal form construction has been used by Brandt et al. [17]. We are now ready to prove the properties of simulation games.

Proof of Theorem 5.1. We show that the strategies of agents who have already played will constitute a correct partial simulation in 𝑁′. Initially this is trivially true, as all agents are assumed to start from the empty strategy. Now assume that 𝑎 is the strategy profile after some number of best responses such that the non-empty strategies agree on the color of each agent, and assume that some agent 𝑣 is scheduled to play.
Since the utility is 0 if the agent chooses an action that is not compatible as a coloring, it must choose a compatible coloring. Since we assumed that 𝑎 encodes a partial coloring and there are always enough colors to choose from (there are Δ^(2𝑡+1) + 1 available colors), 𝑣 can always choose an action that gives it utility 1.

After one fair round, each agent 𝑣 has chosen a strategy such that the colorings 𝑟_𝑣 agree on all applicable nodes, and this is a Nash equilibrium of 𝐺′. The output label 𝜎_𝑣 of each action, by definition, corresponds to the output of F on 𝑁 given the input coloring that the agents have chosen. Now if we construct the correspondence 𝜑 by mapping each action from 𝐴′_𝑣 to 𝐴_𝑣 by matching the output label, we have that if 𝑎 is a Nash equilibrium of 𝐺′, then 𝜑(𝑎) = (𝜑_1(𝑎_1), . . . , 𝜑_𝑛(𝑎_𝑛)) is the labelling computed by F on 𝑁, and hence by definition a Nash equilibrium of 𝐺. This establishes properties (P1) and (P2).

Finally, it remains to show that 𝐺′ can be constructed efficiently in the LOCAL model on the network 𝑁. This is achieved using a standard approach. First each node 𝑣 gathers its (4𝑡 + 2)-hop neighborhood and outputs its neighbors in 𝑁′. Since the algorithm has access to F, the algorithm in the normal form, it can consider every coloring of 𝐵(𝑣, 𝑡) and form the action set 𝐴′_𝑣. Finally, since the algorithm has gathered the relevant neighborhood of 𝑣 in 𝑁, it can compute the value of the utility function 𝑢′_𝑣 for all possible strategies of the neighbors. □

We now illustrate simulation games in the context of a specific graphical game.
In this section we study a simple abstract coordination game called a coloring game [32]. The agents have to choose from a fixed set of resources and coordinate their actions so that they do not use the same resource as their neighbors. Formally, in a 𝑘-coloring game the actions of each agent consist of 𝑘 colors {1, 2, . . . , 𝑘}. The utility of an agent is 1 if no neighbor chooses the same color and 0 otherwise. The Nash equilibria of the coloring game correspond to (partial) colorings such that two neighbors choose the same color only if both have at least one neighbor of each other color.

We will show that it is possible that best responses fail to converge to a Nash equilibrium that corresponds to a coloring even if such a coloring exists and is efficiently computable. This is naturally undesirable behavior, as the agents fail to solve the underlying coordination task. On the other hand, Theorem 5.1 implies that in this case there exists a simulation game of the coloring game such that the best responses do converge to a Nash equilibrium that corresponds to a coloring.

We consider the coloring game on two-dimensional 𝑛-by-𝑛 torus networks. That is, the set of nodes consists of 𝑣_{𝑖,𝑗} for 𝑖, 𝑗 ∈ {0, 1, . . . , 𝑛 − 1}, there is an edge between nodes 𝑣_{𝑖,𝑗} and 𝑣_{𝑖,𝑘} if 𝑘 = (𝑗 + 1) mod 𝑛, and there is an edge between nodes 𝑣_{𝑖,𝑗} and 𝑣_{𝑘,𝑗} if 𝑘 = (𝑖 + 1) mod 𝑛. The complexity of 𝑘-coloring is completely understood in this setting.

Theorem 5.4 (Brandt et al. [17]). The complexities of 𝑘-coloring a two-dimensional torus are as follows.
(1) 𝑘 = 2, 3: Θ(𝑛), and
(2) 𝑘 = 4, 5, . . .: Θ(log* 𝑛).

Both the 2-coloring problem and the 3-coloring problem are global (i.e. require Ω(𝑛) rounds to compute). By Theorem 3.2 this implies that best responses also require at least Ω(𝑛) rounds to converge. In the case of 𝑘 ≥
5, best responses always converge to a proper coloring. This is because each agent has degree 4, so no matter what colors the neighbors choose, each agent can always choose a free color, which is their best response. The interesting case is 𝑘 =
4. Now the property that an agent always has a free color no longer holds: it can be that the four neighbors have four different colors.

Fig. 2. Proof of Theorem 5.5. We show an initial coloring configuration on the grid that makes it impossible for best responses to converge to the proper 4-coloring. Every neighbor of the central node already has neighbors of all possible colors. This removes the incentive to switch.
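The obstruction in Figure 2 can be reproduced programmatically. The following sketch is our own reconstruction of such a stuck partial profile on a 5-by-5 torus (the coordinates are our choice and need not match the figure exactly); it verifies that the center agent gets utility 0 under every color and that each of its neighbors already sees all four colors:

```python
n = 5  # a 5-by-5 torus is large enough to host the configuration

def nbrs(i, j):
    return [((i - 1) % n, j), ((i + 1) % n, j), (i, (j - 1) % n), (i, (j + 1) % n)]

col = {(i, j): 1 for i in range(n) for j in range(n)}  # filler color elsewhere
col.update({
    (2, 2): 1,                                    # center
    (1, 2): 1, (2, 3): 2, (3, 2): 3, (2, 1): 4,   # its four neighbors
    (0, 2): 2, (1, 1): 3, (1, 3): 4,              # second ring, chosen so that
    (3, 3): 2, (2, 4): 3, (3, 1): 4,              # every neighbor of the center
    (4, 2): 3, (2, 0): 2,                         # sees all four colors
})

center = (2, 2)
# The center conflicts with some neighbor no matter which color it picks.
assert {col[u] for u in nbrs(*center)} == {1, 2, 3, 4}
# Each neighbor of the center also sees all four colors among its own
# neighbors, so switching can never raise its utility above 0.
for u in nbrs(*center):
    assert {col[w] for w in nbrs(*u)} == {1, 2, 3, 4}
```

Both assertions pass: the center and its neighbors are all indifferent among their actions, which is exactly the situation used in the proof of Theorem 5.5 below the figure.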
Theorem 5.5.
There exists an initial strategy profile for every sufficiently large 𝑛-by-𝑛 torus such that best responses do not converge to a 4-coloring in the 4-coloring game.

Proof. Consider the initial configuration shown in Figure 2. The center node has no choice that would give a proper coloring: no matter which color it chooses, it conflicts with the neighbor that has that color. Moreover, each such neighbor already has neighbors of all the other colors, so it has no incentive to switch, and best responses converge to a suboptimal equilibrium. □

In contrast, by Theorem 5.4 there exists an algorithm for 4-coloring the torus in 𝑂(log* 𝑛) rounds. By Theorem 5.1 this implies that there exists a simulation game where best responses converge to a 4-coloring.

This paper introduced a novel approach to classifying the complexity of Nash equilibria in graphical games, and to understanding the convergence behavior of best-response and even more general local dynamics. By establishing a connection to the analysis of distributed graph problems, we showed that the Nash equilibria of graphical games correspond to locally verifiable labelings: solutions to graph problems which are verifiable with constant-round algorithms. Impossibility and complexity results provably transfer from the distributed setting to the game setting. Thus, we can leverage this to give lower bounds for the convergence of best-response dynamics, to quantify the time-constrained inefficiency of best responses when convergence is slow or even absent, and to show how these results can be used for implementing mechanisms where best responses converge to a Nash equilibrium that is a solution of the corresponding graph problem. We exemplified our results with some simple and well-known graphical games.
Our findings also relate to the open question of strategy-proof algorithms for reaching equilibria in graphical games, as posed by Kearns in his 2007 survey. We note that, in contrast to algorithms like those in [31], our discussion indeed pertains to how agents reach equilibria while playing the game in a way that is rational with respect to their locally restricted knowledge. This poses an interesting avenue for further research.

In this work, we have only considered pure Nash equilibria. However, we note that our approach is much more general than that, since it first and foremost depends on information being locally restricted, without making additional assumptions such as the game being a potential game. One can prove results similar to those in this work for mixed Nash equilibria, but even for different equilibrium concepts altogether, such as correlated equilibria. The latter correspond to a model where information is local, but can in some sense be exchanged between neighboring nodes, introducing correlated strategy distributions. A further simple extension of our model could see different local strategy update dynamics (e.g. fictitious play) at work, instead of restricting the analysis to best-response dynamics only. This also highlights the connection of this research direction with evolutionary graph theory, which as a generalized approach to evolutionary dynamics features players with similarly bounded rationality [37]. Furthermore, it is also easily conceivable to analyze a far wider range of graphical games or LVLs with our approach, and to even extend the analysis to infinite graphs.

Naturally, there are limitations to our approach. As we mention in Section 2, we assume that graphs are of bounded constant degree. Graphs of low diameter do not give impossibility results based on information propagation, as information can spread quickly.
The results for such a setting would look quite different; furthermore, computation in a setting that comes closer to a centralized one, where nodes do not have as strongly limited information, is not well understood. We see this issue in the fact that the complexity gap for LVLs is a function of the maximum degree. Another issue lies with the fact that we only show lower bounds for convergence. This means that for instances that are not efficiently solvable, the true convergence time could be far slower than what our results give.

Nevertheless, what we have considered in this paper should only represent a first taste of how powerful the connection between game theory and distributed computing is, and we note at this point that interpreting agents interacting in games on networks as a distributed system is also highly intuitive. There are many possibilities to harness this connection, and we have only explored a very limited number of them. We hope that this work can serve as a proof of concept, and be the starting point of exciting further research.

ACKNOWLEDGMENTS
This work was supported by the European Research Council (ERC) projects CoG 864228 (AdjustNet) and CoG 863818(ForM-SMArt), and the Academy of Finland, grant 314888.
REFERENCES [1] Guillermo Abramson and Marcelo Kuperman. 2001. Social games in a social network.
Physical Review E
63, 3 (2001), 030901.[2] Noga Alon. 2010.
On Constant Time Approximation of Parameters of Bounded Degree Graphs . Springer-Verlag, Berlin, Heidelberg, 234–239.[3] Noga Alon and Nicholas Wormald. 2010.
High Degree Graphs Contain Large-Star Factors . Springer Berlin Heidelberg, Berlin, Heidelberg, 9–21.https://doi.org/10.1007/978-3-642-13580-4_1[4] James Aspnes, Kevin Chang, and Aleksandr Yampolskiy. 2006. Inoculation strategies for victims of viruses and the sum-of-squares partition problem.
J. Comput. System Sci.
72, 6 (2006), 1077–1093.[5] Guy Avni, Thomas A. Henzinger, and Orna Kupferman. 2016. Dynamic Resource Allocation Games. In
Algorithmic Game Theory . Springer BerlinHeidelberg, 153–166.[6] Alkida Balliu, Sebastian Brandt, Yuval Efron, Juho Hirvonen, Yannic Maus, Dennis Olivetti, and Jukka Suomela. 2019. Classification of distributedbinary labeling problems.
CoRR abs/1911.13294 (2019). arXiv:1911.13294 http://arxiv.org/abs/1911.13294[7] Alkida Balliu, Sebastian Brandt, Juho Hirvonen, Dennis Olivetti, Mikaël Rabie, and Jukka Suomela. 2019. Lower bounds for maximal matchings andmaximal independent sets. In
Proc. 60th IEEE Symposium on Foundations of Computer Science (FOCS 2019) . arXiv:1901.02441 .[8] Alkida Balliu, Juho Hirvonen, Janne H Korhonen, Tuomo Lempiäinen, Dennis Olivetti, and Jukka Suomela. 2018. New classes of distributed timecomplexity. In Proc. 50th ACM Symposium on Theory of Computing (STOC 2018) . ACM Press, 1307–1318. https://doi.org/10.1145/3188745.3188860[9] Alkida Balliu, Juho Hirvonen, Christoph Lenzen, Dennis Olivetti, and Jukka Suomela. 2019. Locality of Not-so-Weak Coloring. In
Proc. 26thInternational Colloquium on Structural Information and Communication Complexity (SIROCCO 2019) (Lecture Notes in Computer Science, Vol. 11639) .Springer, 37–51. https://doi.org/10.1007/978-3-030-24922-9_3[10] Leonid Barenboim. 2016. Deterministic ( Δ + 1)-Coloring in Sublinear (in Δ ) Time in Static, Dynamic, and Faulty Networks. J. ACM
63, 5 (2016),47:1–47:22. https://doi.org/10.1145/2979675[11] Kshipra Bhawalkar, Martin Gairing, and Tim Roughgarden. 2014. Weighted congestion games: the price of anarchy, universal worst-case examples,and tightness.
ACM Transactions on Economics and Computation (TEAC)
2, 4 (2014), 1–23.[12] Béla Bollobas. 2004.
Extremal Graph Theory. Dover Publications, Inc., USA. [13] Yann Bramoullé. 2007. Anti-coordination and social interactions.
Games and Economic Behavior
58, 1 (2007), 30–49. https://doi.org/10.1016/j.geb.2005.12.006[14] Yann Bramoullé and Rachel Kranton. 2007. Public goods in networks.
Journal of Economic Theory
[16] Sebastian Brandt, Orr Fischer, Juho Hirvonen, Barbara Keller, Tuomo Lempiäinen, Joel Rybicki, Jukka Suomela, and Jara Uitto. 2016. A lower bound for the distributed Lovász local lemma. In Proc. 48th ACM Symposium on Theory of Computing (STOC 2016). ACM Press, 479–488. https://doi.org/10.1145/2897518.2897570 [17] Sebastian Brandt, Juho Hirvonen, Janne H Korhonen, Tuomo Lempiäinen, Patric R J Östergård, Christopher Purcell, Joel Rybicki, Jukka Suomela, and Przemysław Uznański. 2017. LCL problems on grids. In
Proc. 36th ACM Symposium on Principles of Distributed Computing (PODC 2017) . ACMPress, 101–110. https://doi.org/10.1145/3087801.3087833[18] Damien Challet and Y-C Zhang. 1997. Emergence of cooperation and organization in an evolutionary game.
Physica A: Statistical Mechanics and itsApplications
SIAM J. Comput.
48, 1 (2019), 122–143. https://doi.org/10.1137/17M1117537[20] Yi-Jun Chang and Seth Pettie. 2019. A Time Hierarchy Theorem for the LOCAL Model.
SIAM J. Comput.
48, 1 (2019), 33–69. https://doi.org/10.1137/17M1157957[21] Yi-Jun Chang, Qizheng He, Wenzheng Li, Seth Pettie, and Jara Uitto. 2018. The Complexity of Distributed Edge Coloring with Small Palettes.In
Proc. 29th ACM-SIAM Symposium on Discrete Algorithms (SODA 2018) . Society for Industrial and Applied Mathematics, 2633–2652. https://doi.org/10.1137/1.9781611975031.168[22] Todd L Cherry, Stephen J Cotten, and Stephan Kroll. 2013. Heterogeneity, coordination and the provision of best-shot public goods.
ExperimentalEconomics
16, 4 (2013), 497–510. [23] Simon Collet, Pierre Fraigniaud, and Paolo Penna. 2018. Equilibria of games in networks for local tasks. In Proc. 22nd International Conference on Principles of Distributed Systems (OPODIS 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. [24] Constantinos Daskalakis and Christos H Papadimitriou. 2006. Computing pure Nash equilibria in graphical games via Markov random fields. In
Proceedings of the 7th ACM Conference on Electronic Commerce . 91–99.[25] Edith Elkind, Leslie Ann Goldberg, and Paul Goldberg. 2006. Nash Equilibria in Graphical Games on Trees Revisited. In
Proceedings of the 7th ACMConference on Electronic Commerce (EC ’06) . Association for Computing Machinery, 100–109. https://doi.org/10.1145/1134707.1134719[26] Joel Friedman. 2003. A proof of Alon’s second eigenvalue conjecture. In
Proc. 35th Annual ACM Symposium on Theory of Computing (STOC 2003) .ACM, 720–724. https://doi.org/10.1145/780542.780646[27] A.M Frieze and T Łuczak. 1992. On the independence and chromatic numbers of random regular graphs.
Journal of Combinatorial Theory, Series B
54, 1 (1992), 123–132. https://doi.org/10.1016/0095-8956(92)90070-E[28] Mohsen Ghaffari and Hsin-Hao Su. 2017. Distributed Degree Splitting, Edge Coloring, and Orientations. In
Proc. 28th ACM-SIAM Symposium onDiscrete Algorithms (SODA 2017) . Society for Industrial and Applied Mathematics, 2505–2523. https://doi.org/10.1137/1.9781611974782.166[29] Matthew O Jackson and Leeat Yariv. 2007. Diffusion of behavior and equilibrium properties in network games.
American Economic Review
97, 2(2007), 92–98.[30] Matthew O Jackson and Yves Zenou. 2015. Games on networks. In
Handbook of game theory with economic applications . Vol. 4. Elsevier, 95–163.[31] Michael Kearns. 2007. Graphical games.
Algorithmic game theory
Science. [33] Michael Kearns, Michael L. Littman, and Satinder Singh. 2001. Graphical models for game theory. In
Proc. 17th Conference in Uncertainty inArtificial Intelligence (UAI 2001) . Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 253–260.[34] Zohar Komarovsky, Vadim Levit, Tal Grinshpoun, and Amnon Meisels. 2015. Efficient Equilibria in a Public Goods Game. In
Proceedings of the 2015IEEE / WIC / ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) - Volume 01 (WI-IAT ’15) . IEEE ComputerSociety, 214–219. https://doi.org/10.1109/WI-IAT.2015.91[35] Amos Korman, Shay Kutten, and David Peleg. 2010. Proof labeling schemes.
Distributed Computing
22, 4 (2010), 215–233.[36] Elias Koutsoupias and Christos H. Papadimitriou. 1999. Worst-case Equilibria. In
Proc. 16th Annual Symposium on Theoretical Aspects of ComputerScience (STACS 1999) (Lecture Notes in Computer Science, Vol. 1563) . Springer, 404–413. https://doi.org/10.1007/3-540-49116-3_38[37] E. Lieberman, C. Hauert, and M. A. Nowak. 2005. Evolutionary dynamics on graphs.
Nature
433 (2005), 312–316.[38] Nathan Linial. 1992. Locality in Distributed Graph Algorithms.
SIAM J. Comput.
21, 1 (1992), 193–201. https://doi.org/10.1137/0221015[39] Moni Naor and Larry Stockmeyer. 1995. What Can be Computed Locally?
SIAM J. Comput.
24, 6 (1995), 1259–1277. https://doi.org/10.1137/S0097539793254571[40] Luis E Ortiz and Michael Kearns. 2003. Nash propagation for loopy graphical games.
Advances in neural information processing systems (2003),817–824.[41] Alessandro Panconesi and Romeo Rizzi. 2001. Some simple distributed algorithms for sparse networks.
Distributed Computing
14, 2 (2001), 97–100. https://doi.org/10.1007/PL00008932 [42] Christos H Papadimitriou and Tim Roughgarden. 2008. Computing correlated equilibria in multi-player games.
Journal of the ACM (JACM)
55, 3(2008), 1–29.[43] David Peleg. 2000.
Distributed Computing: A Locality-Sensitive Approach . Society for Industrial and Applied Mathematics. https://doi.org/10.1137/1.9780898719772[44] Tim Roughgarden. [n.d.]. Routing games.
Algorithmic game theory
18 ([n. d.]), 459–484.[45] Grant R Schoenebeck and Salil Vadhan. 2012. The computational complexity of Nash equilibria in concisely represented games.
ACM Transactionson Computation Theory (TOCT)
4, 2 (2012), 1–50.[46] Alexander J Stewart, Mohsen Mosleh, Marina Diakonova, Antonio A Arechar, David G Rand, and Joshua B Plotkin. 2019. Information gerrymanderingand undemocratic decisions.
Nature
SIAM J. Comput.
41, 6 (2012), 1769–1786. https://doi.org/10.1137/090773714[48] David Vickrey and Daphne Koller. 2002. Multi-agent algorithms for solving graphical games.
AAAI/IAAI
A PROOF OF THEOREM 3.2
In this section we prove Theorem 3.2. The following lemma states, more generally, that the execution of best-response dynamics can be simulated for any number of rounds.

Lemma A.1.
Assume that (A, 𝑢, 𝑁) is a graphical game and 𝑎 is some strategy profile. A distributed algorithm that is given 𝑎 as an input can simulate 𝑇 rounds of best responses, for some ordering of the play, in 𝑂(log* 𝑛 + 𝑇) rounds.

Proof. The simulation consists of two phases. In the first phase, the nodes compute a coloring of 𝑁² (the virtual network obtained by connecting all nodes at distance at most 2 in 𝑁) with 𝑘 = Δ² + 1 colors. That is, each node 𝑣 chooses a label 𝑐(𝑣) from {1, 2, . . . , 𝑘} such that any two nodes 𝑢 and 𝑣 within distance 2 in 𝑁 have different labels 𝑐(𝑢) ≠ 𝑐(𝑣). This can be computed in 𝑂(log* 𝑛) rounds [10].

In the second phase this coloring is treated as a schedule: at round 𝑗 of the second phase each node with color 𝑖 = 𝑗 mod 𝑘 is active, applies the best response to the current strategy profile, and sends its new strategy to its neighbors. The key is that any two nodes updating their strategy at the same time do so independently: since they are not neighbors, their choices do not depend on each other. Therefore applying best responses at all nodes of the same color class is equivalent to letting the corresponding agents play in any sequential order: given an initial strategy profile 𝑎, all orderings produce the same strategy profile 𝑎′. Simulating all color classes one by one therefore corresponds to some ordering of sequential play.

Since there are 𝑘 = Δ² + 1 color classes, simulating one fair round of best responses takes 𝑘 rounds in the LOCAL model. Since we assume Δ is a constant, simulating 𝑇 rounds of best responses can be done in 𝑂(𝑇) rounds. With the initial coloring step we have that the total running time of the simulation is 𝑂(log* 𝑛 + 𝑇), as required. □

It follows that simulating best responses until convergence can be done with an additive 𝑂(log* 𝑛) overhead.

Proof of Theorem 3.2. First, assume that the best responses start from a constant or worst-case initial strategy profile.
Each node can simply choose the same initial value and simulate 𝑇(𝑛) rounds of best responses by Lemma A.1. Since we assume that best responses converge for any order of play in 𝑇(𝑛) rounds, it follows that 𝑇(𝑛) rounds of simulation converge as well. Computing the simulation until convergence takes 𝑂(log∗ 𝑛 + 𝑇(𝑛)) rounds in the deterministic LOCAL model.

Next, assume that the best responses start from a random initial strategy profile. Now it is no longer possible to use deterministic algorithms to run the simulation. Using the random inputs, each node can choose a random initial strategy. Then it can simulate best responses for 𝑇(𝑛) rounds by Lemma A.1. Since we assume that the best responses converge with high probability and the dynamics are deterministic given the initial configuration, the simulation also converges with high probability in 𝑇(𝑛) rounds. The simulation can be computed in 𝑂(log∗ 𝑛 + 𝑇(𝑛)) rounds in the randomized LOCAL model. □

B GRAPH-THEORETIC ANALYSIS FOR SECTION 4

B.1 Graph constructions for the best-shot public goods game
The domination number 𝛾(𝑁) of a graph 𝑁 is the size of a minimum dominating set. The independence number 𝛼(𝑁) of a graph 𝑁 is the size of a maximum independent set. A dominating set is perfect if every node is adjacent to exactly one node in the set (counting the node itself). The girth of a graph is the length of its shortest cycle.

The following lemma states that there exist 𝑑-regular graphs of logarithmic girth such that all dominating sets are large and all independent sets are small.

Lemma B.1. For each 𝑑 ≥ 3 and each sufficiently large 𝑛₀, there exists a 𝑑-regular graph 𝑁 on 𝑛 ≥ 𝑛₀ nodes with girth 𝑔 = Ω(log_𝑑 𝑛) such that the following hold.
(1) The domination number 𝛾(𝑁) is at least ((1 + 𝜀) ln 𝑑/𝑑)𝑛.
(2) The independence number 𝛼(𝑁) is at most ((2 + 𝜀) ln 𝑑/𝑑)𝑛.

Proof. The existence of such graphs relies on properties of random 𝑑-regular graphs: with high probability they have no large independent sets and no small dominating sets. These graphs can then be modified by the standard cycle cutting technique (see e.g. [2, 12]). The proof follows the proof of Alon [2, Lemma 21].

A random 𝑑-regular graph 𝑁 has independence number 𝛼 at most ((2 + 𝜀) ln 𝑑/𝑑)𝑛 [27] and domination number at least ((1 + 𝜀) ln 𝑑/𝑑)𝑛 [2, 3] with high probability. A random 𝑑-regular graph has only 𝑐 = 𝑂(√𝑛) cycles of length 𝑜(log_𝑑 𝑛) in expectation [2]. Therefore it is possible to find a 𝑑-regular graph with all three properties.

Graph 𝑁 might have short cycles. We will cut these one by one, starting with one of the shortest cycles. Pick an edge 𝑒 = {𝑢, 𝑣} on that cycle. Since the graph 𝑁 has maximum degree 𝑑, there must be an edge 𝑓 = {𝑢′, 𝑣′} at distance 𝑔 from 𝑒 (we say that an edge 𝑒 is at distance 𝑟 from an edge 𝑓 if the minimum distance between a node of 𝑒 and a node of 𝑓 is 𝑟). Remove 𝑒 and 𝑓 from 𝑁 and add the edges {𝑢, 𝑣′} and {𝑢′, 𝑣}. Since these edges were far apart, no short cycles were created.
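A single cutting step as just described can be sketched in a few lines; the adjacency-set representation and the helper name `cut_step` are ours, purely for illustration.

```python
def cut_step(adj, e, f):
    """One cycle-cutting swap: remove edges e = {u, v} and f = {u2, v2},
    then add the crossed edges {u, v2} and {u2, v}.
    Every node keeps its degree, so d-regularity is preserved."""
    (u, v), (u2, v2) = e, f
    adj[u].remove(v); adj[v].remove(u)
    adj[u2].remove(v2); adj[v2].remove(u2)
    adj[u].add(v2); adj[v2].add(u)
    adj[u2].add(v); adj[v].add(u2)

# Example: two disjoint triangles (girth 3). Cutting one edge of each
# and reconnecting crosswise merges them into a single 6-cycle (girth 6).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
       3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
cut_step(adj, (0, 1), (3, 4))
```

The example shows why the technique works: the swap destroys the two short cycles without changing any degree, and as long as the two removed edges are far apart, the new crossed edges cannot close a short cycle.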
Continue in this way until there are no cycles of length less than 𝑔 left – we are guaranteed that this process continues if we choose a suitable value for 𝑔. The graph now has girth Ω(log_𝑑 𝑛).

Now consider any maximum independent set 𝐼∗ of 𝑁′. In the worst case each edge that was removed from 𝑁 goes between two nodes in 𝐼∗. If we remove one of each such pair of nodes from 𝐼∗, we obtain an independent set 𝐼 of 𝑁, the original graph. Since exactly two edges were removed for each cycle, we get that |𝐼| ≥ |𝐼∗| − 2𝑐 = |𝐼∗| − 𝑂(√𝑛). By choosing a sufficiently large 𝑛, we have that |𝐼∗| ≤ ((2 + 𝜀) ln 𝑑/𝑑)𝑛, as required.

We can argue similarly about the domination number: if 𝐷∗ is a minimum dominating set of 𝑁′, then by adding at most 2𝑐 nodes to it we obtain a dominating set of 𝑁. Since all dominating sets of 𝑁 are large, 𝐷∗ must also be large. □

The second lemma states that there exist 𝑑-regular graphs of logarithmic girth (that is, graphs that look locally the same as the graphs from Lemma B.1) that are bipartite and have perfect dominating sets.

Lemma B.2. For each 𝑑 ≥ 3 and every sufficiently large 𝑛₀, there exists a 𝑑-regular bipartite graph on 𝑛 ≥ 𝑛₀ nodes with a perfect dominating set and girth 𝑔 = Ω(log_𝑑 𝑛).

Proof. The proof proceeds in three steps, utilising again the cycle cutting technique.

(1) Construct a graph 𝑁 with a perfect dominating set by taking a collection of 𝑘 stars on 𝑑 + 1 nodes each (so that 𝑛 = 𝑘(𝑑 + 1)). Let 𝐷 denote the set of centers of the stars and 𝑈 the set of leaves. Find a (𝑑 − 1)-regular graph on 𝑈, and add the corresponding edges to 𝑁. We call these edges leaf edges. This is always possible for a sufficiently large 𝑘.

(2) Graph 𝑁 might have short cycles. Cut these one by one, starting with one of the shortest cycles. Pick a leaf edge 𝑒 = {𝑢, 𝑣} on that cycle.
Since the graph 𝑁 has maximum degree 𝑑, there must be a leaf edge 𝑓 = {𝑢′, 𝑣′} at distance 𝑔 from 𝑒 (again, an edge 𝑒 is at distance 𝑟 from an edge 𝑓 if the minimum distance between a node of 𝑒 and a node of 𝑓 is 𝑟). Remove 𝑒 and 𝑓 from 𝑁 and add the edges {𝑢, 𝑣′} and {𝑢′, 𝑣}. Since these edges were far apart, no short cycles were created. Since we crossed two leaf edges, the set 𝐷 is still a perfect dominating set. Continue in this way until there are no cycles of length less than 𝑔 left – we are guaranteed that this process continues if we choose a suitable value for 𝑔. The graph now has girth Ω(log_𝑑 𝑛).

(3) To construct a bipartite graph that retains the properties from the previous steps, we take a bipartite double cover 𝑁′ = (𝑉′, 𝐸′) of 𝑁 = (𝑉, 𝐸). For each 𝑣 ∈ 𝑉 we take two copies 𝑣₁, 𝑣₂ ∈ 𝑉′. Denote the set of first copies by 𝑉₁′ and the set of second copies by 𝑉₂′. For each edge {𝑢, 𝑣} ∈ 𝐸 we add the crossed copies {𝑢₁, 𝑣₂} and {𝑢₂, 𝑣₁}. All edges go between the sets 𝑉₁′ and 𝑉₂′, so the graph 𝑁′ is bipartite. By construction we do not create any new short cycles, and since |𝑉′| = 2|𝑉| we have that the girth of 𝑁′ is also Ω(log_𝑑 𝑛). Finally, the copies of the nodes corresponding to the set 𝐷 form a perfect dominating set in 𝑁′. □

B.2 Graph constructions for the minority game
We first construct a graph that has a cut that cuts all edges and has high girth.

Lemma B.3. For every 𝑑 ≥ 3 and every sufficiently large 𝑛₀, there exists a 𝑑-regular bipartite graph on 𝑛 ≥ 𝑛₀ nodes with girth 𝑔 = Ω(log_𝑑 𝑛).

Proof. Take any bipartite 𝑑-regular graph on 𝑛 = 2𝑘 ≥ 𝑛₀ nodes. Then use the cycle cutting technique from Lemmas B.1 and B.2: cut cycles until the graph has girth 𝑔 = Ω(log_𝑑 𝑛). When reconnecting edges, make sure that they still go between the two sides of the bipartition. □

Next we construct another graph that is locally indistinguishable from the first graph, but has no large cuts.

Lemma B.4. For every 𝑑 ≥ 3 and every sufficiently large 𝑛₀, there exists a 𝑑-regular graph on 𝑛 ≥ 𝑛₀ nodes with girth 𝑔 = Ω(log_𝑑 𝑛) such that the maximum cut is of size at most (1/2 + 1/√(𝑑 − 1))|𝐸|.

Proof. Friedman [26] showed that the second largest eigenvalue, in absolute value, of the adjacency matrix of a random 𝑑-regular graph is at most 2√(𝑑 − 1) + 𝜀, for any 𝑑 ≥ 3 and 𝜀 > 0, with high probability. Let 𝜆ₙ denote the smallest eigenvalue. The size of the maximum cut of a graph is known to be bounded by (1/2 + |𝜆ₙ|/(2𝑑))|𝐸| (see e.g. Trevisan [47]). This implies that with high probability, the maximum cut of a random 𝑑-regular graph contains at most a (1/2 + √(𝑑 − 1)/𝑑 + 𝜀/(2𝑑))-fraction of the edges.

Now we apply the cycle cutting technique once again. Find a 𝑑-regular graph 𝑁 that has no large cuts and 𝑂(√𝑛) cycles of length less than 𝑔 = Ω(log_𝑑 𝑛). Repeatedly cut each cycle of length less than 𝑔 to obtain a new graph 𝑁′. Now consider any maximum cut 𝐶 of 𝑁′. In the worst case, none of the edges that were removed from 𝑁 were cut edges, while all of the new edges are cut edges of 𝐶. Since only 𝑂(√𝑛) edges were replaced, the size of the cut in 𝑁′ is at most (1/2 + √(𝑑 − 1)/𝑑 + 𝜀/(2𝑑))|𝐸| + 𝑂(√𝑛). For sufficiently large 𝑛, this is at most (1/2 + 1/√(𝑑 − 1))|𝐸|. □

B.3 Proof of Lemma 4.2
Proof of Lemma 4.2. We will start by showing that no fast distributed algorithm can find large independent sets or small dominating sets on the networks given by Lemma B.2. The proof is by a standard indistinguishability argument: we show that since algorithms cannot locally distinguish a network of size 𝑛 from Lemma B.2 from a network of size 𝑛 from Lemma B.1, they must locally behave the same on both networks.

Fix 𝑇(𝑛) to be some function in 𝑜(log_𝑑 𝑛). Let 𝑁 and 𝑁′ be sufficiently large graphs from Lemma B.2 and Lemma B.1, respectively, both with girth 𝑔 > 2𝑇(𝑛) + 1. Since 𝑇(𝑛) = 𝑜(log_𝑑 𝑛), such graphs must exist for a sufficiently large 𝑛. Both graphs are 𝑑-regular, and therefore any distributed algorithm running in time 𝑇(𝑛) sees locally a 𝑑-regular tree at every node of both graphs. That is, the graphs are locally indistinguishable from each other.

Since 𝑁′ does not have any large independent sets nor small dominating sets, any algorithm must output an independent set of expected size at most 𝛼(𝑁′) or a dominating set of expected size at least 𝛾(𝑁′) on 𝑁′. Moreover, since all nodes see the same local neighborhood, every node must have the same output distribution over the assignment of the random bits under any 𝑇(𝑛)-round algorithm. This implies that the algorithm must put a node into an independent set with probability at most 𝛼(𝑁′)/𝑛 and into a dominating set with probability at least 𝛾(𝑁′)/𝑛 on 𝑁′. Since all neighborhoods of 𝑁 are indistinguishable from the neighborhoods of 𝑁′, any algorithm with running time 𝑇(𝑛) must behave in exactly the same way on 𝑁 as on 𝑁′: the expected size of an independent set is at most 𝛼(𝑁′) and the expected size of a dominating set is at least 𝛾(𝑁′). □
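The spectral bound used in the proof of Lemma B.4, maxcut ≤ (1/2 + |𝜆ₙ|/(2𝑑))|𝐸|, can be sanity-checked on tiny graphs. The sketch below is our own illustration, not from the paper: it uses cycle graphs because their adjacency eigenvalues 2 cos(2𝜋𝑘/𝑛) are known in closed form, and compares the bound against a brute-force maximum cut.

```python
import math
from itertools import product

def cycle_graph(n):
    """Edges of the n-cycle, a 2-regular graph."""
    return [(i, (i + 1) % n) for i in range(n)]

def cycle_eigenvalues(n):
    """Adjacency eigenvalues of the n-cycle: 2*cos(2*pi*k/n)."""
    return [2 * math.cos(2 * math.pi * k / n) for k in range(n)]

def max_cut(n, edges):
    """Brute-force maximum cut (fine for tiny n)."""
    best = 0
    for side in product([0, 1], repeat=n):
        best = max(best, sum(1 for u, v in edges if side[u] != side[v]))
    return best

def spectral_bound(n, d):
    """The bound (1/2 + |lambda_min|/(2d)) * |E|; a cycle has |E| = n."""
    lam_min = min(cycle_eigenvalues(n))
    return (0.5 + abs(lam_min) / (2 * d)) * n

edges5, edges6 = cycle_graph(5), cycle_graph(6)
print("C5:", max_cut(5, edges5), spectral_bound(5, d=2))
print("C6:", max_cut(6, edges6), spectral_bound(6, d=2))
```

For the bipartite 𝐶₆ the bound is tight: 𝜆ₙ = −2, so the bound equals |𝐸| = 6 and indeed all edges can be cut. For the odd cycle 𝐶₅ the maximum cut is 4 of 5 edges, safely below the bound of about 4.52. This mirrors the role of the two constructions above: bipartite graphs (Lemma B.3) have a full cut, while expander-like graphs (Lemma B.4) provably do not.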