Asymmetric evolutionary games
ALEX MCAVOY AND CHRISTOPH HAUERT
Abstract.
Evolutionary game theory is a powerful framework for studying evolution in populations of interacting individuals. A common assumption in evolutionary game theory is that interactions are symmetric, which means that the players are distinguished by only their strategies. In nature, however, the microscopic interactions between players are nearly always asymmetric due to environmental effects, differing baseline characteristics, and other possible sources of heterogeneity. To model these phenomena, we introduce into evolutionary game theory two broad classes of asymmetric interactions: ecological and genotypic. Ecological asymmetry results from variation in the environments of the players, while genotypic asymmetry is a consequence of the players having differing baseline genotypes. We develop a theory of these forms of asymmetry for games in structured populations and use the classical social dilemmas, the Prisoner's Dilemma and the Snowdrift Game, for illustrations. Interestingly, asymmetric games reveal essential differences between models of genetic evolution based on reproduction and models of cultural evolution based on imitation that are not apparent in symmetric games.

1. Introduction
Evolutionary game theory has been used extensively to study the evolution of cooperation in social dilemmas (Ohtsuki et al., 2006; Nowak, 2006a; Taylor et al., 2007). A social dilemma is typically modeled as a game with two strategies, cooperate (C) and defect (D), whose payoffs for pairwise interactions are defined by a matrix of the form

         C       D
    C   R, R    S, T
    D   T, S    P, P        (1)

(Maynard Smith, 1982; Hofbauer and Sigmund, 1998). For a focal player using a strategy on the left-hand side of this matrix against an opponent using a strategy on the top of the matrix, the first (resp. second) coordinate of the corresponding entry of this matrix is the payoff to the focal player (resp. opponent). That is, a cooperator receives R when facing another cooperator and S when facing a defector; a defector receives T when facing a cooperator and P when facing another defector. Since the same argument applies to the opponent, the game defined by (1) is symmetric. If defection pays more than cooperation when the opponent is a cooperator (T > R), but the payoff for mutual cooperation is greater than the payoff for mutual defection (R > P), then a social dilemma (Dawes, 1980; Hauert et al., 2006) arises from this game due to the conflict of interest between the individual and the group (or pair). The nature of this social dilemma depends on the ordering of R, S, T, and P. Biologically, the most important rankings are given by the Prisoner's Dilemma (T > R > P > S) and the Snowdrift Game (T > R > S > P) (Maynard Smith, 1982; Hauert and Doebeli, 2004; Doebeli and Hauert, 2005; Hauert et al., 2006; Voelkl, 2010).

Since matrix (1) defines a symmetric game, any two players using the same strategy are indistinguishable for the purpose of calculating payoffs. In nature, however, asymmetry frequently arises in interspecies interactions such as parasitic or symbiotic relationships (Maynard Smith, 1982). Interactions between subpopulations, such as in Dawkins' Battle of the Sexes Game (Dawkins, 1976; Schuster and Sigmund, 1981; Maynard Smith and Hofbauer, 1987; Hofbauer, 1996), also give rise to asymmetry that cannot be modeled by the symmetric matrix (1). Even intraspecies interactions are essentially always asymmetric: (i) phenotypic variations such as size, strength, speed, wealth, or intellectual capabilities; (ii) differences in access to and availability of environmental resources; and (iii) each individual's history of past interactions all affect the interacting individuals differently and result in asymmetric payoffs. The winner-loser effect, for example, is a well-studied instance of the influence of previous encounters on future interactions and has been reported across taxa (Dugatkin, 1997; Maynard Smith, 1982), including even mollusks (Wright and Shanks, 1993; Shanks, 2002). Asymmetry may also result from the assignment of social roles (Selten, 1980; Hammerstein, 1981; Ohtsuki, 2010), such as the roles of "parent" and "offspring" (Marshall, 2009): cooperation may be tied to individual energy or strength, for example, which is, in turn, determined by a player's role. In the realm of continuous strategies, adaptive dynamics has been used to study asymmetric competition, which applies to the resource consumption of plants, for instance (Weiner, 1990; Freckleton and Watkinson, 2001; Doebeli and Ispolatov, 2012).
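Returning to the payoff rankings above, the classification of a two-strategy game as a Prisoner's Dilemma or a Snowdrift Game is purely a matter of ordering R, S, T, and P; a minimal sketch (the numeric values below are illustrative, not from the paper):

```python
# Sketch: classify a two-strategy game by the ranking of its payoffs,
# as described in the text (illustrative, not from the paper).

def classify_dilemma(R, S, T, P):
    """Name the social dilemma implied by the ordering of R, S, T, P."""
    if T > R > P > S:
        return "Prisoner's Dilemma"
    if T > R > S > P:
        return "Snowdrift Game"
    return "other"

print(classify_dilemma(R=3, S=0, T=5, P=1))  # Prisoner's Dilemma
print(classify_dilemma(R=3, S=1, T=5, P=0))  # Snowdrift Game
```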
In social dilemmas containing many cooperators, accumulated benefits may be synergistically enhanced (or discounted) in a way that depends on who or where the players are (Hauert et al., 2006), thereby making larger group interactions asymmetric. To model such interactions using evolutionary game theory, the payoff matrix must reflect the asymmetry.

In the Donation Game, a cooperator pays a cost, c, to deliver a benefit, b, to the opponent, while a defector pays no cost and provides no benefit (Sigmund, 2010). In terms of matrix (1), this game satisfies R = b − c, S = −c, T = b, and P = 0. Provided b and c are positive, mutual defection is the only Nash equilibrium. If b > c, then this game defines a Prisoner's Dilemma. Perhaps the simplest way to modify this game to account for possible sources of asymmetry is to allow each pair of players to have a distinct payoff matrix; that is, the payoff matrix for player i against player j in the Donation Game is

    M_ij :=
              C                         D
    C   b_j − c_i, b_i − c_j       −c_i, b_i
    D   b_j, −c_j                   0, 0        (2)

for some b_i, b_j, c_i, and c_j. If player i cooperates, then this player donates b_i to his or her opponent and incurs a cost of c_i for doing so. As before, defectors provide no benefit and pay no cost. The index i could refer to a baseline trait of the player, the player's location, his or her history of past interactions, motivation (Bergman et al., 2010), or any other non-strategy characteristic that distinguishes one player from another. Games based on matrices of the form (2), with payoffs for both players in each entry of the matrix, are sometimes called bimatrix games.
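The per-pair bimatrix of Eq. (2) can be written down directly; a small sketch with illustrative benefit and cost values (the numbers are assumptions, not from the paper):

```python
# Sketch of the asymmetric Donation Game bimatrix, Eq. (2): entry [r][s] is
# (payoff to player i, payoff to player j) when i plays r and j plays s,
# with strategy 0 = C and 1 = D. Values below are illustrative.

def donation_bimatrix(b_i, c_i, b_j, c_j):
    return [
        [(b_j - c_i, b_i - c_j), (-c_i, b_i)],   # player i cooperates
        [(b_j, -c_j),            (0.0, 0.0)],    # player i defects
    ]

M = donation_bimatrix(b_i=3.0, c_i=1.0, b_j=2.0, c_j=0.5)
# Mutual cooperation: i receives b_j - c_i = 1.0, j receives b_i - c_j = 2.5.
```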
Although bimatrix games have appeared in the context of evolutionary dynamics (Hofbauer, 1996; Hofbauer and Sigmund, 2003; Ohtsuki, 2010), most of the focus on these games has been in the setting of classical game theory and economics (see Fudenberg and Tirole, 1991), where "matrix game" generally means "bimatrix game." Bimatrix games may be used to model classical asymmetric interactions such as those arising from sexual asymmetry in the Battle of the Sexes Game (Magurran and Nowak, 1991). The asymmetric, four-strategy Hawk-Dove Game of Maynard Smith (1982), consisting of the strategies Hawk, Dove, Bourgeois, and anti-Bourgeois, may also be framed as a (4 × 4) bimatrix game.

Here, we consider two broad classes of asymmetry: ecological and genotypic. Ecological asymmetry is derived from the locations of the players, whereas genotypic asymmetry is based on the players themselves. With ecological asymmetry, M_ij is the payoff matrix for a player at location i against a player at location j. Since the payoffs depend on the locations of the players, this form of asymmetry requires a structured population. Ecological asymmetry is a natural consideration in evolutionary dynamics since it ties strategy success to the environment. In the Donation Game, for instance, cooperators might be donating goods or services, but the costs and benefits may depend on the environmental conditions, i.e. the location of the donor.

On the other hand, players might instead differ in ability or strength, and "strong" cooperators might contribute greater benefits (or incur lower costs) than "weak" cooperators. This variation results in genotypic asymmetry, where each player has a baseline genotype (strength) and a strategy (C or D). This form of asymmetry turns out to be subtler than it seems at first glance, however, since genotypes are generally represented by strategies in evolutionary game theory (Maynard Smith, 1982; Dugatkin, 2000).
In particular, it might seem that the genotype and strategy of a player could be combined into a single composite strategy and that the symmetric game based on these composite strategies could replace the original asymmetric game. As it happens, whether genotypic asymmetry can be resolved by a symmetric game depends on the details of the evolutionary process.

Classically, evolutionary games were studied in infinite populations via replicator dynamics (Taylor and Jonker, 1978), and more recently these games have been considered in finite populations (Nowak et al., 2004; Taylor et al., 2004). Because every biological population is finite, we focus on finite populations (which, for technical reasons, we assume to be large). Since ecological asymmetry requires distinguishing different locations within the population, we assume that the population is structured and that a network defines the structure. Network-structured populations have received a considerable amount of attention in evolutionary game theory and provide a natural setting in which to study social dilemmas (Lieberman et al., 2005; Ohtsuki et al., 2006; Ohtsuki and Nowak, 2006; Taylor et al., 2007; Szabó and Fáth, 2007; Débarre et al., 2014). Compared to well-mixed populations, in which each player interacts with every other player, networks can restrict the interactions that occur within the population by specifying which players are "neighbors," i.e. share a link. We represent the links among the N players in the population using an adjacency matrix, (w_ij)_{i,j=1}^N, which is defined by letting w_ij = 1 if there is a link from vertex i to vertex j and 0 otherwise (and satisfies w_ij = w_ji for each i and j).

In an evolutionary game, the state of a population of players is defined by specifying the strategy of each player. Each player interacts with all of his or her neighbors. The total payoff to a player is multiplied by a selection intensity, β > 0, and then converted into fitness (see Methods). Once each player is assigned a fitness, an update rule is used to determine the state of the population at the next time step (Nowak, 2006b). For example, with a birth-death update rule, a player is chosen from the population for reproduction with probability proportional to relative fitness. A neighbor of the reproducing player is then randomly chosen for death, and the offspring, who inherits the strategy of the parent, fills the vacancy. This process is a modification of the Moran process (Moran, 1958), adapted to allow for (i) frequency-dependent fitnesses and (ii) population structures that are not necessarily well-mixed. The order of birth and death could also be reversed to get a death-birth update rule (Ohtsuki et al., 2006). In this rule, death occurs at random and the neighbors of the deceased compete to reproduce in order to fill the vacancy. These two rules result in the update of a single strategy in each time step, but one could consider other rules, such as Wright-Fisher updating, in which all of the strategies are revised in each generation (Imhof and Nowak, 2006). The rules mentioned to this point define strategy updates via reproduction and inheritance; as such, we refer to them as genetic update rules.

Another popular class of update rules is based on revisions to the existing players' strategy choices. We refer to rules falling into this class as cultural update rules. Examples include imitation updating, in which a player is selected at random to evaluate his or her strategy and then probabilistically compares this strategy to those of his or her neighbors (Ohtsuki et al., 2006). A more localized version of this update rule is known as pairwise comparison updating, in which a player chooses a random neighbor for comparison rather than looking at the entire neighborhood (Szabó and Tőke, 1998; Traulsen et al., 2007).
Under best response dynamics, an individual adopts the strategy that performs best given the current strategies of his or her neighbors (Ellison, 1993). In each of these cultural processes, the strategy of a player can change, but the underlying genotype is always the same, which suggests that baseline genotype and strategy need to be treated separately. Genotypic asymmetry needs to be handled more carefully if the update rule is genetic, since the nature of genotype transmission affects the dynamics of the process. In contrast to cultural processes, the genotype and strategy of a player at a given location may both change if the update rule is genetic: genotype may be inherited but not imitated. We will see that this property results in cultural and genetic processes behaving completely differently in the presence of genotypic asymmetry.
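The genetic and cultural update rules described above differ in what is transmitted. Minimal sketches of one update step of each kind follow; the exponential payoff-to-fitness map and the Fermi imitation probability are standard choices assumed here for illustration (the paper's exact maps are specified in its Methods):

```python
# Sketches of one update step: a genetic rule (death-birth) and a cultural
# rule (pairwise comparison). Payoff-to-fitness map f = exp(beta * payoff)
# and the Fermi imitation probability are assumptions for illustration.
import math
import random

def total_payoff(i, strategies, neighbors, payoff):
    """Accumulated payoff of player i against all of its neighbors."""
    return sum(payoff[strategies[i]][strategies[m]] for m in neighbors[i])

def death_birth_step(strategies, neighbors, payoff, beta):
    """Genetic: a random player dies; its neighbors compete to fill the
    vacancy proportionally to fitness, and the offspring inherits the
    parent's strategy (and, with genotypic asymmetry, its genotype)."""
    i = random.randrange(len(strategies))
    weights = [math.exp(beta * total_payoff(j, strategies, neighbors, payoff))
               for j in neighbors[i]]
    pick = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in zip(neighbors[i], weights):
        acc += w
        if pick <= acc:
            strategies[i] = strategies[j]
            break
    return strategies

def pairwise_comparison_step(strategies, neighbors, payoff, beta):
    """Cultural: a random focal player compares payoffs with one random
    neighbor and imitates that neighbor's strategy with the Fermi
    probability; the focal player's genotype/location never changes."""
    i = random.randrange(len(strategies))
    j = random.choice(neighbors[i])
    diff = (total_payoff(j, strategies, neighbors, payoff)
            - total_payoff(i, strategies, neighbors, payoff))
    if random.random() < 1.0 / (1.0 + math.exp(-beta * diff)):
        strategies[i] = strategies[j]
    return strategies
```

In both sketches only the strategy array changes, but the crucial difference emphasized in the text is what a full model would transmit alongside it: a genetic step would also pass on the parent's genotype, whereas a cultural step leaves genotypes fixed at their locations.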
Phenotype may have both genetic and environmental components (Mahner and Kary, 1997; Baye et al., 2011), and after treating the genetic (genotypic) and environmental components separately, these two forms of asymmetry may be combined in order to get a model in which the asymmetry is derived from varying baseline phenotypes. Thus, with a theory of both ecological asymmetry and genotypic asymmetry based on inherited genotypes, one can account for more complicated forms of asymmetry appearing in biological populations.

2. Results
2.1. Ecological asymmetry.
Here we develop a framework for ecologically asymmetric games in which the payoffs depend on the locations of the players as well as their strategies. We assume that all of the players have the same set of strategies (or "actions") available to them, {A_1, ..., A_n}. The payoff matrix for a player at vertex i against a player at vertex j is

    M_ij =
              A_1                         A_2                       ...    A_n
    A_1   a^{ij}_{11}, a^{ji}_{11}    a^{ij}_{12}, a^{ji}_{21}    ...   a^{ij}_{1n}, a^{ji}_{n1}
    A_2   a^{ij}_{21}, a^{ji}_{12}    a^{ij}_{22}, a^{ji}_{22}    ...   a^{ij}_{2n}, a^{ji}_{n2}
    ...
    A_n   a^{ij}_{n1}, a^{ji}_{1n}    a^{ij}_{n2}, a^{ji}_{2n}    ...   a^{ij}_{nn}, a^{ji}_{nn}        (3)

That is, a player at vertex i using strategy A_r against an opponent at vertex j using strategy A_s realizes a payoff of a^{ij}_{rs}, whereas his opponent receives a^{ji}_{sr}. Since a^{ij}_{rs} depends on i and j, these payoff matrices capture the asymmetry of the game.

In the simpler setting of symmetric games, the pair approximation method has been used successfully to describe the dynamics of evolutionary processes on networks (Matsuda et al., 1992; Bollobás, 2001; Ohtsuki et al., 2006; Vukov et al., 2006; Ohtsuki and Nowak, 2006). For each r ∈ {1, ..., n}, this method approximates the frequency of strategy A_r, which we denote by p_r, using the frequencies of strategy pairs in the population. Pair approximation is expected to be accurate on large random regular networks (Bollobás, 2001; Ohtsuki et al., 2006), so we assume that the network is regular (of degree k > 2) and that N is sufficiently large. (For k = 2, the network is just a cycle, which we do not treat here.) We also assume that selection is weak, β ≪ 1, which results in a separation of timescales: the local configurations equilibrate quickly, while the global strategy frequencies change much more slowly. This separation allows us to get an explicit expression for the expected change, E[∆p_r], in the frequency of strategy A_r for each r. Incidentally, weak selection happens to be quite reasonable from a biological perspective since each trait is expected to have only a small effect on the overall fitness of a player (Wu et al., 2010; Tarnita et al., 2011; Wu et al., 2013).

Interestingly, for two genetic and two cultural update rules, weak selection reduces ecological asymmetry to a symmetric game derived from the spatial average of the payoff matrices:

Theorem 1.
In the limit of weak selection, the dynamics of the ecologically asymmetric death-birth, birth-death, imitation, and pairwise comparison processes on a large, regular network may be approximated by the dynamics of a symmetric game with the same update rule and payoff matrix M̄ := (1/(kN)) Σ_{i,j=1}^N w_ij M_ij, i.e.

    M̄ =
            A_1             A_2           ...    A_n
    A_1   ā_11, ā_11    ā_12, ā_21    ...   ā_1n, ā_n1
    A_2   ā_21, ā_12    ā_22, ā_22    ...   ā_2n, ā_n2
    ...
    A_n   ā_n1, ā_1n    ā_n2, ā_2n    ...   ā_nn, ā_nn        (4)

where ā_st := (1/(kN)) Σ_{i,j=1}^N w_ij a^{ij}_{st} for each s and t.

For a proof of Theorem 1, see Methods. In Methods, we derive explicit formulas for E[∆p_r] for each r (where p_r is the frequency of strategy A_r and E[∆p_r] is the expected change in p_r in one step of the process) and show that these expectations depend on M̄ in the limit of weak selection. If we choose an appropriate time scale and make the approximation

    ṗ_r := dp_r/dt = E[∆p_r]/∆t,        (5)

then the dynamics of an ecologically asymmetric process may also be described in terms of the replicator equation (on graphs) of Ohtsuki and Nowak (2006): if φ := Σ_{s,t=1}^n p_s p_t ā_st, then

    ṗ_r = p_r ( Σ_{s=1}^n p_s (ā_rs + b_rs) − φ ),        (6)

where b_rs is a function of M̄, k, and the update rule. (For each of the four processes, the explicit expression for b_rs is provided in Methods.) The matrix (b_rs)_{r,s=1}^n accounts for local competition resulting from the population structure (see Ohtsuki and Nowak, 2006).
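Concretely, the averaged matrix of Theorem 1 can be computed directly from the adjacency matrix. A small sketch using the asymmetric Donation Game of Eq. (2) as the per-pair payoffs; the 4-cycle and all benefit/cost numbers below are illustrative assumptions, not from the paper:

```python
# Sketch: compute the spatially averaged matrix of Theorem 1,
# Mbar[s][t] = (1/(kN)) * sum_{i,j} w[i][j] * a^{ij}_{st},
# for the asymmetric Donation Game on a toy 4-cycle (illustrative values).

def averaged_matrix(w, focal_payoffs, k, n=2):
    """focal_payoffs(i, j)[s][t]: payoff to the player at vertex i using
    strategy s against the player at vertex j using strategy t."""
    N = len(w)
    M = [[0.0] * n for _ in range(n)]
    for i in range(N):
        for j in range(N):
            if w[i][j]:
                P = focal_payoffs(i, j)
                for s in range(n):
                    for t in range(n):
                        M[s][t] += P[s][t] / (k * N)
    return M

b = [3.0, 4.0, 2.0, 3.0]                                       # per-vertex benefits
c = [1.0, 0.5, 1.5, 1.0]                                       # per-vertex costs
w = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]   # 4-cycle, k = 2

def donation(i, j):  # focal entries of Eq. (2): rows C, D against columns C, D
    return [[b[j] - c[i], -c[i]], [b[j], 0.0]]

M = averaged_matrix(w, donation, k=2)
# Spatial additivity: M[0][0] = mean(b) - mean(c) = 2.0, independent of
# which (regular) configuration carries which values.
```

Because the Donation Game is spatially additive, every entry of the result reduces to a function of the mean benefit and mean cost alone, which previews the corollary below.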
In particular, the Ohtsuki-Nowak transform,

    (ā_rs)_{r,s=1}^n → (ā_rs + b_rs)_{r,s=1}^n,        (7)

which transforms the classical replicator equation into the replicator equation on graphs, also applies to evolutionary games with ecological asymmetry.

Even though interactions are now governed by a symmetric game, Theorem 1 states that, in general, the dynamics depend on the particular network configuration, (w_ij)_{i,j=1}^N; that is, the symmetric payoffs defined by M̄ still depend on the network structure, or, equivalently, on the distribution of ecological resources within the population. However, somewhat surprisingly, there is a broad class of games for which this dependence vanishes:

Definition 1. If a^{ij}_{rs} = x^i_{rs} + y^j_{rs} for each r and s, then M_ij is called a spatially additive payoff matrix. If M_ij is spatially additive for each i and j, then the game is said to be spatially additive.

A game is spatially additive if the payoff for an interaction between any two members of the population can be decomposed as a sum of two components, one from each player's location. Note that spatial additivity is different from the "equal gains from switching" property (Nowak and Sigmund, 1990) in that neither implies the other. However, spatial additivity is an analogue in the following sense: if two players at different locations use the same strategy against a common opponent, then the difference in these two players' payoffs for this interaction is independent of the location of the opponent. Interchanging "location" and "strategy," one obtains the equal gains from switching property. The importance of spatially additive games is due to the following corollary to Theorem 1:

Corollary 1. If M_ij is spatially additive for each i and j, then the expected change in the frequency of strategy A_r, E[∆p_r], is independent of (w_ij)_{i,j=1}^N for each r.
In particular, the dynamics of the process do not depend on the particular network configuration.

As an example, the asymmetric Donation Game is spatially additive and possesses the equal gains from switching property, which greatly simplifies the analysis of its dynamics:

Example 1 (Donation Game with ecological asymmetry). The asymmetric Donation Game with payoff matrices defined by Eq. (2) is spatially additive and satisfies

    M̄ =
              C                   D
    C   b̄ − c̄, b̄ − c̄       −c̄, b̄
    D   b̄, −c̄                0, 0        (8)

where b̄ = (1/N) Σ_{i=1}^N b_i and c̄ = (1/N) Σ_{i=1}^N c_i. Therefore, the dynamics of the asymmetric game are the same as those of its symmetric counterpart with benefit, b̄, and cost, c̄, regardless of network configuration or resource distribution. Under death-birth (resp. imitation) updating, this result implies that cooperation is expected to increase if and only if b̄/c̄ > k (resp. b̄/c̄ > k + 2), where k is the degree of the (regular) network (Ohtsuki et al., 2006). Fig. 1(A) compares the predicted result obtained from M̄ to simulation data for imitation updating when benefit and cost values are distributed according to Gaussian random variables.

Example 2 (Snowdrift Game with ecological asymmetry). In order to illustrate when Corollary 1 fails, we turn to cooperation in the Snowdrift Game (Hauert and Doebeli, 2004; Doebeli and Hauert, 2005). In this game, two drivers find themselves on either side of a snowdrift. If both cooperate in clearing the snowdrift, they share the cost, c, equally, and both receive the benefit of being able to pass, b. If one player cooperates and the other defects, both players receive b but the cooperator pays the full cost, c. If both players defect, each receives no benefit and pays no cost. In order to incorporate ecological asymmetry, we assume that the benefits are all the same since they are derived from being able to pass in the absence of a snowdrift.
On the other hand, the cost a player pays to clear the snowdrift may depend on his or her location: the snowdrift may appear on an incline, for example, in which case one player shovels with the gradient and the other player against it. Moreover, when two cooperators meet, they might clear unequal shares of the snowdrift. Thus, the payoff matrix for a player at location i against a player at location j should be of the form

    M_ij(α_ij) :=
              C                                  D
    C   b − α_ij c_i, b − α_ji c_j         b − c_i, b
    D   b, b − c_j                          0, 0        (9)

where 0 ≤ α_ij ≤ 1 and α_ij + α_ji = 1 (Du et al., 2009). Intuitively, when two cooperators face one another, they each begin to clear the snowdrift and stop once they meet; the quantity α_ij indicates the fraction of the snowdrift a cooperator at location i clears before meeting the cooperator at location j. A natural choice for α_ij is

    α_ij = c_j / (c_i + c_j),        (10)

which is the unique value that gives α_ij c_i = α_ji c_j for each i and j, ensuring that the game is fair, i.e. that the cooperator with the higher cost clears a smaller portion of the snowdrift than the one with the lower cost. Averaging the payoff to one cooperator against another over all possible locations gives

    (1/(kN)) Σ_{i,j=1}^N w_ij (b − α_ij c_i) = b − (1/(kN)) Σ_{i,j=1}^N w_ij c_i c_j / (c_i + c_j),        (11)

which is the upper-left entry of M̄. In contrast, the remaining three entries of M̄ do not depend on (w_ij)_{i,j=1}^N. Therefore, provided there are at least two locations with distinct cost values, the dynamics of an evolutionary process depend on the particular network configuration (Theorem 1). This network dependence is illustrated in Fig. 2.

Suppose now that we set α_ij ≡ 1/2 and that there are only two distinct cost values, c_1 and c_2, with c_1 < b < c_2 < 2b. Then a player who incurs a cost of c_1 finds it beneficial to cooperate against a defector, but a player who incurs a cost of c_2 would rather defect in this situation. Thus, based on the social dilemma implied by the ranking of the payoffs, a player who incurs a cost of c_1 for cooperating is always playing a Snowdrift Game, while a player who incurs a cost of c_2 is always playing a Prisoner's Dilemma. It follows that ecological asymmetry can account for multiple social dilemmas being played within a single population, even if the players all use the same set of strategies (C and D). The payoff matrices of this particular game are spatially additive, so, by Corollary 1, the dynamics do not depend on the network configuration. If q is the fraction of vertices with cost value c_1, then c̄ = q c_1 + (1 − q) c_2 is the average cost of cooperation for a particular location, and the dynamics are the same as those of the symmetric Snowdrift Game in which the cost of clearing a snowdrift is c̄ (see Fig. 1(B)). Fig. 3 demonstrates that this result does not extend to stronger selection strengths, so Theorem 1 is unique to weak selection.

Based on Theorem 1 and the relative rank of payoffs, the social dilemma defined by the asymmetric game (9) (for general α_ij) is a Prisoner's Dilemma if b < c̄ and a Snowdrift Game if b > c̄ when selection is weak. That is, microscopically, there is a mixture of Prisoner's Dilemmas and Snowdrift Games, but, macroscopically, the process behaves like just one of these social dilemmas. Consequently, although the dynamics of this evolutionary process may depend on the network configuration, the type of social dilemma implied by this game does not.

2.2. Genotypic asymmetry.
Figure 1. Average change in the frequency of cooperators, ∆p_C, as a function of the frequency of cooperators, p_C, in (A) an asymmetric Donation Game and (B) asymmetric Snowdrift Games. The update rules are (A) imitation and (B) death-birth, and each process has a selection intensity of β = 0.01. In both figures, the network is a random regular graph of size N = 500 and degree k = 3. In (A), benefits and costs of cooperation vary across vertices according to Gaussian distributions with mean 3.5, variance 1.0 for benefits and mean 0.5, variance 0.25 for costs. In (B), the benefit is b = 5.0 and the cost is either low, c_1 = 34/13, or high, c_2 = 70/13, which actually recovers the payoff ranking of the Prisoner's Dilemma because c_2 > b. The costs are the same for all vertices (c_1, blue and c_2, green) or mixed at equal proportions (red). (B) confirms that the average change in cooperators in the mixed Snowdrift Game/Prisoner's Dilemma (red) may be obtained by averaging these changes for the Snowdrift Game (blue) and the Prisoner's Dilemma (green). The small, systematic deviations between simulation data and analytical predictions (solid lines) are explained in Methods (where it is also shown that ∆p_C is linear in β for β ≪ 1).

Another form of asymmetry is based on the genotypes of the players rather than their locations. Each player in the population has one of ℓ possible genotypes, and these genotypes are enumerated by the set {1, ..., ℓ}. For an n-strategy game, the payoff matrix for a player whose genotype is u against a player whose genotype is v is

    M_uv :=
              A_1                         A_2                       ...    A_n
    A_1   a^{uv}_{11}, a^{vu}_{11}    a^{uv}_{12}, a^{vu}_{21}    ...   a^{uv}_{1n}, a^{vu}_{n1}
    A_2   a^{uv}_{21}, a^{vu}_{12}    a^{uv}_{22}, a^{vu}_{22}    ...   a^{uv}_{2n}, a^{vu}_{n2}
    ...
    A_n   a^{uv}_{n1}, a^{vu}_{1n}    a^{uv}_{n2}, a^{vu}_{2n}    ...   a^{uv}_{nn}, a^{vu}_{nn}        (12)

We explore genotypic asymmetry for cultural and genetic processes separately.

2.2.1. Cultural updating.
If genotypic asymmetry is incorporated into a cultural process, then the genotypes of the players never change; only the strategies of the players are updated. In a structured population, it follows that each player's genotype may be associated with his or her location, and this association is an invariant of the process. Thus, if u(i) denotes the genotype of the player at location i, then we may apply Theorem 1 to the matrices defined by M_ij = M_{u(i)u(j)} for each i and j. In this sense, genotypic asymmetry may be "reduced" to ecological asymmetry in evolutionary games with cultural update rules. Note that, unlike ecological asymmetry, genotypic asymmetry does not require a structured population. However, one can always think of a population as structured (even in the well-mixed case), and doing so allows one to make sense of the "locations" of the players and to apply Theorem 1 to cultural processes with genotypic asymmetry.

Figure 2. Average change in the frequency of cooperators, ∆p_C, as a function of the frequency of cooperators, p_C, for a spatially non-additive Snowdrift Game, Eq. (9), with selection intensity β = 0.01. The blue and green data are obtained using pairwise comparison updating and differ only in the configuration of the underlying network, which in both cases is a random regular graph of size N = 500 and degree k = 3. Every vertex has a benefit value of b = 4.0, and the cost values are split equally, with half of the vertices having c = 0.5 and the other half c = 5.5. The average payoff for mutual cooperation, Eq. (11), is 3.069 (blue) and 2.961 (green), which suggests that the former arrangement is more attractive for cooperation. The analytical predictions (solid lines) are obtained from Eq. (48) in Methods (and are linear in β for β ≪ 1).

Figure 3.
The Snowdrift Games of Fig. 1(B) at the stronger selection strengths (A) β = 0.1 and (B) β = 0.5. The benefit is b = 5.0. For all three cost configurations (c_1 only, c_2 only, and half c_1/half c_2, respectively), the simulation results differ from the prediction of pair approximation already for β = 0.1. For β = 0.5, (B) makes it clear that Theorem 1 no longer holds since the average change in cooperators in the game with mixed costs (red) differs from the average (grey) of these changes for the games with costs c_1 only (blue) and c_2 only (green). Thus, Theorem 1 is peculiar to weak selection.

Example 3 (Donation Game with genotypic asymmetry and cultural updating). In the Donation Game, a cooperator of genotype u donates b_u at a cost of c_u. Defectors contribute no benefit and pay no cost, irrespective of genotype. Consider imitation updating on a large, regular network of degree k, and let u(i) denote the genotype of the player at location i (henceforth "player i"). Suppose that player i is a cooperator, player j is a defector, and that player i imitates player j and becomes a defector. Despite this strategy change, the genotype of player i is still u(i), and the payoff matrix for player i against player j is still M_{u(i)u(j)}. On the other hand, consider the same process but with the genotypic asymmetry replaced by ecological asymmetry (and with M_ij := M_{u(i)u(j)} as the payoff matrix for the player at location i against the player at location j). Since the genotype of a player at a given location never changes in an imitation process, the process with ecological asymmetry is well-defined; that is, M_ij is independent of the dynamics of the process for each i and j. Therefore, we may instead study the evolution of cooperation in the process with ecological asymmetry, and we already know from Example 1 that, in the limit of weak selection, the frequency of cooperators in this Donation Game is expected to increase if and only if (k + 2) Σ_{i=1}^N c_{u(i)} < Σ_{i=1}^N b_{u(i)}.

In contrast, for genetic update rules, the asymmetry present due to differing genotypes can be removed completely if the genotypes of offspring are determined by genetic inheritance:

2.2.2. Genetic updating.
Genetic update rules are defined by the ability of players to propagate their offspring to other locations in the population by means of births and deaths. In other words, there is a reproductive step in which genetic information is passed from parent(s) to child. Both the death-birth and birth-death processes have genetic update rules, but reproduction need not be clonal for the update rule to be genetic. If the genotypes of offspring are determined by genetic inheritance, then the strategy and genotype at each location are updated simultaneously: if the offspring of a player whose genotype is u and whose strategy is A_r replaces a player whose genotype is v and whose strategy is A_s, then v is updated to u and A_s is updated to A_r synchronously. Therefore, rather than treating genotypes and strategies separately, we may consider them together in the form of pairs, (u, A_r), linking genotype and strategy. These pairs may be thought of as composite strategies of a larger evolutionary game whose payoff matrix, M̃, is defined by

    M̃_{(u,A_r),(v,A_s)} := a^{uv}_{rs}        (13)

for genotypes, u and v, and strategies, A_r and A_s. The map

    {M_uv}_{u,v=1}^ℓ → M̃        (14)

resolves a collection of n × n asymmetric payoff matrices with a single symmetric payoff matrix, M̃, of size ℓn × ℓn. This argument holds for any population structure, so evolutionary processes with genotypic asymmetry that are based on genetic update rules can be studied in any setting in which there is a theory of symmetric games. For example, we may use the results from pair approximation on large, regular networks to study the Donation Game with genotypic asymmetry and genetic updating:

Example 4 (Donation Game with genotypic asymmetry and genetic updating). As in Example 3, a cooperator of genotype u in the Donation Game donates b_u at a cost of c_u. Defectors contribute no benefit and pay no cost, irrespective of genotype.
For the death-birth and birth-death update rules, defectors may be modeled as cooperators whose benefit and cost are both 0. In the larger symmetric game defined by (14), it follows that there are $\ell + 1$ distinct composite strategies: $(1, C), (2, C), \dots, (\ell, C)$, and $D := (\ell + 1, C)$. For death-birth updating on a large, regular network of degree $k$, cooperators of genotype $u \in \{1, \dots, \ell\}$ are expected to increase if and only if
$$k\left( c_u - \sum_{v=1}^{\ell} c_v p_v \right) < b_u - \sum_{v=1}^{\ell} b_v p_v, \quad (15)$$
where, for each $v \in \{1, \dots, \ell\}$, $p_v$ denotes the frequency of cooperators of genotype $v$ (i.e. the frequency of strategy $(v, C)$ in the larger symmetric game). The terms $\sum_{v=1}^{\ell} b_v p_v$ and $\sum_{v=1}^{\ell} c_v p_v$ are the average population benefit and cost values, respectively. Therefore, the condition for the expected increase in cooperators of a particular genotype depends on the average level of cooperation within the population. Eq. (15) may be thought of as an analogue of the '$b/c > k$' rule of Ohtsuki et al. (2006), with $b$ replaced by the "benefit premium," $b_u - \sum_{v=1}^{\ell} b_v p_v$, and $c$ replaced by the "cost premium," $c_u - \sum_{v=1}^{\ell} c_v p_v$.

In the birth-death process, on the other hand, cooperators of genotype $u \in \{1, \dots, \ell\}$ are expected to increase if and only if
$$c_u < \sum_{v=1}^{\ell} c_v p_v. \quad (16)$$
Interestingly, this condition is independent of the benefit values and says that cooperators of genotype $u \in \{1, \dots, \ell\}$ increase in abundance if they incur, on average, smaller costs for cooperating than the other cooperators.

Eqs. (15) and (16) are obtained by noticing that the expected change in the frequency of cooperators of genotype $u$, $\mathbb{E}[\Delta p_u]$, is a positive multiple of $b_u - \sum_{v=1}^{\ell} b_v p_v - k\left( c_u - \sum_{v=1}^{\ell} c_v p_v \right)$ in the death-birth process and of $\sum_{v=1}^{\ell} c_v p_v - c_u$ in the birth-death process (see Eqs. (33) and (36) in Methods).
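Conditions (15) and (16) are inexpensive to evaluate numerically. The sketch below (ours; the benefits, costs, cooperator frequencies, and degree are made-up illustrative values) reports which cooperator genotypes are expected to increase under each genetic update rule:

```python
def db_increases(u, b, c, p, k):
    """Eq. (15): genotype-u cooperators increase under death-birth updating
    iff k*(c_u - <c>) < b_u - <b>, with <.> the p-weighted population average."""
    b_bar = sum(b[v] * p[v] for v in p)   # average population benefit
    c_bar = sum(c[v] * p[v] for v in p)   # average population cost
    return k * (c[u] - c_bar) < b[u] - b_bar

def bd_increases(u, c, p):
    """Eq. (16): under birth-death updating, genotype-u cooperators increase
    iff their cost is below the average cooperator cost (benefits are irrelevant)."""
    return c[u] < sum(c[v] * p[v] for v in p)

b = {1: 3.0, 2: 2.0}   # benefits by genotype (illustrative)
c = {1: 1.0, 2: 0.5}   # costs by genotype (illustrative)
p = {1: 0.2, 2: 0.3}   # frequencies of cooperators of each genotype (rest are defectors)
k = 3                  # degree of the regular network

for u in (1, 2):
    print(u, db_increases(u, b, c, p, k), bd_increases(u, c, p))
```

With these particular numbers, only the low-cost genotype is favored, and only under death-birth updating, illustrating how the two rules can disagree.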
In the birth-death process, it follows that the expected change in the frequency of cooperators of genotype $u$ is close to 0 if $p_u$ is close to 1; hence increases in cooperators who pay nonzero costs are necessarily transient.

3. Discussion
Asymmetric games naturally separate standard evolutionary update rules into cultural and genetic classes. This distinction is important because it captures biological differences that are not always apparent in models of evolution based on symmetric games. For example, consider a model player whose offspring replaces a focal player and a model player whose strategy is imitated by a focal player. For symmetric games, processes based on these two types of updates are mathematically identical; if asymmetry is present, then the fact that one update is genetic (replacement) and the other is cultural (imitation) becomes important. Thus, asymmetric games can highlight fundamental differences in evolutionary processes that are based on distinct update rules but happen to behave similarly when the underlying game is symmetric.

In order to incorporate into evolutionary games the asymmetries commonly studied in classical game theory, our focus has been on games with asymmetric payoffs. Games with asymmetric payoffs arise naturally from different forms of interaction heterogeneity. Dependence of payoffs on the environment is a reasonable assumption when considering ecological variation (Maciejewski and Puleo, 2014). Certain patches may provide resources or have drawbacks that influence a player's success when using a particular strategy (Kun and Dieckmann, 2013). Asymmetric interactions may also be the result of heterogeneity in the sizes or strengths of players (Maynard Smith and Parker, 1976; Hauser et al., 2014). Whether the source of asymmetry is the environment or the players themselves, our model effectively resolves a collection of microscopically asymmetric interactions into a macroscopically symmetric game in the limit of weak selection. Figs.
1 and 2 illustrate this result for three common update rules.

Similar forms of asymmetry have been studied previously in evolutionary game theory: Szolnoki and Szabó (2007) consider asymmetry appearing in the update rule, which results in "attractive" and "repulsive" players in the pairwise comparison process. For games with population structures defined by two graphs (an "interaction" graph and a "dispersal" graph), Ohtsuki et al. (2007a,b) show that the evolution of cooperation can be inhibited by asymmetry arising from differences between these two graphs. On the other hand, Pacheco et al. (2009) show that heterogeneous population structures can promote the evolution of cooperation by effectively transforming a collection of microscopic social dilemmas into a global coordination game. This result is reminiscent of our Theorem 1, which relates the microscopic interactions to the global behavior of a process. Such heterogeneous population structures can result in asymmetric interactions even if the underlying game is symmetric (Maciejewski et al., 2014). These models, although somewhat different from ours, demonstrate that asymmetry (in its many forms) has a remarkable effect on evolutionary dynamics.

Although genotypic asymmetry can always be reduced to a (larger) symmetric game under genetic update rules, this symmetric game can be of independent interest. For example, Eq. (16) shows that if cooperators vary in size or strength, then certain cooperators may increase in the Donation Game even under birth-death updating. In contrast, cooperation never increases in the absence of cooperator variation (Ohtsuki et al., 2006). Though defectors still eventually outcompete cooperators, the transient increase in cooperators suggests that other evolutionary processes with this form of asymmetry can behave in novel ways.
If both ecological and genotypic asymmetries are present, they can be handled separately: genotypic asymmetry is reduced to either (i) ecological asymmetry (if the update rule is cultural) or (ii) a symmetric game with more strategies (if the update rule is genetic). In either case, an evolutionary game with both ecological and genotypic asymmetries can be reduced to a game with ecological asymmetry only, and hence Theorem 1 applies. Our framework thus handles asymmetry resulting from varying baseline traits due to both environment and genotype, which could be referred to as phenotypic asymmetry.

The presence of ecological or genotypic asymmetry in an evolutionary process does not necessarily depend on the selection strength or update rule; these forms of asymmetry may be incorporated into many evolutionary processes. Theorem 1, which effectively reduces a game with ecological asymmetry to a particular symmetric game, is stated for four common update rules in evolutionary game theory. Fig. 3 demonstrates (using the asymmetric Snowdrift Game) that this theorem is specific to weak selection. That selection is weak is often a reasonable assumption when using evolutionary games to study populations of organisms with many traits. However, our study of the asymmetric Snowdrift Game for stronger selection strengths suggests that the behavior of asymmetric games is more complicated if selection is strong. Though more difficult to treat analytically, asymmetric games under strong selection are worthy of further investigation.

Asymmetry is omnipresent in nature, and any framework that is used to model evolution should take into account possible sources of asymmetry. We have formally introduced ecological and genotypic asymmetries into evolutionary game theory and have studied these asymmetries in the limit of weak selection. Asymmetry has a natural place in the Donation Game and the Snowdrift Game, but our results are applicable to any general $n$-strategy matrix game.
Our treatment of asymmetry highlights important differences between models of cultural and genetic evolution that are not apparent in the traditional setting of symmetric games. Ecological and genotypic asymmetries cover a wide variety of the background variation observed in biological populations, and, as such, our framework enhances the modeling capacity of evolutionary games.

4. Methods
For the two genetic processes (death-birth and birth-death) and the two cultural processes (imitation and pairwise comparison) we consider, we treat ecologically asymmetric games on a large, regular network using pair approximation (Matsuda et al., 1992; Ohtsuki et al., 2006). We assume here that the degree of the network, $k$, is at least 3. For $k = 2$, the network is just a cycle, and we do not treat this case here. The detailed steps of each calculation are omitted, but we include the main setups to allow for reconstruction of the reported results. We begin by recalling how these four processes are defined (see e.g. Ohtsuki and Nowak (2006)):

(DB) In the death-birth process, a player is selected uniformly at random from the population for death. A neighbor of the focal individual is then selected to reproduce with probability proportional to relative fitness, and the resulting offspring replaces the deceased player;

(BD) In the birth-death process, an individual is selected from the population for reproduction with probability proportional to relative fitness, and the offspring replaces a neighbor chosen uniformly at random;

(IM) In the imitation process, an individual is chosen uniformly at random to evaluate his or her strategy. This focal individual either adopts the strategy of a neighbor (with probability proportional to that neighbor's relative fitness) or retains his or her original strategy (with probability proportional to his or her own relative fitness);

(PC) In the pairwise comparison process, a focal individual is selected uniformly at random from the population to evaluate his or her strategy. A model individual is then chosen uniformly at random from the neighbors of the focal individual as a basis for comparison, and the focal player adopts the strategy of the model player with probability proportional to the model player's relative fitness.

4.1. Notation and general remarks.
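As a concrete reference point, the four update rules just recalled can be sketched as one-step transition kernels on a regular graph. This is a minimal illustration of the verbal definitions only (exponential fitness as in Eq. (19), a shared symmetric payoff matrix, and our own simplified bookkeeping), not the exact machinery of the calculations that follow:

```python
import random
import math

def fitness(pi, beta):
    """f_beta(pi) = exp(beta * pi), as in Eq. (19)."""
    return math.exp(beta * pi)

def total_payoff(i, strat, nbrs, A):
    """Accumulated payoff of player i against all of its neighbors."""
    return sum(A[strat[i]][strat[j]] for j in nbrs[i])

def step(strat, nbrs, A, beta, rule):
    """One update of the chosen rule; strat[i] is the strategy index at vertex i."""
    N = len(strat)
    f = [fitness(total_payoff(i, strat, nbrs, A), beta) for i in range(N)]
    if rule == "DB":    # random death; neighbors compete for the empty site
        i = random.randrange(N)
        j = random.choices(nbrs[i], weights=[f[m] for m in nbrs[i]])[0]
        strat[i] = strat[j]
    elif rule == "BD":  # fitness-proportional birth; offspring replaces a random neighbor
        j = random.choices(range(N), weights=f)[0]
        strat[random.choice(nbrs[j])] = strat[j]
    elif rule == "IM":  # focal either imitates a neighbor or keeps its own strategy
        i = random.randrange(N)
        pool = nbrs[i] + [i]
        j = random.choices(pool, weights=[f[m] for m in pool])[0]
        strat[i] = strat[j]
    elif rule == "PC":  # adopt the model's strategy with probability f_m/(f_m + f_f), Eq. (44)
        i = random.randrange(N)
        j = random.choice(nbrs[i])
        if random.random() < f[j] / (f[i] + f[j]):
            strat[i] = strat[j]
    return strat
```

Here the payoff matrix `A` is the same for all players; the asymmetric games of the text replace it by location-dependent matrices $M_{ij}$.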
Let $S = \{A_1, \dots, A_n\}$ be the set of pure strategies available to each player, and suppose that there are $N$ players on a regular network of size $N$ (i.e. every node is occupied). A strategy pair $(A_r, A_s)$ means a choice of a player using strategy $A_r$ who has as a neighbor a player using strategy $A_s$. Let
$$p_r := \text{frequency of players using strategy } A_r; \quad (17a)$$
$$p_{rs} := \text{frequency of strategy pairs } (A_r, A_s); \quad (17b)$$
$$q_{s|r} := \text{conditional probability of finding an } s \text{ player next to an } r \text{ player}. \quad (17c)$$
We will make repeated use of the following properties of these quantities:
$$\sum_{r=1}^{n} p_r = \sum_{s=1}^{n} q_{s|r} = 1; \quad (18a)$$
$$p_s\, q_{r|s} = p_{rs} = p_{sr} = p_r\, q_{s|r}. \quad (18b)$$
Strictly speaking, the equalities $p_s q_{r|s} = p_{rs} = p_{sr} = p_r q_{s|r}$ need not hold in general. As a pathological example, one may consider the network with two nodes and a single undirected link between these nodes. If the player on the first node uses $A_r$, the player on the second node uses $A_s$, and $r \neq s$, then $p_{rs} = 1$ but $p_s = 1/2$, which would force $q_{r|s} = 2$ if (18b) held. However, for large random regular graphs (Bollobás, 2001), condition (18b) holds approximately, and we will take this equality as given in what follows.

For $X \in \{p_r, p_{rs}, q_{s|r}\}_{r,s=1}^{n}$, let $\mathbb{E}[\Delta X]$ denote the expected change in $X$ in one step of the process. A pair $(A_r, i)$ denotes a player on vertex $i$ using strategy $A_r$. Given pairs $(A_r, i)$ and $(A_s, j)$, we denote by $\pi_{(A_s,j)}(A_r, i)$ the expected payoff to a player at vertex $j$ playing strategy $A_s$, given that they have as a neighbor an individual playing strategy $A_r$ at vertex $i$. If $\beta \geq 0$ is the intensity of selection, a payoff, $\pi$, is converted to fitness, $f_\beta(\pi)$, via
$$f_\beta(\pi) := \exp\{\beta\pi\}. \quad (19)$$
When defined in this way, fitness is always positive.

The main theorem we prove is the following: Theorem 1.
In the limit of weak selection, the dynamics of the ecologically asymmetric death-birth, birth-death, imitation, and pairwise comparison processes on a large, regular network may be approximated by the dynamics of a symmetric game with the same update rule and payoff matrix $\overline{M} := \frac{1}{kN}\sum_{i,j=1}^{N} w_{ij} M_{ij}$, i.e.
$$\overline{M} = \begin{array}{c|cccc} & A_1 & A_2 & \cdots & A_n \\ \hline A_1 & \overline{a}_{11},\ \overline{a}_{11} & \overline{a}_{12},\ \overline{a}_{21} & \cdots & \overline{a}_{1n},\ \overline{a}_{n1} \\ A_2 & \overline{a}_{21},\ \overline{a}_{12} & \overline{a}_{22},\ \overline{a}_{22} & \cdots & \overline{a}_{2n},\ \overline{a}_{n2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ A_n & \overline{a}_{n1},\ \overline{a}_{1n} & \overline{a}_{n2},\ \overline{a}_{2n} & \cdots & \overline{a}_{nn},\ \overline{a}_{nn} \end{array}, \quad (20)$$
where $\overline{a}_{st} := \frac{1}{kN}\sum_{i,j=1}^{N} w_{ij}\, a^{ij}_{st}$ for each $s$ and $t$.

Theorem 1 is established for each of these four update rules separately:

4.2. Death-birth updating.
If an individual is playing strategy $A_r$ at node $i$, an individual is playing $A_s$ at node $j$, and $w_{ij} \neq 0$, then
$$\pi_{(A_s,j)}(A_r,i) = a^{ji}_{sr} + \sum_{m \neq i} w_{jm} \sum_{t=1}^{n} a^{jm}_{st}\, q_{t|s}. \quad (21)$$
Suppose that an $(A_r, i)$ individual is selected for death. The probability that $(A_s, j)$ replaces this focal individual is proportional to $f_\beta\big(\pi_{(A_s,j)}(A_r,i)\big)$. For each $i$, let $(i_1, \dots, i_k)$ be an enumeration of the indices $j$ with $w_{ij} \neq 0$ (say, in increasing order), and let $s_{i_\ell}$ be the strategy used by the player at vertex $i_\ell$. If $(A_r, i)$ is chosen for death, then the probability that it is replaced by $(A_{s_{i_\ell}}, i_\ell)$ is
$$\frac{f_\beta\big(\pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big)}. \quad (22)$$
The Taylor expansion of this term for small $\beta$ is
$$\frac{f_\beta\big(\pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big)} = \frac{1}{k} + \beta\,\frac{k\,\pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i) - \sum_{j=1}^{k} \pi_{(A_{s_{i_j}},i_j)}(A_r,i)}{k^2} + O(\beta^2). \quad (23)$$
This expansion will be used frequently in the displays that follow.

4.2.1. Approximation of the expected change in strategy frequencies.
Let $\delta_{x,y}$ be the Kronecker delta (defined to be 1 if $x = y$ and 0 otherwise). The probability of choosing the player on vertex $i$ for death is $1/N$. The chance that this player is using strategy $A_h$ is $p_h$. Suppose that $(A_{s_{i_1}}, \dots, A_{s_{i_k}})$ is a $k$-tuple of strategies. If the focal player at vertex $i$ uses strategy $A_h$, then the probability that the player on vertex $i_\ell$ uses strategy $A_{s_{i_\ell}}$ for each $\ell = 1, \dots, k$ is $q_{s_{i_1}|h} \cdots q_{s_{i_k}|h}$. Thus,
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{N}\sum_{i=1}^{N}\sum_{h\neq r} p_h \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|h}\cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k} \delta_{s_{i_\ell},r}\, \frac{f_\beta\big(\pi_{(A_r,i_\ell)}(A_h,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_h,i)\big)} \left(\frac{1}{N}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_r \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|r}\cdots q_{s_{i_k}|r} \sum_{h\neq r}\sum_{\ell=1}^{k} \delta_{s_{i_\ell},h}\, \frac{f_\beta\big(\pi_{(A_h,i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big)} \left(-\frac{1}{N}\right) \end{aligned} \quad (24)$$
for each strategy, $A_r$. The Taylor expansion to first order yields
$$\mathbb{E}[\Delta p_r] \approx \beta\left(\frac{(k-1)\, p_r}{k^2 N^2}\right)\Big((A) - (B) - (C) + (D)\Big) + O(\beta^2), \quad (25)$$
where
$$(A) = \sum_{h\neq r} q_{h|r} \sum_{i=1}^{N}\sum_{\ell=1}^{k}\sum_{s_{i_\ell}=1}^{n} q_{s_{i_\ell}|r}\, \pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i); \quad (26a)$$
$$(B) = \sum_{h\neq r} q_{h|r} \sum_{i=1}^{N}\sum_{\ell=1}^{k}\sum_{s_{i_\ell}=1}^{n} q_{s_{i_\ell}|h}\, \pi_{(A_{s_{i_\ell}},i_\ell)}(A_h,i); \quad (26b)$$
$$(C) = \sum_{h\neq r} q_{h|r} \sum_{i=1}^{N}\sum_{\ell=1}^{k} \pi_{(A_h,i_\ell)}(A_r,i); \quad (26c)$$
$$(D) = \sum_{h\neq r} q_{h|r} \sum_{i=1}^{N}\sum_{\ell=1}^{k} \pi_{(A_r,i_\ell)}(A_h,i). \quad (26d)$$

4.2.2. Approximation of the expected change in pair frequencies. If $r \neq s$, then
$$\begin{aligned} \mathbb{E}[\Delta p_{rs}] &= \frac{1}{N}\sum_{i=1}^{N}\sum_{h\neq r,s} p_h \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|h}\cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k}\delta_{s_{i_\ell},r}\, \frac{f_\beta\big(\pi_{(A_r,i_\ell)}(A_h,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_h,i)\big)} \left(\frac{\sum_{\alpha=1}^{k}\delta_{s_{i_\alpha},s}}{kN}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N}\sum_{h\neq r,s} p_h \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|h}\cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k}\delta_{s_{i_\ell},s}\, \frac{f_\beta\big(\pi_{(A_s,i_\ell)}(A_h,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_h,i)\big)} \left(\frac{\sum_{\alpha=1}^{k}\delta_{s_{i_\alpha},r}}{kN}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_r \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|r}\cdots q_{s_{i_k}|r} \sum_{\ell=1}^{k}\delta_{s_{i_\ell},s}\, \frac{f_\beta\big(\pi_{(A_s,i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big)} \left(\frac{\sum_{\alpha=1}^{k}\big(\delta_{s_{i_\alpha},r} - \delta_{s_{i_\alpha},s}\big)}{kN}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_r \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|r}\cdots q_{s_{i_k}|r} \sum_{h\neq r,s}\sum_{\ell=1}^{k}\delta_{s_{i_\ell},h}\, \frac{f_\beta\big(\pi_{(A_h,i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big)} \left(\frac{-\sum_{\alpha=1}^{k}\delta_{s_{i_\alpha},s}}{kN}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_s \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|s}\cdots q_{s_{i_k}|s} \sum_{\ell=1}^{k}\delta_{s_{i_\ell},r}\, \frac{f_\beta\big(\pi_{(A_r,i_\ell)}(A_s,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_s,i)\big)} \left(\frac{\sum_{\alpha=1}^{k}\big(\delta_{s_{i_\alpha},s} - \delta_{s_{i_\alpha},r}\big)}{kN}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_s \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|s}\cdots q_{s_{i_k}|s} \sum_{h\neq r,s}\sum_{\ell=1}^{k}\delta_{s_{i_\ell},h}\, \frac{f_\beta\big(\pi_{(A_h,i_\ell)}(A_s,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_s,i)\big)} \left(\frac{-\sum_{\alpha=1}^{k}\delta_{s_{i_\alpha},r}}{kN}\right). \end{aligned} \quad (27)$$
On the other hand,
$$\begin{aligned} \mathbb{E}[\Delta p_{rr}] &= \frac{1}{N}\sum_{i=1}^{N}\sum_{h\neq r} p_h \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|h}\cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k}\delta_{s_{i_\ell},r}\, \frac{f_\beta\big(\pi_{(A_r,i_\ell)}(A_h,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_h,i)\big)} \left(\frac{\sum_{\alpha=1}^{k}\delta_{s_{i_\alpha},r}}{kN}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_r \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|r}\cdots q_{s_{i_k}|r} \sum_{h\neq r}\sum_{\ell=1}^{k}\delta_{s_{i_\ell},h}\, \frac{f_\beta\big(\pi_{(A_h,i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big)} \left(\frac{-\sum_{\alpha=1}^{k}\delta_{s_{i_\alpha},r}}{kN}\right). \end{aligned} \quad (28)$$
The zeroth-order Taylor expansion yields
$$\mathbb{E}[\Delta p_{rs}] \approx \frac{p_r}{kN}\left(-k\, q_{s|r} + (k-1)\sum_{h=1}^{n} q_{s|h}\, q_{h|r}\right) + O(\beta) \quad (29)$$
if $r \neq s$, and
$$\mathbb{E}[\Delta p_{rr}] \approx \frac{p_r}{kN}\left(1 - k\, q_{r|r} + (k-1)\sum_{h=1}^{n} q_{r|h}\, q_{h|r}\right) + O(\beta). \quad (30)$$
Therefore, $\mathbb{E}[\Delta p_r] = O(\beta)$ (by Eq. (25)) and $\mathbb{E}[\Delta p_{rs}] = O(1)$ (by Eqs. (29) and (30)) for each $r$ and $s$, which results in a separation of timescales between the strategy frequencies and the pair frequencies. In particular, the pair frequencies will reach their equilibrium much more quickly than the strategy frequencies will, so we can examine the expression for $\mathbb{E}[\Delta p_r]$ under the assumption that the pair frequencies have reached their equilibrium (Ohtsuki et al., 2006).

4.2.3. Weak-selection dynamics.
Assuming that each update takes place in one unit of time, we can approximate the dynamics by the deterministic systems $\dot p_r = \mathbb{E}[\Delta p_r]$ and $\dot p_{rs} = \mathbb{E}[\Delta p_{rs}]$ for each $r$ and $s$ (Ohtsuki et al., 2006; Ohtsuki and Nowak, 2006). Since $\beta$ is small, we see that the latter system will reach equilibrium much more quickly than the former. When the pair frequencies have reached equilibrium (i.e. $\mathbb{E}[\Delta p_{rs}] = 0$), we have
$$k\, q_{s|r} = \delta_{s,r} + (k-1)\sum_{h=1}^{n} q_{s|h}\, q_{h|r}. \quad (31)$$
Ohtsuki and Nowak (2006) show that this equation implies that
$$q_{r|s} = p_r + \frac{1}{k-1}\big(\delta_{s,r} - p_r\big). \quad (32)$$
Assuming the system has reached this local equilibrium, we then have
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\approx \beta\left(\frac{(k-1)\, p_r}{k^2 N^2}\right)\left((k+1)\sum_{i,j=1}^{N} w_{ij}\sum_{s=1}^{n} a^{ij}_{rs}\, q_{s|r} - k\sum_{i,j=1}^{N} w_{ij}\sum_{s,t=1}^{n} a^{ij}_{st}\, q_{t|s}\, q_{s|r} - \sum_{i,j=1}^{N} w_{ij}\sum_{s,t=1}^{n} a^{ij}_{st}\, q_{s|t}\, q_{t|r}\right) + O(\beta^2) \\ &= \beta\left(\frac{(k-1)\, p_r}{kN}\right)\left((k+1)\sum_{s=1}^{n} \overline{a}_{rs}\, q_{s|r} - k\sum_{s,t=1}^{n} \overline{a}_{st}\, q_{t|s}\, q_{s|r} - \sum_{s,t=1}^{n} \overline{a}_{st}\, q_{s|t}\, q_{t|r}\right) + O(\beta^2) \\ &= \beta\left(\frac{(k-2)\, p_r}{k(k-1)N}\right)\left(-(k-2)(k+1)\sum_{s,t=1}^{n} \overline{a}_{st}\, p_s\, p_t + \big(k^2-k-1\big)\sum_{s=1}^{n} \overline{a}_{rs}\, p_s - \sum_{s=1}^{n} \overline{a}_{sr}\, p_s - (k+1)\sum_{s=1}^{n} \overline{a}_{ss}\, p_s + (k+1)\,\overline{a}_{rr}\right) + O(\beta^2) \end{aligned} \quad (33)$$
as long as $\beta$ is small. Therefore, if we choose an appropriate time scale and set
$$\dot p_r = \frac{\mathbb{E}[\Delta p_r]}{\Delta t}; \quad (34a)$$
$$\overline{b}_{rs} = \frac{(k+1)\,\overline{a}_{rr} + \overline{a}_{rs} - \overline{a}_{sr} - (k+1)\,\overline{a}_{ss}}{(k+1)(k-2)}; \quad (34b)$$
$$\overline{\phi} = \sum_{s,t=1}^{n} p_s\, p_t\, \overline{a}_{st}, \quad (34c)$$
then $\dot p_r = p_r\big(\sum_{s=1}^{n} p_s\,(\overline{a}_{rs} + \overline{b}_{rs}) - \overline{\phi}\big)$, recovering the replicator equation of Ohtsuki and Nowak (2006). It follows that the dynamics depend only on $\overline{M}$, proving Theorem 1 for death-birth updating.

4.3. Birth-death updating.
In the birth-death process, an individual is selected for reproduction with probability proportional to relative fitness. The offspring of the selected player then replaces a random neighbor. Rather than trying to approximate the total fitness of the population, we will simply denote this value by $f_{\mathrm{pop}}$. Since this value is positive, it does not influence the sign of the expectation values, and as such we will largely ignore it. We have
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{f_{\mathrm{pop}}}\sum_{i=1}^{N} p_r \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|r}\cdots q_{s_{i_k}|r}\, f_\beta\left(\sum_{\ell=1}^{k} a^{i i_\ell}_{r s_{i_\ell}}\right) \left(\sum_{h\neq r}\frac{\sum_{j=1}^{k}\delta_{s_{i_j},h}}{k}\right)\left(\frac{1}{N}\right) \\ &\quad + \frac{1}{f_{\mathrm{pop}}}\sum_{h\neq r}\sum_{i=1}^{N} p_h \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|h}\cdots q_{s_{i_k}|h}\, f_\beta\left(\sum_{\ell=1}^{k} a^{i i_\ell}_{h s_{i_\ell}}\right)\left(\frac{\sum_{j=1}^{k}\delta_{s_{i_j},r}}{k}\right)\left(-\frac{1}{N}\right). \end{aligned} \quad (35)$$
The local equilibrium conditions for birth-death updating turn out to be the same as those for death-birth updating (Eq. (32)). These local equilibrium conditions do not take selection into account as long as $\beta$ is close to 0; they are essentially based on a neutral process in which at most one strategy is updated at each time step. Therefore, it is perhaps not surprising that these conditions are the same for different processes based on one strategy update in each time step.

In the following expressions, by $x \propto y$ we mean that $x$ is proportional to $y$ with a positive constant of proportionality. Letting $\beta \to 0$,
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\propto \beta\, p_r \left(k\sum_{i,j=1}^{N} w_{ij}\sum_{s=1}^{n} a^{ij}_{rs}\, q_{s|r} - (k-1)\sum_{i,j=1}^{N} w_{ij}\sum_{s,t=1}^{n} a^{ij}_{st}\, q_{t|s}\, q_{s|r} - \sum_{i,j=1}^{N} w_{ij}\sum_{s=1}^{n} a^{ij}_{sr}\, q_{s|r}\right) + O(\beta^2) \\ &\propto \beta\, p_r \left(k\sum_{s=1}^{n} \overline{a}_{rs}\, q_{s|r} - (k-1)\sum_{s,t=1}^{n} \overline{a}_{st}\, q_{t|s}\, q_{s|r} - \sum_{s=1}^{n} \overline{a}_{sr}\, q_{s|r}\right) + O(\beta^2) \\ &\propto \beta\, p_r \left(-(k-2)\sum_{s,t=1}^{n} \overline{a}_{st}\, p_s\, p_t + (k-1)\sum_{s=1}^{n} \overline{a}_{rs}\, p_s - \sum_{s=1}^{n} \overline{a}_{sr}\, p_s - \sum_{s=1}^{n} \overline{a}_{ss}\, p_s + \overline{a}_{rr}\right) + O(\beta^2). \end{aligned} \quad (36)$$
Just as we saw with the death-birth process, after choosing an appropriate time scale and letting
$$\overline{b}_{rs} = \frac{\overline{a}_{rr} + \overline{a}_{rs} - \overline{a}_{sr} - \overline{a}_{ss}}{k-2}; \quad (37a)$$
$$\overline{\phi} = \sum_{s,t=1}^{n} p_s\, p_t\, \overline{a}_{st}, \quad (37b)$$
we have $\dot p_r = p_r\big(\sum_{s=1}^{n} p_s\,(\overline{a}_{rs} + \overline{b}_{rs}) - \overline{\phi}\big)$, proving Theorem 1 for birth-death updating.

4.4. Imitation updating.
In the imitation process, an individual is selected uniformly at random from the population to evaluate his or her strategy. The chosen player then compares his or her fitness with the fitness of each neighbor and either adopts a new strategy or retains the current one (with probability proportional to relative fitness). Suppose that an individual at vertex $i$, playing $A_r$, is selected to evaluate his or her strategy. If $s \neq r$, then the probability that he or she adopts strategy $A_s$ is
$$\frac{\sum_{\ell=1}^{k}\delta_{s_{i_\ell},s}\, f_\beta\big(\pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big) + f_\beta\big(\sum_{j=1}^{k} a^{i i_j}_{r s_{i_j}}\big)}, \quad (38)$$
and the probability that his or her strategy remains unchanged is
$$\frac{\sum_{\ell=1}^{k}\delta_{s_{i_\ell},r}\, f_\beta\big(\pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i)\big) + f_\beta\big(\sum_{j=1}^{k} a^{i i_j}_{r s_{i_j}}\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big) + f_\beta\big(\sum_{j=1}^{k} a^{i i_j}_{r s_{i_j}}\big)}. \quad (39)$$
We let $\pi_{(A_s,j)}(A_r,i)$ be the same as it was for death-birth updating. For small $\beta$,
$$\frac{f_\beta\big(\pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big) + f_\beta\big(\sum_{j=1}^{k} a^{i i_j}_{r s_{i_j}}\big)} \approx \frac{1}{k+1} + \beta\,\frac{(k+1)\,\pi_{(A_{s_{i_\ell}},i_\ell)}(A_r,i) - \sum_{j=1}^{k}\pi_{(A_{s_{i_j}},i_j)}(A_r,i) - \sum_{j=1}^{k} a^{i i_j}_{r s_{i_j}}}{(k+1)^2} + O(\beta^2). \quad (40)$$

4.4.1. Approximation of the expected change in strategy frequencies.
For $r \in \{1, \dots, n\}$,
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{N}\sum_{i=1}^{N}\sum_{h\neq r} p_h \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|h}\cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k}\delta_{s_{i_\ell},r}\, \frac{f_\beta\big(\pi_{(A_r,i_\ell)}(A_h,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_h,i)\big) + f_\beta\big(\sum_{j=1}^{k} a^{i i_j}_{h s_{i_j}}\big)}\left(\frac{1}{N}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_r \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|r}\cdots q_{s_{i_k}|r} \sum_{h\neq r}\sum_{\ell=1}^{k}\delta_{s_{i_\ell},h}\, \frac{f_\beta\big(\pi_{(A_h,i_\ell)}(A_r,i)\big)}{\sum_{j=1}^{k} f_\beta\big(\pi_{(A_{s_{i_j}},i_j)}(A_r,i)\big) + f_\beta\big(\sum_{j=1}^{k} a^{i i_j}_{r s_{i_j}}\big)}\left(-\frac{1}{N}\right). \end{aligned} \quad (41)$$
The local equilibrium conditions are exactly the same as they were for the death-birth process. Assuming that the system has reached this local equilibrium, the separation-of-timescales argument we used in §4.2 yields
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\approx \beta\left(\frac{p_r}{(k+1)^2 N^2}\right)\left(\big(k^2+2k-1\big)\sum_{i,j=1}^{N} w_{ij}\sum_{s=1}^{n} a^{ij}_{rs}\, q_{s|r} - \big(k^2+k-2\big)\sum_{i,j=1}^{N} w_{ij}\sum_{s,t=1}^{n} a^{ij}_{st}\, q_{t|s}\, q_{s|r} - (k-1)\sum_{i,j=1}^{N} w_{ij}\sum_{s,t=1}^{n} a^{ij}_{ts}\, q_{t|s}\, q_{s|r} - 2\sum_{i,j=1}^{N} w_{ij}\sum_{s=1}^{n} a^{ij}_{sr}\, q_{s|r}\right) + O(\beta^2) \\ &= \beta\left(\frac{k\, p_r}{(k+1)^2 N}\right)\left(\big(k^2+2k-1\big)\sum_{s=1}^{n} \overline{a}_{rs}\, q_{s|r} - \big(k^2+k-2\big)\sum_{s,t=1}^{n} \overline{a}_{st}\, q_{t|s}\, q_{s|r} - (k-1)\sum_{s,t=1}^{n} \overline{a}_{ts}\, q_{t|s}\, q_{s|r} - 2\sum_{s=1}^{n} \overline{a}_{sr}\, q_{s|r}\right) + O(\beta^2) \\ &= \beta\left(\frac{k\,(k-2)\, p_r}{(k-1)(k+1)^2 N}\right)\left(-(k-2)(k+3)\sum_{s,t=1}^{n} \overline{a}_{st}\, p_s\, p_t + \big(k^2+k-3\big)\sum_{s=1}^{n} \overline{a}_{rs}\, p_s - 3\sum_{s=1}^{n} \overline{a}_{sr}\, p_s - (k+3)\sum_{s=1}^{n} \overline{a}_{ss}\, p_s + (k+3)\,\overline{a}_{rr}\right) + O(\beta^2). \end{aligned} \quad (42)$$
With $\overline{b}_{rs} = \frac{(k+3)\,\overline{a}_{rr} + 3\,\overline{a}_{rs} - 3\,\overline{a}_{sr} - (k+3)\,\overline{a}_{ss}}{(k-2)(k+3)}$ and $\overline{\phi} = \sum_{s,t=1}^{n} p_s\, p_t\, \overline{a}_{st}$, we have
$$\dot p_r = p_r\left(\sum_{s=1}^{n} p_s\,\big(\overline{a}_{rs} + \overline{b}_{rs}\big) - \overline{\phi}\right), \quad (43)$$
which establishes Theorem 1 for imitation updating.

4.5. Pairwise comparison updating.
In the pairwise comparison process, a focal individual is selected uniformly at random from the population. A model individual is then chosen uniformly at random from the neighbors of the focal individual. If $\pi_f$ and $\pi_m$ denote the payoffs to the focal and model individuals, respectively, then the focal player will adopt the strategy of the model player with probability
$$\frac{1}{1 + e^{\beta(\pi_f - \pi_m)}} = \frac{f_\beta(\pi_m)}{f_\beta(\pi_m) + f_\beta(\pi_f)}, \quad (44)$$
where $\beta \geq 0$. In addition to $\pi_{(A_s,j)}(A_r,i)$ (defined in the same way as for death-birth updating), we let
$$\pi_{(A_s,i)} := \sum_{j=1}^{k} a^{i i_j}_{s s_{i_j}} \quad (45)$$
if $(A_s, i)$ has as a neighborhood $(A_{s_{i_1}}, \dots, A_{s_{i_k}})$. With this notation in place, we have
$$\begin{aligned} \mathbb{E}[\Delta p_r] &= \frac{1}{N}\sum_{i=1}^{N}\sum_{h\neq r} p_h \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|h}\cdots q_{s_{i_k}|h} \sum_{\ell=1}^{k}\left(\frac{1}{k}\right)\delta_{s_{i_\ell},r}\left(\frac{f_\beta\big(\pi_{(A_r,i_\ell)}(A_h,i)\big)}{f_\beta\big(\pi_{(A_r,i_\ell)}(A_h,i)\big) + f_\beta\big(\pi_{(A_h,i)}\big)}\right)\left(\frac{1}{N}\right) \\ &\quad + \frac{1}{N}\sum_{i=1}^{N} p_r \sum_{s_{i_1},\dots,s_{i_k}=1}^{n} q_{s_{i_1}|r}\cdots q_{s_{i_k}|r} \sum_{h\neq r}\sum_{\ell=1}^{k}\left(\frac{1}{k}\right)\delta_{s_{i_\ell},h}\left(\frac{f_\beta\big(\pi_{(A_h,i_\ell)}(A_r,i)\big)}{f_\beta\big(\pi_{(A_h,i_\ell)}(A_r,i)\big) + f_\beta\big(\pi_{(A_r,i)}\big)}\right)\left(-\frac{1}{N}\right). \end{aligned} \quad (46)$$
As $\beta \to 0$, we have
$$\frac{f_\beta(x)}{f_\beta(x) + f_\beta(y)} \approx \frac{1}{2} + \beta\left(\frac{x - y}{4}\right) + O(\beta^2). \quad (47)$$
Consequently, in the limit of weak selection,
$$\begin{aligned} \mathbb{E}[\Delta p_r] &\approx \beta\left(\frac{p_r}{4kN^2}\right)\left(k\sum_{i,j=1}^{N} w_{ij}\sum_{s=1}^{n} a^{ij}_{rs}\, q_{s|r} - (k-1)\sum_{i,j=1}^{N} w_{ij}\sum_{s,t=1}^{n} a^{ij}_{st}\, q_{t|s}\, q_{s|r} - \sum_{i,j=1}^{N} w_{ij}\sum_{s=1}^{n} a^{ij}_{sr}\, q_{s|r}\right) + O(\beta^2) \\ &= \beta\left(\frac{p_r}{4N}\right)\left(k\sum_{s=1}^{n} \overline{a}_{rs}\, q_{s|r} - (k-1)\sum_{s,t=1}^{n} \overline{a}_{st}\, q_{t|s}\, q_{s|r} - \sum_{s=1}^{n} \overline{a}_{sr}\, q_{s|r}\right) + O(\beta^2) \\ &= \beta\left(\frac{(k-2)\, p_r}{4(k-1)N}\right)\left(-(k-2)\sum_{s,t=1}^{n} \overline{a}_{st}\, p_s\, p_t + (k-1)\sum_{s=1}^{n} \overline{a}_{rs}\, p_s - \sum_{s=1}^{n} \overline{a}_{sr}\, p_s - \sum_{s=1}^{n} \overline{a}_{ss}\, p_s + \overline{a}_{rr}\right) + O(\beta^2). \end{aligned} \quad (48)$$
The local equilibrium conditions are exactly the same as they were for the other processes, but in this case they are not needed to arrive at this last expression for $\mathbb{E}[\Delta p_r]$. With $\overline{b}_{rs} = \frac{\overline{a}_{rr} + \overline{a}_{rs} - \overline{a}_{sr} - \overline{a}_{ss}}{k-2}$ and $\overline{\phi} = \sum_{s,t=1}^{n} p_s\, p_t\, \overline{a}_{st}$, we have $\dot p_r = p_r\big(\sum_{s=1}^{n} p_s\,(\overline{a}_{rs} + \overline{b}_{rs}) - \overline{\phi}\big)$. It follows that the dynamics of the pairwise comparison process depend only on $\overline{M}$, which completes the proof of Theorem 1.

Finally, we show that the dynamics of each process are independent of the particular network configuration if the asymmetric game is spatially additive:

Definition 1. If $a^{ij}_{rs} = x^{i}_{rs} + y^{j}_{rs}$ for each $r$ and $s$, then $M_{ij}$ is called a spatially additive payoff matrix. If $M_{ij}$ is spatially additive for each $i$ and $j$, then the game is said to be spatially additive.

Corollary 1. If $M_{ij}$ is spatially additive for each $i$ and $j$, then the expected change in the frequency of strategy $A_r$, $\mathbb{E}[\Delta p_r]$, is independent of $(w_{ij})_{i,j=1}^{N}$ for each $r$. In particular, the dynamics of the process do not depend on the particular network configuration.

Proof. If $a^{ij}_{rs} = x^{i}_{rs} + y^{j}_{rs}$ for each $r, s, i, j$, then
$$\overline{a}_{st} = \frac{1}{kN}\sum_{i,j=1}^{N} w_{ij}\, a^{ij}_{st} = \frac{1}{N}\sum_{i=1}^{N} x^{i}_{st} + \frac{1}{N}\sum_{j=1}^{N} y^{j}_{st}, \quad (49)$$
which is independent of $(w_{ij})_{i,j=1}^{N}$. The corollary then follows directly from Theorem 1. □

4.6. Computer simulations.
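Corollary 1 lends itself to a quick numerical check: with spatially additive payoffs $a^{ij}_{st} = x^i_{st} + y^j_{st}$, the averaged entries $\overline{a}_{st}$ of Theorem 1 agree across different $k$-regular graphs on the same players. The sketch below is our own illustration (the two cubic graphs are arbitrary choices), comparing two 3-regular networks on $N = 8$ vertices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, n = 8, 3, 2
x = rng.random((N, n, n))        # x^i_{st}
y = rng.random((N, n, n))        # y^j_{st}
a = x[:, None] + y[None, :]      # spatially additive: a^{ij}_{st} = x^i_{st} + y^j_{st}

def adjacency(edges):
    W = np.zeros((N, N))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0
    return W

def avg_matrix(W):
    # Theorem 1: a_bar_{st} = (1/(kN)) * sum_{i,j} w_ij * a^{ij}_{st}
    return np.einsum('ij,ijst->st', W, a) / (k * N)

# two different 3-regular graphs on the same 8 players
ring_chords = adjacency([(i, (i + 1) % N) for i in range(N)]
                        + [(i, i + 4) for i in range(4)])
two_cliques = adjacency([(i, j) for i in range(4) for j in range(i + 1, 4)]
                        + [(i, j) for i in range(4, 8) for j in range(i + 1, 8)])

print(np.allclose(avg_matrix(ring_chords), avg_matrix(two_cliques)))   # prints True
```

Indeed, since each row of a regular adjacency matrix sums to $k$, $\overline{a}_{st}$ collapses to $\frac{1}{N}\sum_i x^i_{st} + \frac{1}{N}\sum_j y^j_{st}$, exactly as in Eq. (49).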
In each simulation, a random $k$-regular network (with $k = 3$) of $N = 500$ vertices is generated. The selection intensity is $\beta = 0.01$ for Figs. 1 and 2 and takes larger values for Fig. 3. The initial frequency of cooperators, $d$, is chosen uniformly at random from the interval $[0, 1]$, and each player is initially assigned strategy $C$ (resp. $D$) with probability $d$ (resp. $1 - d$). The update rule is applied until either $C$ or $D$ fixates. (The absorption time depends on a number of factors, including the game, the selection strength, and the initial configuration of the population.) Let $p_C(t)$ denote the frequency of cooperators at time $t$; $p_C(0)$ is just the initial frequency of cooperators. The frequency $p_C(t+1)$ is obtained from $p_C(t)$ by adding to it the change in the frequency of cooperators over the next $N$ ($= 500$) updates. For each $t$, the quantity $p_C(t+1) - p_C(t)$ is associated with $p_C(t)$. Once $p_C \in \{0, 1\}$, a new initial configuration of cooperators is chosen and the process is repeated. After each possible value of $p_C$ has at least 10 associated data points (changes in cooperator frequency), these changes are averaged, and this resulting quantity, $\Delta p_C$, is paired with the corresponding value of $p_C$. These pairs are then plotted to obtain Figs. 1, 2, and 3. The results from pair approximation apply to the expected change over one update, but we can easily get a predicted result over $N$ updates (i.e. one Monte Carlo step) by scaling the expressions for $\mathbb{E}[\Delta p_C]$ by a factor of $N$.

Small deviations from the expected results are seen in each of the figures, and these deviations are due to the effects of a finite selection parameter ($\beta$) and the finiteness of the set of possible values of $p_C$ ($\Delta p_C$ is a multiple of $1/N$). As an example of how these properties can give rise to small deviations, consider the Donation Game under imitation updating in Fig. 1(A). Eq. (42) predicts that $\mathbb{E}[\Delta p_C]$ is always positive, yet we observe in Fig. 1(A) that this change becomes negative as $p_C \to 0, 1$. If $p_C = (N-1)/N$ and $\beta > 0$, let $f^{(j)}_\beta$ denote the fitness of the player at location $j$. Thus, with just a single defector (at location $i$) in a population of cooperators, we have $f^{(i)}_\beta \geq f^{(j)}_\beta$ for each $j \neq i$, with equality if and only if $\beta = 0$. The expected change in the frequency of cooperators in the next time step is
$$\mathbb{E}[\Delta p_C] = \left(\frac{1}{N}\right)\left(\frac{1}{N}\right)\left(1 - \frac{f^{(i)}_\beta}{f^{(i)}_\beta + \sum_{\{j : w_{ij}=1\}} f^{(j)}_\beta}\right) - \left(\frac{1}{N}\right)\sum_{\{j : w_{ij}=1\}}\left(\frac{1}{N}\right)\frac{f^{(i)}_\beta}{f^{(j)}_\beta + \sum_{\{l : w_{jl}=1\}} f^{(l)}_\beta}. \quad (50)$$
The first (resp. second) summation runs over all of the neighbors of $i$ (resp. $j$). For each $j \neq i$,
$$\frac{f^{(i)}_\beta}{f^{(i)}_\beta + \sum_{\{j : w_{ij}=1\}} f^{(j)}_\beta} \geq \frac{1}{k+1}; \quad (51a)$$
$$\frac{f^{(i)}_\beta}{f^{(j)}_\beta + \sum_{\{l : w_{jl}=1\}} f^{(l)}_\beta} \geq \frac{1}{k+1}, \quad (51b)$$
both with equality if and only if $\beta = 0$. Therefore, we see that
$$\mathbb{E}[\Delta p_C] \leq \left(\frac{1}{N}\right)\left(\frac{1}{N}\right)\left(1 - \frac{1}{k+1}\right) - \left(\frac{1}{N}\right)\left(\frac{k}{N}\right)\left(\frac{1}{k+1}\right) = 0, \quad (52)$$
with equality if and only if $\beta = 0$. The same argument explains the negative average changes as $p_C \to 0$. Since $p_C$ can take on only finitely many values for a given population size, similar arguments explain the small discrepancies between the actual and expected results for intermediate values of $p_C$ (see Fig. 1).

Acknowledgments

A. M. thanks Farhan Abedin and György Szabó for helpful discussions. A. M. and C. H. acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC), and C. H. from the Foundational Questions in Evolutionary Biology Fund (FQEB), grant RFP-12-10.
References
H. Ohtsuki, C. Hauert, E. Lieberman, and M. A. Nowak. A simple rule for the evolution of cooperation on graphs and social networks. Nature, 441(7092):502–505, May 2006. doi: 10.1038/nature04605.
M. A. Nowak. Five rules for the evolution of cooperation. Science, 314(5805):1560–1563, Dec 2006a. doi: 10.1126/science.1133755.
P. D. Taylor, T. Day, and G. Wild. Evolution of cooperation in a finite homogeneous graph. Nature, 447(7143):469–472, May 2007. doi: 10.1038/nature05784.
J. Maynard Smith. Evolution and the Theory of Games. Cambridge University Press, 1982. doi: 10.1017/cbo9780511806292.
J. Hofbauer and K. Sigmund. Evolutionary Games and Population Dynamics. Cambridge University Press, 1998. doi: 10.1017/cbo9781139173179.
R. M. Dawes. Social dilemmas. Annual Review of Psychology, 31(1):169–193, Jan 1980. doi: 10.1146/annurev.ps.31.020180.001125.
C. Hauert, F. Michor, M. A. Nowak, and M. Doebeli. Synergy and discounting of cooperation in social dilemmas. Journal of Theoretical Biology, 239(2):195–202, Mar 2006. doi: 10.1016/j.jtbi.2005.08.040.
C. Hauert and M. Doebeli. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature, 428(6983):643–646, Apr 2004. doi: 10.1038/nature02360.
M. Doebeli and C. Hauert. Models of cooperation based on the prisoner's dilemma and the snowdrift game. Ecology Letters, 8(7):748–766, Jul 2005. doi: 10.1111/j.1461-0248.2005.00773.x.
B. Voelkl. The 'hawk-dove' game and the speed of the evolutionary process in small heterogeneous populations. Games, 1(2):103–116, May 2010. doi: 10.3390/g1020103.
R. Dawkins. The Selfish Gene. Oxford University Press, 1976.
P. Schuster and K. Sigmund. Coyness, philandering and stable strategies. Animal Behaviour, 29(1):186–192, Feb 1981. doi: 10.1016/s0003-3472(81)80165-0.
J. Maynard Smith and J. Hofbauer. The "battle of the sexes": A genetic model with limit cycle behavior. Theoretical Population Biology, 32(1):1–14, Aug 1987. doi: 10.1016/0040-5809(87)90035-9.
J. Hofbauer. Evolutionary dynamics for bimatrix games: A Hamiltonian system? Journal of Mathematical Biology, 34(5-6):675–688, May 1996. doi: 10.1007/bf02409754.
L. A. Dugatkin. Winner and loser effects and the structure of dominance hierarchies. Behavioral Ecology, 8(6):583–587, 1997. doi: 10.1093/beheco/8.6.583.
W. G. Wright and A. L. Shanks. Previous experience determines territorial behavior in an archaeogastropod limpet.
Journal of Experimental Marine Biology and Ecology , 166(2):217–229, Mar 1993. doi: 10.1016/0022-0981(93)90220-i.A. L. Shanks. Previous agonistic experience determines both foraging behavior and territoriality in the limpetlottia gigantea (sowerby).
Behavioral Ecology , 13(4):467–471, Jul 2002. doi: 10.1093/beheco/13.4.467.R. Selten. A note on evolutionarily stable strategies in asymmetric animal conflicts.
Journal of TheoreticalBiology , 84(1):93–101, May 1980. doi: 10.1016/s0022-5193(80)81038-1.P. Hammerstein. The role of asymmetries in animal contests.
Animal Behaviour , 29(1):193–205, Feb 1981.doi: 10.1016/s0003-3472(81)80166-2.H. Ohtsuki. Stochastic evolutionary dynamics of bimatrix games.
Journal of Theoretical Biology , 264(1):136–142, May 2010. doi: 10.1016/j.jtbi.2010.01.016.J. A. R. Marshall. The donation game with roles played between relatives.
Journal of Theoretical Biology ,260(3):386–391, Oct 2009. doi: 10.1016/j.jtbi.2009.07.008.J. Weiner. Asymmetric competition in plant populations.
Trends in Ecology & Evolution , 5(11):360–364,Nov 1990. doi: 10.1016/0169-5347(90)90095-u.R. P. Freckleton and A. R. Watkinson. Asymmetric competition between plant species.
Functional Ecology ,15(5):615–623, Oct 2001. doi: 10.1046/j.0269-8463.2001.00558.x. . Doebeli and I. Ispolatov. Symmetric competition as a general model for single-species adaptive dynamics. Journal of Mathematical Biology , 67(2):169–184, May 2012. doi: 10.1007/s00285-012-0547-4.K. Sigmund.
The calculus of selfishness . Princeton University Press, 2010.M. Bergman, M. Olofsson, and C. Wiklund. Contest outcome in a territorial butterfly: the role of motivation.
Proceedings of the Royal Society B: Biological Sciences , 277(1696):3027–3033, May 2010. doi: 10.1098/rspb.2010.0646.J. Hofbauer and K. Sigmund. Evolutionary game dynamics.
Bulletin of the American Mathematical Society ,40(04):479–520, Jul 2003. doi: 10.1090/s0273-0979-03-00988-1.D. Fudenberg and J. Tirole.
Game Theory . The MIT Press, 1991.A. E. Magurran and M. A. Nowak. Another battle of the sexes: The consequences of sexual asymmetryin mating costs and predation risk in the guppy, poecilia reticulata.
Proceedings of the Royal Society B:Biological Sciences , 246(1315):31–38, Oct 1991. doi: 10.1098/rspb.1991.0121.M. Mesterton-Gibbons. Ecotypic variation in the asymmetric hawk-dove game: When is bourgeois anevolutionarily stable strategy?
Evolutionary Ecology , 6(3):198–222, May 1992. doi: 10.1007/bf02214162.L. A. Dugatkin.
Game Theory and Animal Behavior . Oxford University Press, 2000.P. D. Taylor and L. B. Jonker. Evolutionary stable strategies and game dynamics.
Mathematical Biosciences ,40(1-2):145–156, jul 1978. doi: 10.1016/0025-5564(78)90077-9.M. A. Nowak, A. Sasaki, C. Taylor, and D. Fudenberg. Emergence of cooperation and evolutionary stabilityin finite populations.
Nature , 428(6983):646–650, Apr 2004. doi: 10.1038/nature02414.C. Taylor, D. Fudenberg, A. Sasaki, and M. A. Nowak. Evolutionary game dynamics in finite populations.
Bulletin of Mathematical Biology , 66(6):1621–1644, Nov 2004. doi: 10.1016/j.bulm.2004.03.004.E. Lieberman, C. Hauert, and M. A. Nowak. Evolutionary dynamics on graphs.
Nature , 433(7023):312–316,Jan 2005. doi: 10.1038/nature03204.H. Ohtsuki and M. A. Nowak. The replicator equation on graphs.
Journal of Theoretical Biology , 243(1):86–97, Nov 2006. doi: 10.1016/j.jtbi.2006.06.004.G. Szab´o and G. F´ath. Evolutionary games on graphs.
Physics Reports , 446(4-6):97–216, Jul 2007. doi:10.1016/j.physrep.2007.04.004.F. D´ebarre, C. Hauert, and M. Doebeli. Social evolution in structured populations.
Nature Communications ,5, Mar 2014. doi: 10.1038/ncomms4409.M. A. Nowak.
Evolutionary Dynamics: Exploring the Equations of Life . Belknap Press, 2006b.P. A. P. Moran. Random processes in genetics.
Mathematical Proceedings of the Cambridge PhilosophicalSociety , 54(01):60, Jan 1958. doi: 10.1017/s0305004100033193.L. A. Imhof and M. A. Nowak. Evolutionary game dynamics in a wright-fisher process.
Journal of Mathe-matical Biology , 52(5):667–681, Feb 2006. doi: 10.1007/s00285-005-0369-8.G. Szab´o and C. T˝oke. Evolutionary prisoner’s dilemma game on a square lattice.
Physical Review E , 58(1):69–73, Jul 1998. doi: 10.1103/physreve.58.69.A. Traulsen, J. M. Pacheco, and M. A. Nowak. Pairwise comparison and selection temperature in evolutionarygame dynamics.
Journal of Theoretical Biology , 246(3):522–529, Jun 2007. doi: 10.1016/j.jtbi.2007.01.002.G. Ellison. Learning, local interaction, and coordination.
Econometrica , 61(5):1047, Sep 1993. doi: 10.2307/2951493.M. Mahner and M. Kary. What exactly are genomes, genotypes and phenotypes? and what about phenomes?
Journal of Theoretical Biology , 186(1):55–63, May 1997. doi: 10.1006/jtbi.1996.0335.T. M. Baye, T. Abebe, and R. A. Wilke. Genotype–environment interactions and their translational impli-cations.
Personalized Medicine , 8(1):59–70, Jan 2011. doi: 10.2217/pme.10.75.H. Matsuda, N. Ogita, A. Sasaki, and K. Sato. Statistical mechanics of population: The lattice lotka-volterramodel.
Progress of Theoretical Physics , 88(6):1035–1049, Dec 1992. doi: 10.1143/ptp/88.6.1035.B. Bollob´as.
Random Graphs . Cambridge University Press, 2001. doi: 10.1017/cbo9780511814068.J. Vukov, G. Szab´o, and A. Szolnoki. Cooperation in the noisy case: Prisoner’s dilemma game on two typesof regular random graphs.
Physical Review E , 73(6), Jun 2006. doi: 10.1103/physreve.73.067103.B. Wu, P. M. Altrock, L. Wang, and A. Traulsen. Universality of weak selection.
Physical Review E , 82(4),Oct 2010. doi: 10.1103/physreve.82.046106.C. E. Tarnita, N. Wage, and M. A. Nowak. Multiple strategies in structured populations.
Proceedings of theNational Academy of Sciences , 108(6):2334–2337, Jan 2011. doi: 10.1073/pnas.1016008108. . Wu, J. Garc´ıa, C. Hauert, and A. Traulsen. Extrapolating weak selection in evolutionary games. PLoSComputational Biology , 9(12):e1003381, Dec 2013. doi: 10.1371/journal.pcbi.1003381.M. Nowak and K. Sigmund. The evolution of stochastic strategies in the prisoner’s dilemma.
Acta ApplicandaeMathematicae , 20(3):247–265, Sep 1990. doi: 10.1007/bf00049570.W.-B. Du, X.-B. Cao, M.-B. Hu, and W.-X. Wang. Asymmetric cost in snowdrift game on scale-free networks.
Europhysics Letters , 87(6):60004, Sep 2009. doi: 10.1209/0295-5075/87/60004.W. Maciejewski and G. J. Puleo. Environmental evolutionary graph theory.
Journal of Theoretical Biology ,360:117–128, Nov 2014. doi: 10.1016/j.jtbi.2014.06.040.´A. Kun and U. Dieckmann. Resource heterogeneity can facilitate cooperation.
Nature Communications , 4,Oct 2013. doi: 10.1038/ncomms3453.J. Maynard Smith and G. A. Parker. The logic of asymmetric contests.
Animal Behaviour , 24(1):159–175,Feb 1976. doi: 10.1016/s0003-3472(76)80110-8.O. P. Hauser, A. Traulsen, and M. A. Nowak. Heterogeneity in background fitness acts as a suppressor ofselection.
Journal of Theoretical Biology , 343:178–185, Feb 2014. doi: 10.1016/j.jtbi.2013.10.013.A. Szolnoki and G. Szab´o. Cooperation enhanced by inhomogeneous activity of teaching for evolutionaryprisoner’s dilemma games.
Europhysics Letters , 77(3), Jan 2007. doi: 10.1209/0295-5075/77/30004.H. Ohtsuki, M. A. Nowak, and J. M. Pacheco. Breaking the symmetry between interaction and replacementin evolutionary dynamics on graphs.
Physical Review Letters , 98(10), Mar 2007a. doi: 10.1103/physrevlett.98.108106.H. Ohtsuki, J. M. Pacheco, and M. A. Nowak. Evolutionary graph theory: Breaking the symmetry betweeninteraction and replacement.
Journal of Theoretical Biology , 246(4):681–694, Jun 2007b. doi: 10.1016/j.jtbi.2007.01.024.J. M. Pacheco, F. L. Pinheiro, and F. C. Santos. Population structure induces a symmetry breaking favoringthe emergence of cooperation.
PLoS Computational Biology , 5(12):e1000596, Dec 2009. doi: 10.1371/journal.pcbi.1000596.W. Maciejewski, F. Fu, and C. Hauert. Evolutionary game dynamics in populations with heterogenousstructures.
PLoS Computational Biology , 10(4):e1003567, Apr 2014. doi: 10.1371/journal.pcbi.1003567., 10(4):e1003567, Apr 2014. doi: 10.1371/journal.pcbi.1003567.