Evolution of populations with strategy-dependent time delays
Jacek Miękisz∗ and Marek Bodnar†
Institute of Applied Mathematics and Mechanics, University of Warsaw, Warsaw, Poland
(Dated: February 9, 2021)

We study effects of strategy-dependent time delays on equilibria of evolving populations. It is well known that time delays may cause oscillations in dynamical systems. Here we report a novel behavior. We show that microscopic models of evolutionary games with strategy-dependent time delays lead to a new type of replicator dynamics. It describes the time evolution of the fractions of the population playing given strategies and of the size of the population. Unlike in all previous models, stationary states of such dynamics depend continuously on time delays. We show that in games with an interior stationary state (a globally asymptotically stable equilibrium in the standard replicator dynamics), at certain time delays, it may disappear or another interior stationary state may appear. In the Prisoner's Dilemma game, for time delays of cooperation smaller than time delays of defection, there appears an unstable interior equilibrium, and therefore for some initial conditions the population converges to the homogeneous state with just cooperators.
I. INTRODUCTION
Many social and biological processes can be modeled as systems of interacting individuals within the framework of evolutionary game theory [1–9]. The evolution of very large (infinite) populations can then be given by differential replicator equations, which describe time changes of the fractions of the population playing different strategies [3, 5, 10, 11]. It is usually assumed (as in the replicator dynamics) that interactions between individuals take place instantaneously and their effects are immediate. In reality, all social and biological processes take a certain amount of time. Results of biological interactions between individuals may appear in the future, and in social models, individuals or players may act, that is, choose appropriate strategies, on the basis of information concerning events in the past. It is therefore natural to introduce time delays into evolutionary game models.

It is well known that time delays may cause oscillations in dynamical systems [12–15]. One usually expects that interior equilibria of evolving populations, describing coexisting strategies or behaviors, are asymptotically stable for small time delays, and that above a critical time delay, where a Hopf bifurcation appears, they become unstable and the evolutionary dynamics exhibits oscillations and cycles. Here we report a novel behavior: continuous dependence of equilibria on time delays.

Effects of time delays in replicator dynamics were discussed in [16–27] for games with an interior stable equilibrium (an evolutionarily stable strategy [1, 2]). In [16], the authors discussed a model where individuals at time t imitate a strategy with a higher average payoff at time t − τ, for some time delay τ. They showed that the interior stationary state of the resulting time-delayed differential equation is locally asymptotically stable for small time delays, while for big ones it becomes unstable and oscillations appear.
In [17], we constructed a different type of model, where individuals are born τ units of time after their parents played. Such a model leads to a system of equations for the frequency of the first strategy and the size of the population. We showed the absence of oscillations: the original stationary point is globally asymptotically stable for any time delay. In both models, the position of the equilibrium is not affected by time delays.

Here we modify the second of the above models by allowing time delays to depend on the strategies played by individuals, and we observe a new behavior: a dependence of equilibria on delays.

Recently, models with strategy-dependent time delays have been studied. In particular, Moreira et al. [21] discussed a multi-player Stag-hunt game, Ben Khalifa et al. [27] investigated asymmetric games in interacting communities, and Wesson and Rand [24] studied Hopf bifurcations in two-strategy delayed replicator dynamics. The latter authors generalized the model presented in [16] and studied asymptotic stability of equilibria and the presence of bifurcations. Some very specific examples of three-player games with strategy-dependent time delays were studied in [28], where a shift of interior equilibria was observed.

Here we present a systematic study of effects of strategy-dependent time delays on the long-run behavior of two-player games with two strategies, in models which are generalizations of the one in [17]. We consider Stag-hunt-type games with two basins of attraction of pure strategies, Snowdrift-type games with a stable interior equilibrium (a coexistence of two strategies), and the Prisoner's Dilemma game. We report a novel behavior. We show that stationary states depend continuously on time delays.

∗ Electronic address: [email protected]
† Electronic address: [email protected]
Moreover, at certain time delays, an interior stationary state may disappear or another interior stationary state may appear.

Below we present a general theory and particular examples. In the Appendix we provide proofs and additional theorems, in particular general conditions for the existence and uniqueness of interior states in Snowdrift-type games (Theorems A.3 and A.6).

II. METHODS

A. Replicator dynamics
We assume that our populations are haploid, that is, the offspring have phenotypic strategies identical to those of their parents. We consider symmetric two-player games with two strategies, C and D, given by the following payoff matrix:

        C   D
U =  C  a   b
     D  c   d

where the ij entry, i, j = C, D, is the payoff of the first (row) player when it plays strategy i and the second (column) player plays strategy j. We assume that both players are the same, and hence the payoffs of the column player are given by the transpose of U; such games are called symmetric.

Below we will consider all three main types of two-player games with two strategies: the Stag-hunt, Snowdrift, and Prisoner's Dilemma games. They serve as simple models of social dilemmas. The strategies C and D may be interpreted as cooperation and defection.

Let us assume that during a time interval of length ε, only an ε-fraction of the population takes part in pairwise competitions, that is, plays games. Let p_i(t), i = C, D, be the number of individuals playing at time t the strategy C or D respectively, p(t) = p_C(t) + p_D(t) the total number of players, and x(t) = p_C(t)/p(t) the fraction of the population playing C. Let

U_C(t) = a x(t) + b(1 − x(t))  and  U_D(t) = c x(t) + d(1 − x(t))   (1)

be the average payoffs of individuals playing C and D respectively.

Now we would like to take into account that individuals are born some units of time after their parents played. We assume that the time delays depend on strategies and are equal to τ_C and τ_D respectively. We propose the following equations:

p_i(t + ε) = (1 − ε) p_i(t) + ε p_i(t − τ_i) U_i(t − τ_i),  i = C, D,   (2)

p(t + ε) = (1 − ε) p(t) + ε ( p_C(t − τ_C) U_C(t − τ_C) + p_D(t − τ_D) U_D(t − τ_D) ).   (3)

In the above we assume that populations are large in order to justify a continuous description of population sizes, but they are not infinite as in classical replicator equations.
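As an illustration, the discrete updates (2)–(3) can be iterated directly with history buffers. The sketch below is ours, not the authors' code; the payoff entries, delays, step length ε and horizon used in the usage note are assumptions chosen for demonstration.

```python
# Sketch (our illustration, not the authors' code): iterate the discrete
# updates (2)-(3) with history buffers for the delayed population sizes.
def simulate(a, b, c, d, tau_C, tau_D, eps=0.01, T=40.0, x0=0.5, p0=1.0):
    lag = int(max(tau_C, tau_D) / eps)       # history length in steps
    kC, kD = int(tau_C / eps), int(tau_D / eps)
    pC = [x0 * p0] * (lag + 1)               # constant history on [-tau_M, 0]
    pD = [(1 - x0) * p0] * (lag + 1)
    for _ in range(int(T / eps)):
        xC = pC[-1 - kC] / (pC[-1 - kC] + pD[-1 - kC])   # x(t - tau_C)
        xD = pC[-1 - kD] / (pC[-1 - kD] + pD[-1 - kD])   # x(t - tau_D)
        UC = a * xC + b * (1 - xC)           # U_C(t - tau_C), eq. (1)
        UD = c * xD + d * (1 - xD)           # U_D(t - tau_D), eq. (1)
        pC.append((1 - eps) * pC[-1] + eps * pC[-1 - kC] * UC)   # eq. (2)
        pD.append((1 - eps) * pD[-1] + eps * pD[-1 - kD] * UD)   # eq. (2)
    return pC[-1] / (pC[-1] + pD[-1])        # final fraction of C-players
```

For example, with the Snowdrift-type payoffs (a, b, c, d) = (4, 3, 5, 0) and equal (or zero) delays, the fraction of C-players settles near the mixed equilibrium x* = 3/4; unequal delays shift it, which is the effect studied in this paper.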
The parameter ε represents the time length of game interactions and therefore it multiplies the second terms in the above equations. We assume the replacement of parents, or equivalently a death rate 1, and hence p_i(t) and p(t) are multiplied by 1 − ε.

We divide (2) by (3) for i = C, obtain the equation for x(t + ε) ≡ x_C(t + ε), subtract x(t), divide the difference by ε, take the limit ε → 0, and get an equation for the frequency of the first strategy,

dx/dt = [ p_C(t − τ_C) U_C(t − τ_C)(1 − x(t)) − p_D(t − τ_D) U_D(t − τ_D) x(t) ] / p(t),   (4)

which can also be written as

dx/dt = [ x(t − τ_C) p(t − τ_C) U_C(t − τ_C)(1 − x(t)) − (1 − x(t − τ_D)) p(t − τ_D) U_D(t − τ_D) x(t) ] / p(t).   (5)

Let us notice that, unlike in the standard replicator dynamics, the above equation for the frequency of the first strategy is not closed: it contains a variable describing the size of the population at various times. One needs equations for the population sizes. From (2) and (3) we get

dp_i(t)/dt = − p_i(t) + p_i(t − τ_i) U_i(t − τ_i),  i = C, D,   (6)

dp(t)/dt = − p(t) + p_C(t − τ_C) U_C(t − τ_C) + p_D(t − τ_D) U_D(t − τ_D).   (7)

To trace the evolution of the population, we have to solve the system of equations ((5), (7)) together with initial conditions on the interval [−τ_M, 0], τ_M = max{τ_C, τ_D}. We assume that

x(t) = φ_x(t),  p(t) = φ_p(t),  for t ∈ [−τ_M, 0].   (8)

We have the following proposition concerning the existence of non-negative solutions.

Proposition 1.
If the initial functions φ_x and φ_p are continuous and non-negative on [−τ_M, 0], then there exists a unique non-negative solution of the system ((5), (7)) with initial conditions (8), well defined on the interval [0, +∞).

Proof. The local existence of the solution follows immediately from the standard theory of delay differential equations [14]. The non-negativity follows from [29]. To prove the global existence, it is enough to use the step method and to observe that on the interval [0, τ_M] the system ((5), (7)) becomes a system of non-autonomous ordinary differential equations. The equation for p becomes linear, so it can be solved, and then the equation for x becomes linear with respect to x(t).

B. Stationary states
We derive here an equation for an interior stationary state (the stationary frequency of the first strategy). Let us assume that there exists a stationary frequency ¯x such that x(t) = ¯x for all t ≥ −τ_M. Then the average payoffs of both strategies are constant and equal to

¯U_C = a¯x + b(1 − ¯x)  and  ¯U_D = c¯x + d(1 − ¯x).   (9)

Thus, equation (7) becomes a linear delay differential equation,

dp/dt = ¯x (a¯x + b(1 − ¯x)) p(t − τ_C) + (1 − ¯x)(c¯x + d(1 − ¯x)) p(t − τ_D) − p(t).   (10)

Note that solutions of (10) with non-negative initial conditions are non-negative. This implies that the leading eigenvalue of this equation is real. The eigenvalues λ of (10) satisfy

λ + 1 = ¯x ¯U_C e^{−λτ_C} + (1 − ¯x) ¯U_D e^{−λτ_D}.   (11)

Assume now that λ is a solution of (11) (of course λ depends on ¯x) and p(t) = p_0 exp(λt) for some p_0. We plug such p into (4) and we get two stationary solutions of (4) (i.e. such that the right-hand side of (4) is equal to 0), ¯x = 0, 1, and possibly interior ones — solutions to

¯U_C e^{−λτ_C} = ¯U_D e^{−λτ_D}.   (12)

If τ_C = τ_D and d < b < a < c, then (12) gives us the mixed Nash equilibrium (an evolutionarily stable strategy [1, 2]) of the game,

x* = (b − d)/(b − d + c − a),

which represents the equilibrium fraction of an infinite population playing C [3, 5]. In the non-delayed replicator dynamics, x* is globally asymptotically stable and there are two unstable stationary states: x = 0 and x = 1.

If τ_C ≠ τ_D, then λ = ln(¯U_C/¯U_D)/(τ_C − τ_D). We plug it into (11) and we conclude that ¯x satisfies the equation F(¯x) = 0, where

F(x) = [1/(τ_C − τ_D)] ln[ (ax + b(1 − x)) / (cx + d(1 − x)) ] + 1 − (cx + d(1 − x))^{τ_C/(τ_C−τ_D)} / (ax + b(1 − x))^{τ_D/(τ_C−τ_D)}.   (13)

From the above follows the proposition below, which explains in what sense ¯x is a stationary state of the replicator dynamics ((5), (7)).

Proposition 2. Assume that τ_C ≠ τ_D. Let ¯x be a solution to F(x) = 0, where F is defined by (13), and let

¯λ = [1/(τ_C − τ_D)] ln[ (a¯x + b(1 − ¯x)) / (c¯x + d(1 − ¯x)) ].

Then the functions x(t) = ¯x, p(t) = p_0 e^{¯λt}, t ≥ 0, are solutions of the system ((5), (7)) with the initial conditions φ_x(t) = ¯x, φ_p(t) = p_0 e^{¯λt} for t ∈ [−τ_M, 0].

In this paper we consider only examples of games with a positive λ, that is, our populations grow exponentially with time; hence the use of differential equations to describe the time evolution is appropriate. For a negative λ, the population gets extinct.

Further properties of the function F related to the existence of its zeros in the interval (0, 1) are analyzed in Subsection 1 of the Appendix. In particular, we provide in Theorems A.3 and A.6 general conditions for the existence and uniqueness of the interior stationary state in Snowdrift-type games.
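In practice, interior stationary states can be located numerically as zeros of F on (0, 1). The sketch below is our illustration, not code from the paper; it uses the equivalent form F(x) = λ + 1 − ¯U_C e^{−λτ_C} with λ = ln(¯U_C/¯U_D)/(τ_C − τ_D), and the payoff values in the usage note are assumptions.

```python
import math

# Sketch (our illustration): find interior stationary states as zeros of F
# on (0, 1), written as F(x) = lam + 1 - U_C * exp(-lam * tau_C) with
# lam = ln(U_C / U_D) / (tau_C - tau_D), cf. eqs. (11)-(13).
def make_F(a, b, c, d, tau_C, tau_D):
    assert tau_C != tau_D
    def F(x):
        UC = a * x + b * (1 - x)        # average payoff of C, eq. (1)
        UD = c * x + d * (1 - x)        # average payoff of D, eq. (1)
        lam = math.log(UC / UD) / (tau_C - tau_D)   # growth rate, cf. (12)
        return lam + 1 - UC * math.exp(-lam * tau_C)
    return F

def bisect(F, lo=1e-9, hi=1 - 1e-9, n=200):
    """Plain bisection; returns None if no sign change is bracketed."""
    if F(lo) * F(hi) > 0:
        return None
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For instance, for the Snowdrift-type matrix with entries (4, 3, 5, 0) and τ_C = 0.2 > τ_D = 0.1, the bracketed zero lies below the non-delayed equilibrium x* = 0.75, consistent with Proposition A.2 in the Appendix.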
III. RESULTS

A. Stag-hunt games
Here we consider games with a unique unstable interior equilibrium. We begin with a general Stag-hunt game given by the following payoff matrix:

U = [ a   0
      βa  βa ],

where a > 0 and β ∈ (0, 1). The function F takes the form

F(x) = 1 + (α/τ_C) ln(x/β) − a β^α x^{1−α},  α = τ_C/(τ_C − τ_D).   (14)

Note that if ¯x ∈ (0, 1) is an interior stationary frequency, then the leading eigenvalue is given by the following formula (see Proposition 2),

¯λ = [1/(τ_C − τ_D)] ln(¯x/β).   (15)

Thus ¯λ > 0 if and only if either
• τ_C > τ_D and ¯x ∈ (β, 1), or
• τ_C < τ_D and ¯x ∈ (0, β).

For τ_C > τ_D it is easy to see that F is increasing from −∞ to F(1). Thus the interior stationary frequency ¯x ∈ (0, 1) exists if and only if F(1) > 0, which holds in particular when β < a^{−1/α}. The leading eigenvalue is positive if ¯x > β, which is equivalent to the inequality F(β) < 0. Since F(β) = 1 − aβ, this gives the condition aβ > 1. Finally, we get here a condition for the existence of an interior stationary frequency with a positive leading eigenvalue:

1/a < β < a^{−1/α},  which can hold only if a > 1.   (16)

For τ_C < τ_D it is easy to see that F is decreasing from +∞ to F(1). Thus the interior stationary frequency ¯x ∈ (0, 1) exists if and only if F(1) < 0, which holds in particular when β < a^{1/|α|}. Similarly, the leading eigenvalue is positive if ¯x < β, which is equivalent to the inequality F(β) < 0, and this again gives the condition aβ > 1.

One can express ¯x using the Lambert W function W_p. Namely, we solve F(x) = 0, where F is given by (14), and after some algebraic calculations we get

¯x = ( τ_D a β^{τ_C/(τ_C−τ_D)} / W_p(τ_D aβ e^{τ_D}) )^{(τ_C−τ_D)/τ_D}.   (17)

For a = 5 and β = 3/5, numerical results are presented in Fig. 1. When τ_C increases, at a certain point the interior stationary state ceases to exist and therefore the strategy D becomes globally asymptotically stable (the internal equilibrium hits x = 1, destabilizing this solution). For τ_C = τ, τ_D = 2τ we get α = −1, and when τ → ∞ one obtains ¯x = √(β/a). For the classical Stag-hunt game (a = 5, β = 3/5), ¯x = √(3/25) ≈ 0.346, as is seen in Panel A.

Panel D of Fig. 1 indicates the existence of a curve τ*_D(τ*_C) such that for τ_C > τ*_C and fixed τ_D = τ*_D there is no interior stationary frequency, while for τ_C < τ*_C there is a unique one. The curve τ*_D(τ*_C), which splits the plane (τ_C, τ_D) into two regions (below this curve there is no interior stationary frequency, above it there is a unique one), is given by the following formula,

τ*_D = τ*_C ( 1 + ln β / (W_p(a τ*_C e^{τ*_C}) − τ*_C) ),  τ*_C ≥ − ln β/(aβ − 1).   (18)

The formula can be easily obtained from F(1) = 0 (detailed calculations can be found in Subsection 2 of the Appendix). For the classical Stag-hunt game, the curve given by (18) is almost a straight line. Its derivative with respect to τ*_C changes from (aβ − 1)/(aβ − 1 − aβ ln β) ≈ 0.57 at τ*_C = − ln β/(aβ − 1) ≈ 0.255 to 1 + ln β/ln a ≈ 0.68 as τ*_C → ∞.

Note also that if τ_C is fixed and we take τ_D → +∞, then α → 0 and F(x) → 1 − ax, which implies that the interior stationary frequency for sufficiently large τ_D is close to 1/a.

Let us summarize the results. When we parametrize both delays by τ: if the delay of defection is bigger than that of cooperation, then in the limit of infinite τ the interior unstable equilibrium tends to some value, and both strategies have some basin of attraction, see Panel A. However, if the delay of cooperation is bigger than that of defection, then above some critical τ the unstable interior equilibrium ceases to exist and defection becomes globally asymptotically stable, see Panel C. Under no circumstances does cooperation become globally asymptotically stable.

FIG. 1: Numerical solutions of F(x) = 0 for the matrix U of the classical Stag-hunt game, that is, for a = 5 and β = 3/5. Panel A: the interior stationary state as a function of τ, when τ_C = τ, τ_D = 2τ. Panel B: the interior stationary state as a function of τ_C, while τ_D = 4 is fixed. Panel C: the interior stationary state as a function of τ, when τ_C = 2τ, τ_D = τ. Panel D: the interior stationary state as a function of τ_C and τ_D. The dash-dotted line indicates the cross-section presented in Panel A, the black dotted line indicates the cross-section presented in Panel B, while the dotted line indicates the cross-section presented in Panel C.

B. Snowdrift-type games
We consider here two examples of games with a unique stable interior equilibrium. Because of the nonlinearity of our equations, we could not treat general cases analytically.
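The existence of an interior state can be checked numerically from the sign of F of eq. (13) near the ends of (0, 1). The sketch below is our illustration; the Snowdrift-type payoff values (4, 3, 5, 0) and the parametrization τ_C = τ, τ_D = 2τ are assumptions.

```python
import math

# Sketch (our illustration): detect an interior zero of F, eq. (13), from a
# sign change on (0, 1), for assumed Snowdrift-type payoffs.
A, B, C, D = 4.0, 3.0, 5.0, 0.0

def F(x, tau_C, tau_D):
    UC = A * x + B * (1 - x)
    UD = C * x + D * (1 - x)
    lam = math.log(UC / UD) / (tau_C - tau_D)
    return lam + 1 - UC * math.exp(-lam * tau_C)

def has_interior(tau, eps=1e-9):
    # tau_C = tau, tau_D = 2*tau; an interior state exists iff F changes sign
    return F(eps, tau, 2 * tau) * F(1 - eps, tau, 2 * tau) < 0
```

For these assumed payoffs, solving F(1) = 0 gives the threshold ln(5/4)/2.2 ≈ 0.10: the sign change (and hence the interior state) is present for smaller τ and lost above it, matching the behavior described for Example 1 below.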
Example 1
Here we discuss the classical Snowdrift game (with the reward equal to 5 and the cost equal to 2) with the following payoff matrix:

U = [ 4  3
      5  0 ].

We have that x* = 0.75 is the stable stationary state of the non-delayed replicator dynamics.

When the delays are parametrized as τ_C = τ, τ_D = 2τ and τ increases, the interior stationary state increases until it disappears at some value of τ. Above that point, x = 1, the equilibrium where all players cooperate, becomes globally asymptotically stable (see Fig. 2A). In the case of τ_C = 2τ, τ_D = τ, the interior stationary state decreases and approaches the value (1 + √301)/50 ≈ 0.37 (see Fig. 2C).

We see in Fig. 2B that for fixed τ_D = 2 there is a value of τ_C below which there is no interior stationary point; x = 1 is then globally asymptotically stable. Above this point, the stable interior stationary state decreases.

In Fig. 2D we present the interior stationary state as a function of τ_C and τ_D. If τ_C = τ_D, then the stable stationary frequency does not depend on the time delay and is equal to x* = 0.75. For fixed τ_D, if τ_C ≫ τ_D, the stationary state decreases, approaching the value 0.2 (obtained by letting τ_C → +∞ in (13)). Similarly, analyzing the graph of the function F for τ_D = 0, one can easily find that the stationary state decreases in this case from 0.75 for τ_C = 0 to 0.2 as τ_C → +∞. On the other hand, if we fix τ_D, then the stationary state is a decreasing function of τ_C (see Fig. 2B). Additionally, if τ_D is large enough (greater than ln(5/4)/3, see Remark A.10 in the Appendix), then there exists a value τ*_C of τ_C such that for τ_C < τ*_C there exists no interior stable frequency and all individuals play C. The critical value τ*_D equals ln(5/4)/3 for τ_C = 0 and depends on τ_C almost linearly (see the implicit formula (A.5) in the Appendix for the relation between τ*_C and τ*_D).

FIG. 2: Numerical solutions of F(x) = 0 for the matrix U of the classical Snowdrift game. Panel A: the interior stationary state as a function of τ, when τ_C = τ, τ_D = 2τ. Panel B: the interior stationary state as a function of τ_C, while τ_D = 2 is fixed. Panel C: the interior stationary state as a function of τ, when τ_C = 2τ, τ_D = τ. Panel D: the interior stationary state as a function of τ_C and τ_D. The yellow dash-dotted line indicates the cross-section presented in Panel A, the black dotted line indicates the cross-section presented in Panel B, while the dotted line indicates the cross-section presented in Panel C.

We see that the situation here is in a sense the reverse of that in the Stag-hunt game. When we parametrize both delays by τ: if the delay of defection is bigger than that of cooperation, then above some critical τ the stable interior equilibrium ceases to exist and cooperation becomes a globally asymptotically stable equilibrium, see Panel A. However, if the delay of cooperation is bigger than that of defection, then in the limit of infinite τ the stable interior equilibrium tends to some value and both strategies coexist, see Panel C. Under no circumstances does defection become globally asymptotically stable.

Example 2
Here we study another particular example, which illustrates the complex behaviour of games with asymmetric time delays. The game has the following payoff matrix:

U = [ 2     0.5
      2.95  0 ].

Now x* ≈ 0.345 is the stable stationary state of the non-delayed replicator dynamics.

In Fig. 3A we set τ_C = τ, τ_D = 2τ. We see that there exists a threshold value τ* such that for τ < τ* there exists a unique interior stationary state. Numerical simulations suggest that this state is stable. For τ > τ* and τ < τ** there are two interior stationary states, one of them stable, the other one unstable. Notice that x = 1 is another stable equilibrium. Numerical simulations suggest that if the initial frequency of the strategy C is large enough, then the D strategy is eliminated. Let us also observe that there are two asymptotically stable equilibria in the system. Cooperation becomes an alternative equilibrium, and then the only equilibrium when both delays increase.

In Fig. 3B we present the stable interior stationary state as a function of τ, with τ_C = 2τ, τ_D = τ. In this case the stationary state is almost constant. In fact, for this set of parameters one can easily see that the function F is decreasing, as both terms of F decrease. Because only the first term depends on τ (and it decreases with increasing τ), one can deduce that the stationary state is a decreasing function of τ. For τ → +∞ it converges to the positive solution of the quadratic equation (2.95x)² − 1.5x − 0.5 = 0, and it stays in a narrow interval around this value.

FIG. 3: Numerical solutions of F(x) = 0 for the matrix U: the stable interior stationary state as a function of τ, when τ_C = τ, τ_D = 2τ (Panel A), and when τ_C = 2τ, τ_D = τ (Panel B).

C. Prisoner's Dilemma game
Here we consider the Prisoner's Dilemma game with the following payoff matrix:

U = [ 3  0
      5  1 ].

The function F takes the following form,

F(x) = [1/(τ_C − τ_D)] ln( 3x/(4x + 1) ) + 1 − 3x ( (4x + 1)/(3x) )^{τ_C/(τ_C−τ_D)}.   (19)

It is easy to see that lim_{x→0+} F(x) < 0 for τ_C > τ_D and lim_{x→0+} F(x) > 0 for τ_C < τ_D. However, the sign of

F(1) = 1 − [1/(τ_C − τ_D)] ln(5/3) − 3 (5/3)^{τ_C/(τ_C−τ_D)}   (20)

cannot be easily determined. Plots of the function F suggest that it has no zeros inside the interval [0, 1] for τ_C > τ_D, and that it can have one for some sets of delays with τ_C < τ_D. However, we can deduce that F(1) < 0 if τ_D is sufficiently large, and then there exists an interior stationary frequency. In Panel A of Fig. 4 we plot the value of this interior stationary frequency for τ_D = 1. The limiting values of τ*_C and τ*_D are given by the implicit formula F(1) = 0, which can be solved:

τ*_D = τ*_C + τ*_C ln(5/3) / ln( 3τ*_C / W_p(3τ*_C e^{τ*_C}) ).   (21)

The curve is plotted in Panel D of Fig. 4 (the solid line). It is almost a straight line, with the slope given asymptotically by

1 + ln(5/3) / ln( 3τ*_C / W_p(3τ*_C e^{τ*_C}) ) → ln 5 / ln 3 ≈ 1.465  as τ*_C → ∞.   (22)

On the other hand, τ*_D → ln(5/3)/2 ≈ 0.255 as τ*_C → 0. If a point (τ_C, τ_D) lies above the curve (τ*_C, τ*_D), then there exists an interior stationary state (which is unstable, as suggested by numerical simulations). This means that the system is bistable: it goes to all cooperation or all defection depending on the initial condition.
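The border of the bistable region can also be found numerically from F(1) = 0. The sketch below is our illustration (not the authors' code) for the Prisoner's Dilemma payoffs used here; it bisects in τ_D for a fixed τ_C.

```python
import math

# Sketch (our illustration): the border tau_D*(tau_C*) where F(1) = 0 for the
# Prisoner's Dilemma payoffs R = 3, S = 0, T = 5, P = 1, cf. eq. (20).
def F1(tau_C, tau_D):
    dt = tau_C - tau_D
    return -math.log(5 / 3) / dt + 1 - 3 * (5 / 3) ** (tau_C / dt)

def critical_tau_D(tau_C, hi=50.0, n=200):
    lo = tau_C + 1e-9     # F(1) -> +inf as tau_D -> tau_C from above
    for _ in range(n):    # bisection: F(1) < 0 for large tau_D
        mid = 0.5 * (lo + hi)
        if F1(tau_C, lo) * F1(tau_C, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For τ_C = 0 this reproduces the analytic value ln(5/3)/2 ≈ 0.255 quoted above, and the computed curve increases almost linearly in τ_C.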
FIG. 4: Numerical solutions of F(x) = 0 for the matrix U of the classical Prisoner's Dilemma game. Panel A: the interior stationary state as a function of τ_C, when τ_D = 1 is fixed. The interior stationary state exists for τ_C < 0.459. Panel B: the interior stationary state as a function of τ_D, while τ_C = 0 is fixed. An interior stationary state exists for τ_D > 0.255. Panel C: the interior stationary state as a function of τ_C and τ_D. The dotted line indicates the cross-section presented in Panel A. Panel D: the region of delays for which there exists an interior stationary state and the region where there is none. The solid curve denotes the border between these two regions. It is almost a straight line; the dashed line, plotted for comparison, has the same slope as the solid curve for τ*_C → +∞.

In Panel C of Fig. 4 we present the interior stationary frequency for various values of τ_C and τ_D. If τ_C = 0, then we have

F(x) = (1/τ_D) ln( (4x + 1)/(3x) ) + 1 − 3x,   (23)

which is a decreasing function that has a zero inside the interval [0, 1] for τ_D > ln(5/3)/2. A numerical solution of the equation

(1/τ_D) ln( (4x + 1)/(3x) ) + 1 − 3x = 0   (24)

is plotted in Panel B of Fig. 4.

We see that when one introduces asymmetric delays with the delay of cooperation smaller than the delay of defection, there appears an interior unstable equilibrium. Hence the state with just cooperators becomes locally asymptotically stable.

IV. DISCUSSION
We studied effects of strategy-dependent time delays on stationary states of evolutionary games.

Recently, effects of the duration of interactions between two players on their payoffs, and therefore on evolutionary outcomes, were discussed by Křivan and Cressman [30]. In their models, the duration of an interaction depends on the strategies involved. This can be interpreted as strategy-dependent time delays. They showed that interaction times change stationary states of the system.

Another approach is to consider ordinary differential equations with time delays. It was pointed out in [17] that in the so-called biological model, where it is assumed that the number of players born at a given time is proportional to payoffs received by their parents at a certain moment in the past, the interior state is asymptotically stable for any time delay. In such models, microscopic interactions lead to a new type of replicator dynamics, which describes the time evolution of the fractions of the population playing given strategies and of the size of the population.

Here we studied biological-type models with strategy-dependent time delays. We observed a novel behavior: interior stationary states depend continuously on time delays.

In particular, in games with two pure Nash equilibria (Stag-hunt-type games), the bigger the time delay of a given strategy is, the smaller its basin of attraction. When we parametrize both delays by τ: if the delay of defection is bigger than that of cooperation, then in the limit of infinite τ the interior unstable equilibrium tends to some value, and both strategies have some basin of attraction. However, if the delay of cooperation is bigger than that of defection, then above some critical τ the unstable interior equilibrium ceases to exist and defection becomes globally asymptotically stable. Under no circumstances does cooperation become globally asymptotically stable.

In games with a stable interior equilibrium, the bigger the time delay of a strategy is, the less frequent it is in the population. Moreover, at certain time delays, the interior stationary state ceases to exist or another interior stationary state appears. The situation here is in a sense the reverse of that in the Stag-hunt game. When we parametrize both delays by τ: if the delay of defection is bigger than that of cooperation, then above some critical τ the stable interior equilibrium ceases to exist and cooperation becomes a globally asymptotically stable equilibrium. However, if the delay of cooperation is bigger than that of defection, then in the limit of infinite τ the stable interior equilibrium tends to some value and both strategies coexist. Under no circumstances does defection become globally asymptotically stable.

In the Prisoner's Dilemma game, for time delays of cooperation smaller than the time delay of defection, there appears an unstable interior equilibrium, and therefore for some initial conditions the population converges to the homogeneous state with just cooperators. This shows an asymptotic stability of cooperation in a simple model of a social dilemma.

It would be interesting to analyze strategy-dependent time delays in stochastic dynamics of finite populations. Work in this direction is in progress.

Appendix
We provide here propositions, theorems, and their proofs which support the results presented in the paper. The main results for Snowdrift-type games are included in Theorem A.3 and the following Corollary A.4. We present there a set of conditions that guarantee the existence of at least one interior stationary state. In Theorem A.6 we give a condition for the monotonicity of a function whose zeros give us stationary states. Finally, in Proposition A.9 we state conditions for the absence of stationary states for small time delays.

In Remark A.1 we give an explicit formula for the leading eigenvalue λ for the pure stationary states ¯x = 0 and ¯x = 1. This eigenvalue determines the growth rate of the whole population. In Remark A.10 we state some results about the existence of stationary states for d = 0.

We also provide additional calculations concerning Stag-hunt-type games.
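As a numerical aside (our sketch, not from the paper), the Lambert-W expression for the leading eigenvalue can be checked directly against the characteristic equation λ + 1 = a e^{−λτ_C}; the payoff value a = 3 and the delay below are assumptions, and rewriting the characteristic equation yields λ = W_p(aτ_C e^{τ_C})/τ_C − 1.

```python
import math

# Sketch (our illustration): verify the Lambert-W form of the leading
# eigenvalue of lambda + 1 = a * exp(-lambda * tau_C) at xbar = 1.
def lambert_w(z, tol=1e-12):
    """Principal branch of w * exp(w) = z for z > 0, by Newton's method."""
    w = math.log(1.0 + z)            # reasonable starting guess for z > 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def leading_eigenvalue(a, tau):
    # lambda = W_p(a * tau * e^tau) / tau - 1 (rewriting of lambda+1 = a e^{-lambda tau})
    return lambert_w(a * tau * math.exp(tau)) / tau - 1.0

a, tau = 3.0, 0.5                    # illustrative values (assumptions)
lam = leading_eigenvalue(a, tau)
resid = lam + 1 - a * math.exp(-lam * tau)   # residual of the characteristic equation
```

The residual vanishes up to the Newton tolerance, and for a > 1 the eigenvalue is positive, i.e. the population grows exponentially.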
1. General results for Snowdrift-type games
Interior stationary states are given by the zeros of the function F(x) defined by (13). Let ¯U_C = a¯x + b(1 − ¯x) and ¯U_D = c¯x + d(1 − ¯x).

Remark A.1. We see that for ¯x = 0 we have

λ + 1 = d e^{−λτ_D}  ⟹  λ = W_p(dτ_D e^{τ_D})/τ_D − 1,   (A.1)

while for ¯x = 1 we have

λ + 1 = a e^{−λτ_C}  ⟹  λ = W_p(aτ_C e^{τ_C})/τ_C − 1,   (A.2)

where W_p is the Lambert W function, that is, the principal branch of the relation W_p(x) exp(W_p(x)) = x.

We show that if τ_C ≠ τ_D, then the value of an interior stationary state depends on time delays. Moreover, for some payoff matrices and some values of delays, multiple interior stationary states exist. We would like to point out that these relations are not linear. Thus, contrary to the case without time delays or with equal delays (i.e. τ_C = τ_D), adding a constant to a column of the payoff matrix, or multiplying the matrix by a constant, changes interior stationary states or even may change their number. Thus, the intuitions from non-delayed games about the asymptotics of the replicator dynamics are not necessarily valid in the case of non-equal delays.

It turns out that if the eigenvalue λ(¯x) corresponding to the stationary state ¯x is positive, then the frequency of a given strategy is smaller than its frequency in the non-delayed case whenever the delay of this strategy is larger than the delay of the other strategy. We have the following proposition.

Proposition A.2.
Let a < c and d < b. Assume that ¯x ∈ (0, 1) and let λ be the leading eigenvalue that corresponds to ¯x. Then, if λ > 0, we have:
• if τ_C > τ_D, then ¯x < x*;
• if τ_C < τ_D, then ¯x > x*.

Proof. Assume that τ_C > τ_D. Then the sign of λ is the same as the sign of

φ(x) = ln[ (ax + b(1 − x)) / (cx + d(1 − x)) ].

Note that φ(x*) = 0 and

φ′(x) = (ad − bc) / [ (ax + b(1 − x))(cx + d(1 − x)) ] < 0,

since ad < bc. Thus φ(x) > 0 for x < x*, so ¯x < x*. Similarly, if τ_C < τ_D, then the sign of λ is opposite to the sign of φ(x), and therefore ¯x > x*.

We will study general properties of F; they will help us determine the number of stationary states of our replicator dynamics. First, we determine conditions that imply a given sign of F at x = 0 and x = 1. Let us define

c*(a) = a^{τ_D/τ_C} ( W_p(aτ_C e^{τ_C}) / τ_C )^{1 − τ_D/τ_C},  d*(b) = b^{τ_D/τ_C} ( W_p(bτ_C e^{τ_C}) / τ_C )^{1 − τ_D/τ_C},   (A.3)

where W_p is the Lambert W function, that is, the principal branch of the relation W_p(x) exp(W_p(x)) = x.

Theorem A.3. (A) If a < c, then F(1) > 0 if and only if one of the following conditions holds:
(i) τ_C > τ_D, a < 1, and c < c*(a);
(ii) τ_C < τ_D, and a < 1 or c > c*(a).
(B) If b > d, then F(0) > 0 if and only if one of the following conditions holds:
(i) τ_C > τ_D, and b < 1 or d < d*(b);
(ii) τ_C < τ_D, b < 1, and d > d*(b).

Proof.
First we study the sign of $F(1)$. It is easy to see that
\[ F(1) = \frac{1}{\tau_C - \tau_D} \ln\left(\frac{a}{c}\right) + 1 - a \left(\frac{c}{a}\right)^{\tau_C/(\tau_C - \tau_D)}. \]
Assume that $\tau_C > \tau_D$. Due to the assumption $a < c$, there exists $z \in (0,1)$ such that $a = zc$. We plug this into the expression for $F(1)$ and obtain
\[ \frac{1}{\tau_C - \tau_D} \ln z + 1 - a \left(\frac{1}{z}\right)^{\tau_C/(\tau_C - \tau_D)}. \tag{A.4} \]
Let us introduce $u = z^{\tau_C/(\tau_C - \tau_D)}$. Then the above expression simplifies to
\[ f_a(u) = \frac{1}{\tau_C} \ln u + 1 - \frac{a}{u}. \]
It is easy to see that $f_a$ is a continuous, strictly increasing function of $u$ and $\lim_{u \to 0^+} f_a(u) = -\infty$. Thus we have two possibilities. Either $f_a(1) < 0$ and then $f_a(u) < 0$ for all $u \in (0,1)$ (thus $F(1) < 0$ for all $a < c$), or there exists a unique $u^* \in (0,1)$ such that $f_a(u^*) = 0$, with $f_a(u) < 0$ for $0 < u < u^*$ and $f_a(u) > 0$ for $u^* < u < 1$, due to monotonicity (and this implies the appropriate sign of $F(1)$ depending on the value of $a$ with respect to $c$). Note that $f_a(1) \leq 0$ if and only if $a \geq 1$. Assume now that $a < 1$ and let $u^*$ be as above. We have
\[ \frac{1}{\tau_C} \ln u^* + 1 = \frac{a}{u^*} \iff \tau_C - \frac{a \tau_C}{u^*} = \ln \frac{1}{u^*} \iff \frac{a \tau_C}{u^*}\, e^{a \tau_C / u^*} = a \tau_C e^{\tau_C}. \]
Thus
\[ u^* = \frac{a \tau_C}{W_p\big(a \tau_C e^{\tau_C}\big)}, \]
where $W_p$ is the Lambert function. Because $\frac{a}{c} = u^{(\tau_C - \tau_D)/\tau_C}$, we get $F(1) \leq 0$ if and only if $a \geq 1$ or $c \geq c^*(a)$, where $c^*(a)$ is given by (A.3). Thus, if $a < 1$ and $c < c^*(a)$, then $F(1) > 0$, which proves point (i). If $\tau_C < \tau_D$, then the variable $u$ changes from $1$ to $+\infty$ instead of from $0$ to $1$ and very similar arguments lead to the assertion of point (ii).

Let us calculate the value
\[ F(0) = \frac{1}{\tau_C - \tau_D} \ln\left(\frac{b}{d}\right) + 1 - b \left(\frac{d}{b}\right)^{\tau_C/(\tau_C - \tau_D)}. \]
This situation is analogous to the previous one and analogous arguments prove this part of the theorem.

From Theorem A.3 and its proof the following corollary can be easily deduced.
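As a quick numerical sanity check of (A.2), (A.3) and Theorem A.3, the sketch below (plain Python; `lambert_w` is a small Newton-iteration helper standing in for a library implementation of $W_p$, and the payoff and delay values are purely illustrative) verifies that the eigenvalue formula solves the characteristic equation and that $F(1)$ changes sign exactly at $c = c^*(a)$:

```python
import math

def lambert_w(x, tol=1e-12):
    # Principal branch W_p for x > 0: Newton iteration on w * e^w = x.
    w = math.log(1.0 + x)
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol * max(1.0, abs(w)):
            break
    return w

def F1(a, c, tau_C, tau_D):
    # F(1) as in the proof of Theorem A.3.
    k = tau_C / (tau_C - tau_D)
    return math.log(a / c) / (tau_C - tau_D) + 1.0 - a * (c / a) ** k

def c_star(a, tau_C, tau_D):
    # c*(a) from (A.3).
    w = lambert_w(a * tau_C * math.exp(tau_C))
    return a ** (tau_D / tau_C) * (w / tau_C) ** (1.0 - tau_D / tau_C)

a, tau_C, tau_D = 0.5, 2.0, 1.0  # illustrative values: tau_C > tau_D, a < 1

# (A.2): lambda = W_p(a tau_C e^{tau_C})/tau_C - 1 solves lambda + 1 = a e^{-lambda tau_C}
lam = lambert_w(a * tau_C * math.exp(tau_C)) / tau_C - 1.0
resid_eig = lam + 1.0 - a * math.exp(-lam * tau_C)

# F(1) vanishes exactly at the threshold c = c*(a)
resid_F1 = F1(a, c_star(a, tau_C, tau_D), tau_C, tau_D)
```

For these parameter values one can also check the sign pattern of Theorem A.3 (A)(i): $F(1) > 0$ for $c$ slightly below $c^*(a)$ and $F(1) < 0$ slightly above.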
Corollary A.4. (A) If $a < c$, then $F(1) < 0$ if and only if one of the following conditions holds:
(i) $\tau_C > \tau_D$, and $a \geq 1$ or $c > c^*(a)$;
(ii) $\tau_C < \tau_D$, $a \geq 1$, and $c < c^*(a)$.
(B) If $b > d$, then $F(0) < 0$ if and only if one of the following conditions holds:
(i) $\tau_C > \tau_D$, $b \geq 1$, and $d > d^*(b)$;
(ii) $\tau_C < \tau_D$, and $b \geq 1$ or $d < d^*(b)$.

Corollary A.5.
Theorem A.3 and Corollary A.4 give conditions that guarantee the existence of at least one interior stationary state. Namely, if $a < c$ and $b > d$, then we need $F(1) > 0$ (Theorem A.3 (A)) and $F(0) < 0$ (Corollary A.4 (B)) or, reversely, $F(1) < 0$ (Corollary A.4 (A)) and $F(0) > 0$ (Theorem A.3 (B)).

If the function $F$ is monotonic, then the previous theorem gives us a condition guaranteeing the existence of a solution of $F(x) = 0$ on $(0,1)$.

Theorem A.6. If $a < c$, $d < b$, and additionally $a < d < c$ or $d < a < b$, then the function $F$ is monotonic. Moreover, if additionally $\tau_C > \tau_D$, then it is decreasing, and if $\tau_C < \tau_D$, then it is increasing.

Proof. It is enough to calculate the first derivative of $F$. It reads
\[ F'(x) = \frac{1}{\tau_C - \tau_D} \left( \frac{ad - bc}{\big(ax + b(1-x)\big)\big(cx + d(1-x)\big)} + \left( \tau_C (d - c) + \tau_D (a - b)\, \frac{cx + d(1-x)}{ax + b(1-x)} \right) \left( \frac{cx + d(1-x)}{ax + b(1-x)} \right)^{\tau_D/(\tau_C - \tau_D)} \right). \]
The assumptions guarantee that $ad - bc < 0$, $d - c < 0$, and $a - b < 0$, and the thesis follows.
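The derivative above can be checked numerically. In the sketch below (plain Python, illustrative payoff values satisfying $a < d < c$; the closed form of $F$ is reconstructed here from the expressions for $F(1)$ and $F(0)$ given earlier in this appendix, since equation (13) of the paper is not reproduced in this section), the closed-form $F'$ is compared against a central finite difference of $F$, and its sign confirms the monotonicity claim for $\tau_C > \tau_D$:

```python
import math

def F(x, a, b, c, d, tau_C, tau_D):
    # F reconstructed from the expressions for F(1) and F(0) above.
    g = a * x + b * (1 - x)          # payoff term of strategy C
    h = c * x + d * (1 - x)          # payoff term of strategy D
    k = tau_C / (tau_C - tau_D)
    return math.log(g / h) / (tau_C - tau_D) + 1.0 - g * (h / g) ** k

def dF(x, a, b, c, d, tau_C, tau_D):
    # Closed-form derivative from the proof of Theorem A.6.
    g = a * x + b * (1 - x)
    h = c * x + d * (1 - x)
    bracket = (a * d - b * c) / (g * h) + (
        tau_C * (d - c) + tau_D * (a - b) * h / g
    ) * (h / g) ** (tau_D / (tau_C - tau_D))
    return bracket / (tau_C - tau_D)

a, b, c, d = 0.3, 0.7, 0.9, 0.5      # a < c, d < b and a < d < c (illustrative)
tau_C, tau_D = 2.0, 1.0
x, eps = 0.4, 1e-6
numeric = (F(x + eps, a, b, c, d, tau_C, tau_D)
           - F(x - eps, a, b, c, d, tau_C, tau_D)) / (2 * eps)
analytic = dF(x, a, b, c, d, tau_C, tau_D)
```

With $\tau_C > \tau_D$ the derivative is negative on the whole interval, i.e., $F$ is decreasing, as Theorem A.6 asserts.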
Corollary A.7. If $a = b$ or $c = d$, then the function $F$ is monotonic and there is at most one solution of $F(x) = 0$ in the interval $[0,1]$.

Remark A.8. Theorems A.3 and A.6 give a complete description of the existence of the unique stationary state $\bar{x}$ inside the interval $(0,1)$ if $a < d < c$ or $d < a < b$ (under the assumption that $a < c$, $d < b$).

Now we give conditions concerning the existence of an interior stationary state $\bar{x}$ when $\tau_D$ is fixed and $\tau_C$ converges to zero.

Proposition A.9.
Assume that $0 < b < a < c$ and $d = 0$. If one of the following conditions holds:
(a) $\tau_D \leq \frac{b}{a(a-b)}$, $a > 1$, and $\tau_D > \frac{\ln(c/a)}{a - 1}$,
(b) $\tau_D > \frac{b}{a(a-b)}$ and $b > 1$,
then for $\tau_C < \tau_D$ close enough to $0$ there exists no interior equilibrium state $\bar{x} \in (0,1)$.

Proof. We check whether the equation $F(x) = 0$, where $F$ is given by (13), has a solution $\bar{x} \in (0,1)$ for fixed $\tau_D$ and a small $\tau_C$. It is easy to see that the function $F$ is continuous with respect to $\tau_C$ and $x$ for $\tau_C < \tau_D$. Thus, it is enough to check the existence of a solution to $F(x) = 0$ for $\tau_C = 0$. In this case the function $F$ simplifies to
\[ F(x) = \frac{1}{\tau_D} \ln\left( \frac{cx}{ax + b(1-x)} \right) + 1 - \big(ax + b(1-x)\big). \]
We calculate the derivative of $F$,
\[ F'(x) = \frac{b}{\tau_D \, x \big((a-b)x + b\big)} - (a - b). \]
We see that it is a decreasing function of $x \in (0,1)$ (as $a > b$) and therefore $F$ is concave. Moreover, $\lim_{x \to 0^+} F'(x) = +\infty$, thus
1. if $\tau_D \leq \frac{b}{a(a-b)}$, then $F$ is increasing in $(0,1)$;
2. if $\tau_D > \frac{b}{a(a-b)}$, then $F$ has exactly one maximum in $(0,1)$, at the point
\[ x_{\max} = \frac{b}{2(a-b)} \left( \sqrt{1 + \frac{4}{b \tau_D}} - 1 \right). \]
Let us consider the first case. Here $F'(x) > 0$ for all $x \in (0,1)$, because $F'$ is decreasing and $F'(1) \geq 0$. Thus $F$ is an increasing function of $x$ and, as $\lim_{x \to 0^+} F(x) = -\infty$, it has a zero in the interval $(0,1)$ if and only if $F(1) > 0$. An easy calculation allows us to derive condition (a).

Now let us consider the second case. Some algebraic manipulations lead to
\[ F(x_{\max}) = \frac{1}{\tau_D} \left( \ln \frac{c}{a - b} + \ln\left( \frac{4}{b \tau_D} \right) - 2 \ln\left( 1 + \sqrt{1 + \frac{4}{b \tau_D}} \right) \right) + 1 - \frac{b}{2} \left( 1 + \sqrt{1 + \frac{4}{b \tau_D}} \right). \]
We show that $F(x_{\max})$ approaches its supremum either for $\tau_D \to +\infty$ or for $\tau_D \to \frac{b}{a(a-b)}$. Let us introduce a new variable $z = \frac{4}{b \tau_D}$ and denote $\tilde{c} = \frac{c}{a-b}$. Writing $F(x_{\max})$ in the variable $z$, we have
\[ F(x_{\max}) = h(z) = \frac{bz}{4} \left( \ln \tilde{c} + \ln z - 2 \ln\big( 1 + \sqrt{1 + z} \big) \right) + 1 - \frac{b}{2} \left( 1 + \sqrt{1 + z} \right), \]
where $0 < z < \frac{4a(a-b)}{b^2}$, as $\tau_D > \frac{b}{a(a-b)}$. We calculate the derivative of $h$ with respect to $z$ and obtain
\[ h'(z) = \frac{b}{4} \ln\left( \frac{\tilde{c}\, z}{\big(1 + \sqrt{1+z}\big)^2} \right). \]
$h'$ has at most one zero for $z > 0$ and $h'(z) < 0$ for $z$ close to $0$. Hence, the function $h$ is decreasing for small $z$ and it may increase for large $z$, having at most one minimum for $z > 0$. Thus it approaches its supremum either for $z \to 0$ (i.e., $\tau_D \to +\infty$) or for $z \to \frac{4a(a-b)}{b^2}$. In the latter case we have
\[ \tau_D \to \frac{b}{a(a-b)} \implies x_{\max} \to 1, \]
and we arrive at the case considered earlier. On the other hand, $\lim_{z \to 0^+} h(z) = 1 - b < 0$ if condition (b) holds. Thus $F(x) < 0$ for all $x \in (0,1)$ and no interior stationary state exists.
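The case-(b) mechanism is easy to verify numerically. The sketch below (plain Python; the parameter values are illustrative, chosen so that $0 < b < a < c$, $b > 1$ and $\tau_D > b/(a(a-b))$) checks that $x_{\max}$ is indeed the interior critical point of $F$ with $\tau_C = 0$ and that the maximum of $F$ is negative, so no interior stationary state exists:

```python
import math

def F0(x, a, b, c, tau_D):
    # F for tau_C = 0 and d = 0, as in the proof of Proposition A.9.
    g = a * x + b * (1 - x)
    return math.log(c * x / g) / tau_D + 1.0 - g

a, b, c, tau_D = 1.5, 1.2, 2.0, 5.0   # illustrative: 0 < b < a < c, b > 1
assert tau_D > b / (a * (a - b))      # we are in case (b)

x_max = b / (2 * (a - b)) * (math.sqrt(1 + 4 / (b * tau_D)) - 1)
eps = 1e-6
deriv = (F0(x_max + eps, a, b, c, tau_D)
         - F0(x_max - eps, a, b, c, tau_D)) / (2 * eps)
fmax = F0(x_max, a, b, c, tau_D)      # maximum value of F on (0, 1)
```

Since $F$ is concave and its maximum value is negative, $F(x) < 0$ on the whole interval, in agreement with the proposition.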
Remark A.10. Note that if $\tau_D \leq \frac{b}{a(a-b)}$ and either $a < 1$ or $\tau_D < \frac{\ln(c/a)}{a-1}$, then there exists exactly one interior stationary state $\bar{x} \in (0,1)$ for $\tau_C < \tau_D$ close enough to $0$. On the other hand, if $\tau_D > \frac{b}{a(a-b)}$ and $b < 1$, then there exist one or two interior stationary states $\bar{x} \in (0,1)$ for $\tau_C < \tau_D$ close enough to $0$, provided $\tau_D$ is large enough. In fact, if $a < 1$, then for a sufficiently large $\tau_D$ there exist two stationary states $\bar{x} \in (0,1)$.

If $\tau_C$ is small enough (for a fixed $\tau_D$) or, reversely, if $\tau_D$ is large enough (for a fixed $\tau_C$), there exists no interior stationary state. Thus, it is possible to calculate the threshold values $\tau_C^*$ and $\tau_D^*$. It can be seen that the interior stationary state disappears when it merges with the stationary state $1$. Thus, looking for $\tau_C^*$ and $\tau_D^*$ such that $F(1) = 0$, after some algebraic calculations we obtain the implicit formula
\[ \tau_C^* = -\frac{\tau_D^* - \tau_C^*}{\ln(c/a)} \, \ln\left( \frac{1}{a} \left( \frac{\ln(c/a)}{\tau_D^* - \tau_C^*} + 1 \right) \right). \tag{A.5} \]
We have that for $\tau_C < \tau_C^*$ or for $\tau_D > \tau_D^*$ there exists no interior stationary state.
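The implicit relation (A.5) can be cross-checked against the defining condition $F(1) = 0$. In the sketch below (plain Python, illustrative values of $a$, $c$, $\tau_D$), the threshold $\tau_C^*$ is found by bisection on $F(1) = 0$ and then substituted into the right-hand side of (A.5):

```python
import math

def F1(a, c, tau_C, tau_D):
    # F(1) as in the proof of Theorem A.3.
    k = tau_C / (tau_C - tau_D)
    return math.log(a / c) / (tau_C - tau_D) + 1.0 - a * (c / a) ** k

a, c, tau_D = 1.5, 2.0, 5.0           # illustrative: a > 1 and a < c
lo, hi = 1e-9, tau_D - 1e-9           # F(1) < 0 near 0, F(1) > 0 near tau_D
for _ in range(200):                  # bisection for the threshold tau_C*
    mid = 0.5 * (lo + hi)
    if F1(a, c, lo, tau_D) * F1(a, c, mid, tau_D) <= 0:
        hi = mid
    else:
        lo = mid
tau_C = 0.5 * (lo + hi)

# right-hand side of the implicit formula (A.5)
L = math.log(c / a)
rhs = -(tau_D - tau_C) / L * math.log((L / (tau_D - tau_C) + 1.0) / a)
```

At the bisection root the pair $(\tau_C^*, \tau_D^*)$ satisfies (A.5) to machine precision, confirming that (A.5) is just $F(1) = 0$ rearranged.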
2. Calculations concerning a Stag-hunt-type game
In this part of the Appendix we provide some details of the calculations that allowed us to formulate statements about the slope of the curve $(\tau_C^*, \tau_D^*)$ given by formula (18) in the paper. Let us recall this formula here:
\[ \tau_D^* = \tau_C^* \left( 1 + \frac{\ln \beta}{W_p\big(a \tau_C^* e^{\tau_C^*}\big) - \tau_C^*} \right). \tag{A.6} \]
First note that $\tau_D^* > 0$ for $\tau_C^* > \tau_{A,0}^*$, where
\[ \tau_{A,0}^* = -\frac{\ln \beta}{a\beta - 1}. \tag{A.7} \]
We also have $\tau_D^* = 0$ for $\tau_C^* = \tau_{A,0}^*$, and then
\[ W_p\big( a \tau_{A,0}^* e^{\tau_{A,0}^*} \big) = \tau_{A,0}^* - \ln \beta. \tag{A.8} \]
We calculate two quantities:
1. the derivative of the right-hand side of (A.6) with respect to $\tau_C^*$ at the point $\tau_{A,0}^*$;
2. the slope of the right-hand side of (A.6) for $\tau_C^* \to +\infty$.

Here we calculate $g'(\tau_{A,0}^*)$, where
\[ g(\tau_C^*) = \tau_C^* \left( 1 + \frac{\ln \beta}{W_p\big(y(\tau_C^*)\big) - \tau_C^*} \right), \qquad y(\tau_C^*) = a \tau_C^* e^{\tau_C^*}. \]
We have
\[ g'(\tau_{A,0}^*) = \left( 1 + \frac{\ln \beta}{W_p\big(y(\tau_{A,0}^*)\big) - \tau_{A,0}^*} \right) - \frac{\tau_{A,0}^* \ln \beta}{\big( W_p\big(y(\tau_{A,0}^*)\big) - \tau_{A,0}^* \big)^2} \left( W_p'\big(y(\tau_{A,0}^*)\big)\, y'(\tau_{A,0}^*) - 1 \right). \tag{A.9} \]
Because of (A.8), the first parenthesis of (A.9) is equal to $0$. Moreover, $W_p\big(y(\tau_{A,0}^*)\big) - \tau_{A,0}^* = -\ln \beta$. Thus formula (A.9) simplifies to
\[ g'(\tau_{A,0}^*) = -\frac{\tau_{A,0}^*}{\ln \beta} \left( W_p'\big(y(\tau_{A,0}^*)\big)\, y'(\tau_{A,0}^*) - 1 \right). \tag{A.10} \]
Now we calculate the expression in the parenthesis. In order to shorten the notation we write $y$ and $y'$, omitting the dependence of $y$ on $\tau_{A,0}^*$. Using (A.8) and (A.7) we get
\[ W_p'(y) = \frac{W_p(y)}{y \big( 1 + W_p(y) \big)} = \frac{-\dfrac{a\beta \ln \beta}{a\beta - 1}}{y \, \dfrac{a\beta - 1 - a\beta \ln \beta}{a\beta - 1}} = \frac{-a\beta \ln \beta}{y \big( a\beta - 1 - a\beta \ln \beta \big)} = \frac{a\beta - 1}{a\beta - 1 - a\beta \ln \beta} \, \beta^{a\beta/(a\beta - 1)}, \]
because
\[ y = a \tau_{A,0}^* e^{\tau_{A,0}^*} = -\frac{a \ln \beta}{a\beta - 1} \, \beta^{-1/(a\beta - 1)}. \]
On the other hand, we have
\[ y'(\tau_{A,0}^*) = a \big( 1 + \tau_{A,0}^* \big) e^{\tau_{A,0}^*} = a \, \frac{a\beta - 1 - \ln \beta}{a\beta - 1} \, \beta^{-1/(a\beta - 1)}. \]
Plugging the formulas for $W_p'(y)$ and $y'$ into (A.10) we get the final formula
\[ g'(\tau_{A,0}^*) = \frac{a\beta - 1}{a\beta - 1 - a\beta \ln \beta}. \tag{A.11} \]
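Both quantities announced above can be checked numerically. The sketch below (plain Python; `wp_shifted` is a hypothetical helper that evaluates $W_p(a\tau e^{\tau})$ by Newton iteration in logarithmic form, so large $\tau$ does not overflow; $a$ and $\beta$ are illustrative values with $a\beta > 1$) verifies the derivative value (A.11) at $\tau_{A,0}^*$ and the asymptotic slope $1 + \ln\beta/\ln a$ of (A.6):

```python
import math

def wp_shifted(a, tau, tol=1e-12):
    # W_p(a * tau * e^tau) via Newton on w + ln(w) = ln(a) + ln(tau) + tau,
    # avoiding overflow of exp(tau) for large tau.
    s = math.log(a) + math.log(tau) + tau
    w = max(tau, 1.0)                 # W_p(a tau e^tau) ~ tau for large tau
    for _ in range(200):
        step = (w + math.log(w) - s) / (1.0 + 1.0 / w)
        w -= step
        if abs(step) < tol * max(1.0, abs(w)):
            break
    return w

def g(tau, a, beta):
    # right-hand side of (A.6): tau_D* as a function of tau_C*
    return tau * (1.0 + math.log(beta) / (wp_shifted(a, tau) - tau))

a, beta = 3.0, 0.5                    # illustrative values with a*beta > 1
tau_A0 = -math.log(beta) / (a * beta - 1.0)                  # (A.7)

eps = 1e-5
numeric = (g(tau_A0 + eps, a, beta) - g(tau_A0 - eps, a, beta)) / (2 * eps)
predicted = (a * beta - 1.0) / (a * beta - 1.0 - a * beta * math.log(beta))  # (A.11)

slope_inf = g(1e6, a, beta) / 1e6     # approximates the limiting slope
limit = 1.0 + math.log(beta) / math.log(a)
```

The finite-difference derivative at $\tau_{A,0}^*$ matches (A.11), and $g(\tau)/\tau$ for large $\tau$ approaches $1 + \ln\beta/\ln a$.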
In order to calculate the slope of the right-hand side of (A.6) for $\tau_C^* \to +\infty$, we note (see [31, Eq. 4.13.10]) that
\[ W_p(y) = \ln y - \ln \ln y + r(y), \]
where $r(y) \to 0$ as $y \to +\infty$. We have then
\[ W_p\big( a \tau_C^* e^{\tau_C^*} \big) - \tau_C^* = \ln\big( a \tau_C^* e^{\tau_C^*} \big) - \ln \ln\big( a \tau_C^* e^{\tau_C^*} \big) + r\big( a \tau_C^* e^{\tau_C^*} \big) - \tau_C^* = \ln a + \ln \tau_C^* - \ln\big( \tau_C^* + \ln a + \ln \tau_C^* \big) + r\big( a \tau_C^* e^{\tau_C^*} \big) \to \ln a \]
as $\tau_C^* \to +\infty$. Thus, the slope of the right-hand side of (A.6) for $\tau_C^* \to +\infty$ is equal to $1 + \frac{\ln \beta}{\ln a}$.

Acknowledgments
We thank the National Science Centre, Poland, for financial support under grant no. 2015/17/B/ST1/00693. We would like to thank two anonymous referees for valuable comments and suggestions which greatly improved our paper.

[1] J. Maynard Smith and G. R. Price, The logic of animal conflict, Nature (London) 246, 15 (1973).
[2] J. Maynard Smith, Evolution and the Theory of Games, Cambridge University Press, Cambridge (1982).
[3] J. Weibull, Evolutionary Game Theory, MIT Press, Cambridge MA (1995).
[4] F. Vega-Redondo, Evolution, Games, and Economic Behaviour, Oxford University Press, Oxford (1996).
[5] J. Hofbauer and K. Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, Cambridge (1998).
[6] H. P. Young, Individual Strategy and Social Structure: An Evolutionary Theory of Institutions, Princeton University Press, Princeton (1998).
[7] R. Cressman, Evolutionary Dynamics and Extensive Form Games, MIT Press, Cambridge (2003).
[8] M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life, Harvard University Press, Cambridge (2006).
[9] W. H. Sandholm, Population Games and Evolutionary Dynamics, MIT Press, Cambridge (2009).
[10] P. D. Taylor and L. B. Jonker, Evolutionarily stable strategy and game dynamics, Math. Biosci. 40, 145-156 (1978).
[11] J. Hofbauer, P. Schuster, and K. Sigmund, A note on evolutionarily stable strategies and game dynamics, J. Theor. Biol. 81, 609-612 (1979).
[12] I. Győri and G. Ladas, Oscillation Theory of Delay Differential Equations with Applications, Clarendon Press, Oxford (1991).
[13] K. Gopalsamy, Stability and Oscillations in Delay Differential Equations of Population Dynamics, Springer (1992).
[14] Y. Kuang, Delay Differential Equations with Applications in Population Dynamics, Academic Press (1993).
[15] T. Erneux, Applied Delay Differential Equations, Springer (2009).
[16] Y. Tao and Z. Wang, Effect of time delay and evolutionarily stable strategy, J. Theor. Biol. 187, 111-116 (1997).
[17] J. Alboszta and J. Miękisz, Stability of evolutionarily stable strategies in discrete replicator dynamics with time delay, J. Theor. Biol. 231, 175-179 (2004).
[18] H. Oaku, Evolution with delay, Japan. Econ. Review 53, 114-133 (2002).
[19] R. Iijima, Heterogeneous information lags and evolutionary stability, Math. Soc. Sci. 63, 83-85 (2011).
[20] R. Iijima, On delayed discrete evolutionary dynamics, J. Theor. Biol. 300, 1-6 (2012).
[21] J. A. Moreira, F. L. Pinheiro, N. Nunes, and J. M. Pacheco, Evolutionary dynamics of collective action when individual fitness derives from group decisions taken in the past, J. Theor. Biol. 298, 8-15 (2012).
[22] J. Miękisz and S. Wesołowski, Stochasticity and time delays in evolutionary games, Dyn. Games Appl. 1, 440-448 (2011).
[23] J. Miękisz, M. Matuszak, and J. Poleszczuk, Stochastic stability in three-player games with time delays, Dyn. Games Appl. 4, 489-498 (2014).
[24] E. Wesson and R. Rand, Hopf bifurcations in delayed rock-paper-scissors replicator dynamics, Dyn. Games Appl. 6, 139-156 (2016).
[25] E. Wesson, R. Rand, and D. Rand, Hopf bifurcations in two-strategy delayed replicator dynamics, Int. J. Bifurcation Chaos 26, 1650006 (2016).
[26] N. Ben Khalifa, R. El-Azouzi, and Y. Hayel, Discrete and continuous distributed delays in replicator dynamics, Dyn. Games Appl. 8, 713-732 (2018).
[27] N. Ben Khalifa, R. El-Azouzi, Y. Hayel, and I. Mabrouki, Evolutionary games in interacting communities, Dyn. Games Appl. 7, 131-156 (2016).
[28] M. Bodnar, J. Miękisz, and R. Vardanyan, Three-player games with strategy-dependent time delays, Dyn. Games Appl. 10, 664-675 (2020).
[29] M. Bodnar, On the nonnegativity of solutions of delay differential equations, Appl. Math. Lett. 13 (6), 91-95 (2000).
[30] V. Křivan and R. Cressman, Interaction times change evolutionary outcome: Two-player matrix games, J. Theor. Biol. 416, 199-207 (2017).
[31]