Dodge and survive: modeling the predatory nature of dodgeball
Perrin E. Ruth∗ and Juan G. Restrepo†
Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309, USA
(Dated: September 8, 2020)

The analysis of games and sports as complex systems can give insights into the dynamics of human competition, and has been proven useful in soccer, basketball, and other professional sports. In this paper we present a model for dodgeball, a popular sport in US schools, and analyze it using an ordinary differential equation (ODE) compartmental model and stochastic agent-based game simulations. The ODE model reveals a rich landscape with different game dynamics occurring depending on the strategies used by the teams, which can in some cases be mapped to scenarios in competitive species models. Stochastic agent-based game simulations confirm and complement the predictions of the deterministic ODE models. In some scenarios, game victory can be interpreted as a noise-driven escape from the basin of attraction of a stable fixed point, resulting in extremely long games when the number of players is large. Using the ODE and agent-based models, we construct a strategy to increase the probability of winning.
I. INTRODUCTION
Games and sports are emerging as a rich testbed to study the dynamics of competition in a controlled environment. Examples include the analysis of passing networks [1, 2] and entropy [3] in soccer games (see also [4] for a discussion on data-driven tactical approaches), scoring dynamics [5-7] and play-by-play modeling [8, 9] in professional sports such as hockey, basketball, football, and table tennis, penalty kicks in soccer games [10], and serves in tennis matches [11]. Here we explore the dynamics of dodgeball, where the number of players playing different roles changes dynamically and ultimately determines the outcome of the game. While modeling dodgeball might seem like a very specific task, it is a relatively clean and well-defined system where the ability of mean-field techniques [12, 13] to describe human competition can be put to the test. In addition, it complements ongoing efforts to quantify and model dynamics in sports and games [1-11].

In this paper we present and analyze a mathematical model of dodgeball based on both agent-based stochastic game simulations and an ordinary differential equation (ODE) based compartmental model. By analyzing the stability of fixed points of the ODE system, we find that different game dynamics can occur depending on the teams' strategies: one of the teams achieves a quick victory, either team can achieve a victory depending on initial conditions, or the game evolves into a stalemate. For the simplest strategy choice, these regimes can be interpreted in the context of a competitive Lotka-Volterra model. Numerical simulations of games based on stochastic behavior of individual players reveal that the stalemate regime corresponds to extremely long games with large fluctuations. These long games can be interpreted as a noise-driven escape from the basin of attraction of the stable stalemate fixed point, and are commonly observed in dodgeball games (see Fig. 2).

∗ [email protected]
† [email protected]
Using both the stochastic and ODE models, we develop a greedy strategy and demonstrate it using stochastic simulations.

The structure of the paper is as follows. In Section II we describe the rules of the game we will analyze. In Section III we present and analyze a compartment-based model of dodgeball. In Section IV we present stochastic numerical simulations of dodgeball games and compare these with the predictions of the compartmental model. We then discuss the notion of strategy in the context of this stochastic model. Finally, we present our conclusions in Sec. V.

II. DESCRIPTION OF DODGEBALL
In this paper we consider the following variant played often in elementary schools in the US (sometimes called prison dodgeball). Two teams (Team 1 and Team 2) of N players each initially occupy two zones adjacent to each other, which we will refer to as Court 1 and Court 2 (Fig. 1). Each team also has a Jail, an area behind the opposite team's Court. A player in a Court who is hit by a ball thrown from the opposing Court is sent to their Team's Jail. A player in a Court may also throw a ball to a player of their own Team in their Jail, and if the ball is caught, the catching player returns to their Team's Court. These processes are illustrated schematically in Fig. 3. We denote the number of players on Team i that are in Court i and Jail i by X_i and Y_i, respectively. Team i loses when X_i = 0. For simplicity, we assume there are always available balls and neglect the possibility that a player catches a ball thrown at them by an enemy player.

FIG. 1. Setup of the dodgeball court. Players in Team i make transitions between Court i and Jail i, and Team i loses when there are no players in Court i.

FIG. 2. Evolution of two fifth-grade dodgeball games played in Eisenhower Elementary in Boulder, Colorado, USA. The numbers of players in Courts 1 and 2, X_1 and X_2, fluctuate for a long time without any team gaining a decisive advantage. The games were eventually stopped and a winner decided on the spot.

In practice, games often last a long time without any of the Teams managing to send all the enemy players to Jail. Because of this, such games are stopped at a predetermined time and the winner is decided based on other factors (e.g., which Team has more players on their Court). An example of this is in Figure 2, which shows the numbers of players in Courts 1 and 2, X_1 and X_2, during two fifth-grade dodgeball games in Eisenhower Elementary in Boulder, Colorado. The values of X_1 and X_2 seem to fluctuate without any team obtaining a decisive advantage. The games continued after the time interval shown and were eventually stopped. Our subsequent model and analysis suggest that this stalemate behavior is the result of underlying dynamics that has a stable fixed point about which X_1 and X_2 fluctuate.

III. RATE EQUATION DESCRIPTION OF GAME DYNAMICS
We begin our description of the game dynamics by adopting a continuum formulation where the numbers of players in Courts 1 and 2 are approximated by continuous variables. These variables evolve following rate equations obtained from the rates at which the processes described in the previous section and illustrated in Fig. 3 occur. Since the number of players in a dodgeball game is not too large (typically less than 50), and the game is decided when the number of players in a court drops to zero, one might question the validity of a continuum description. However, as we will see in Sec. IV, stochastic simulations with few players show that the rate equations give useful insights about the dynamics of simulated games with a finite number of players.

To construct the rate equations, we define λ as the mean throw rate of the players. Consequently, team i throws balls at a rate λX_i. We also define F_i(X_1, X_2) as the fraction of balls that team i throws that are directed at enemy players, p_e(X) as the probability that a ball thrown at X opposing players hits one of them, and p_j(Y) as the probability that a ball thrown at Y players in jail is caught. Combining these processes and using Y_i = N − X_i we get the Dodgeball Equations:

    dX_1/dt = λ X_1 [1 − F_1(X_1, X_2)] p_j(N − X_1) − λ X_2 F_2(X_1, X_2) p_e(X_1),   (1)
    dX_2/dt = λ X_2 [1 − F_2(X_1, X_2)] p_j(N − X_2) − λ X_1 F_1(X_1, X_2) p_e(X_2).   (2)

Note that, given the initial conditions X_i(0) = N, X_i(t) ∈ [0, N] for all t ≥ 0.
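As a concrete illustration, Eqs. (1)-(2) can be integrated numerically. The sketch below is our own illustration, not part of the original analysis: the parameter values are arbitrary, the strategies are held constant, and the linear forms p_e(X) = k_e X and p_j(Y) = k_j Y anticipate the simplification adopted next.

```python
# Euler integration of the Dodgeball Equations (1)-(2) with linear
# hit/catch probabilities p_e(X) = ke*X and p_j(Y) = kj*Y.
# All parameter values are illustrative, not fitted to real games.
N, lam, ke, kj = 20, 1.0, 0.02, 0.03   # c = ke/kj = 2/3
F1, F2 = 0.25, 0.75                    # constant strategies (see Sec. III A)
X1, X2 = float(N), float(N)            # everyone starts in their Court
dt = 0.01
for _ in range(20000):
    dX1 = lam * X1 * (1 - F1) * kj * (N - X1) - lam * X2 * F2 * ke * X1
    dX2 = lam * X2 * (1 - F2) * kj * (N - X2) - lam * X1 * F1 * ke * X2
    X1 += dt * dX1
    X2 += dt * dX2
# For these values the game is in the stalemate regime and (X1, X2)
# settles at the interior fixed point (12, 12), i.e. x1* = x2* = 0.6.
```

For these (assumed) parameters neither court ever empties, which previews the stalemate scenario analyzed below.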
For simplicity, we assume the functions p_j and p_e to be linear, p_j(Y) = k_j Y and p_e(X) = k_e X.

FIG. 3. (Top) A player in a Court can be sent to Jail when hit by a ball from a player in the opposing Court. (Bottom) A player can be saved from Jail when catching a ball thrown by a player from their Court.

Symbol | Meaning
a_i    | Probability that a player in Team i tries to hit an opponent instead of saving a teammate from jail
x_i    | Fraction of players in Team i in Court i
c      | Probability of hitting / probability of saving
TABLE I. Notation used in the dodgeball model Equations (5)-(6).

Defining the normalized number of players x_i = X_i/N ∈ [0, 1] and the dimensionless time τ = λNk_j t, we get the simplified Dodgeball Equations:

    dx_1/dτ = x_1(1 − x_1)[1 − f_1(x_1, x_2)] − c x_1 x_2 f_2(x_1, x_2),   (3)
    dx_2/dτ = x_2(1 − x_2)[1 − f_2(x_1, x_2)] − c x_1 x_2 f_1(x_1, x_2),   (4)

where f_i(x_1, x_2) = F_i(Nx_1, Nx_2) and c = k_e/k_j > 0.

A. Example: fixed strategy
As an illustrative example we will focus on the case when the strategy for both teams is fixed over the course of the game, f_i(x_1, x_2) = a_i ∈ (0, 1). We consider more general f_i (i.e., strategies) in Sec. IV. Inserting f_i(x_1, x_2) = a_i into Equations (3)-(4) gives

    dx_1/dτ = x_1(1 − x_1)(1 − a_1) − c x_1 x_2 a_2,   (5)
    dx_2/dτ = x_2(1 − x_2)(1 − a_2) − c x_1 x_2 a_1,   (6)

which is a 2-species competitive Lotka-Volterra system [14]. In this case, we can use known results about this system to understand the possible game scenarios. Specifically, at τ = 0 the system starts at (x_1, x_2) = (1, 1), and for τ > 0 the solution converges towards one of the stable fixed points of (5)-(6) in the invariant square [0, 1] × [0, 1]. The possible fixed points are (0, 0), (1, 0), (0, 1), and the solutions (x_1*, x_2*) of the linear system

    0 = (1 − x_1)(1 − a_1) − c x_2 a_2,   (7)
    0 = (1 − x_2)(1 − a_2) − c x_1 a_1.   (8)

If a_1 a_2 c² ≠ (1 − a_1)(1 − a_2) there is a unique solution to these equations, the fixed point

    x_1* = (1 − a_2)[a_2 c − (1 − a_1)] / [a_1 a_2 c² − (1 − a_1)(1 − a_2)],   (9)
    x_2* = (1 − a_1)[a_1 c − (1 − a_2)] / [a_1 a_2 c² − (1 − a_1)(1 − a_2)].   (10)

The degenerate case where a_1 a_2 c² = (1 − a_1)(1 − a_2) gives a continuum of fixed points described by

    x_1* + x_2* = 1,   (11)

when a_1 = (1 − a_2)/c and a_2 = (1 − a_1)/c, and no solution otherwise.

FIG. 4. Stream plots of Equations (5)-(6) with c = 1/2 and various a_1 and a_2. (Top left) Stalemate: for a_1 = 1/4, a_2 = 3/4, both (0, 1) and (1, 0) are unstable and (x_1*, x_2*) is stable. (Top right) Team 1 wins: for a_1 = 9/16, a_2 = 3/4, (1, 0) is a stable fixed point while (0, 1) is unstable, giving Team 1 the advantage; note that in this case (x_1*, x_2*) ∉ [0, 1] × [0, 1]. (Bottom left) Competitive: for a_1 = 7/8, a_2 = 3/4, both (0, 1) and (1, 0) are stable fixed points, and the winner is determined by the initial conditions. (Bottom right) Degenerate: for the special case a_1 = a_2 = (1 + c)⁻¹, every point on the line x_1 + x_2 = 1 is a fixed point.

The fixed point (0, 0) corresponds to both teams running out of players, the fixed points (1, 0) and (0, 1) correspond to Team 1 and Team 2 winning, respectively, and the fixed point (x_1*, x_2*), when it is stable and in (0, 1) × (0, 1), corresponds to a stalemate situation where the number of players in each court remains constant in time. By analyzing the linear stability of the fixed points (see, e.g., [14]), one finds that the game dynamics can be classified in the following cases:

• Stalemate. This occurs when (0, 1) and (1, 0) are both unstable and (x_1*, x_2*) is in [0, 1] × [0, 1] and is stable, which occurs when a_1 < (1 − a_2)/c and a_2 < (1 − a_1)/c. In this scenario, the solution settles in the fixed point (x_1*, x_2*) and no Team wins in the deterministic version of the game. The flow corresponding to this case is shown in Fig. 4 (top left). This scenario is analogous to the "Stable coexistence" of species in the Lotka-Volterra model.

• Competitive. This occurs when (0, 1) and (1, 0) are stable and the fixed point (x_1*, x_2*) is in [0, 1] × [0, 1] and is unstable, which occurs when a_1 > (1 − a_2)/c and a_2 > (1 − a_1)/c. The stable manifold of (x_1*, x_2*) acts as a separatrix for the basins of attraction of the fixed points that correspond to victories for Team 1 and Team 2. See Fig. 4 (bottom left). This scenario is analogous to the "Unstable coexistence" of species in the Lotka-Volterra model.

• Team 1 wins. This occurs when (0, 1) is unstable and (1, 0) is stable, which occurs when a_1 > (1 − a_2)/c and a_2 < (1 − a_1)/c. In this scenario, the solution converges towards a victory by Team 1. See Fig. 4 (top right). This scenario is analogous to the "Competitive exclusion" of species in the Lotka-Volterra model, in which one species is driven to extinction by the other.

• Team 2 wins. This occurs when (0, 1) is stable and (1, 0) is unstable, and is analogous to the Team 1 wins case. In this scenario, the solution converges towards a victory by Team 2.

• Degenerate. This occurs when there is a continuum of fixed points x_1* + x_2* = 1. In this scenario, the solution converges towards the line x_1 + x_2 = 1, and no winner is produced in the deterministic version of the game. See Fig. 4 (bottom right).
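The fixed-point formulas (9)-(10) and the stability conditions in the bullet list above can be packaged into a small helper. This is our own illustrative sketch (the function names are ours), directly transcribing the conditions quoted in the text.

```python
def interior_fixed_point(a1, a2, c):
    """Interior fixed point (x1*, x2*) of Eqs. (5)-(6), from Eqs. (9)-(10).
    Returns None in the degenerate case a1*a2*c**2 = (1-a1)*(1-a2)."""
    den = a1 * a2 * c**2 - (1 - a1) * (1 - a2)
    if den == 0:
        return None
    x1 = (1 - a2) * (a2 * c - (1 - a1)) / den
    x2 = (1 - a1) * (a1 * c - (1 - a2)) / den
    return x1, x2

def classify_game(a1, a2, c):
    """Deterministic regime from the linear stability conditions:
    (1,0) is stable iff a1 > (1-a2)/c; (0,1) is stable iff a2 > (1-a1)/c."""
    team1_corner_stable = a1 > (1 - a2) / c   # victory state (1,0) attracts
    team2_corner_stable = a2 > (1 - a1) / c   # victory state (0,1) attracts
    if team1_corner_stable and team2_corner_stable:
        return "Competitive"
    if team1_corner_stable:
        return "Team 1 wins"
    if team2_corner_stable:
        return "Team 2 wins"
    return "Stalemate"
```

For instance, classify_game(1/4, 3/4, 1/2) returns "Stalemate", matching the top-left panel of Fig. 4.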
Figure 4 illustrates these different game dynamics by showing the flow induced by Eqs. (5)-(6) in the region 0 ≤ x_1 ≤ 1, 0 ≤ x_2 ≤ 1. As shown in Fig. 5, the strategy space (a_1, a_2) is divided into four regions separated by the lines a_1 = (1 − a_2)/c and a_2 = (1 − a_1)/c. When both teams preferentially save players of their own team from jail, instead of trying to hit players from the other team (i.e., both a_1 and a_2 are small), the game results in a stalemate (we reiterate that when stochasticity is included, this scenario corresponds to long games). When both teams preferentially hit players from the other team (i.e., both a_1 and a_2 are close to 1) a winner emerges quickly. When teams have opposite strategies, one of the teams can quickly win, depending on the value of c.

FIG. 5. Deterministic game outcomes based on different strategies (a_1, a_2) for (a) c = 2/3 < 1 and (b) c = 3/2 > 1, with the regions separated by the lines a_1 = (1 − a_2)/c and a_2 = (1 − a_1)/c.

While the rate equation description provides interesting insights, it relies on the assumption of an infinite number of players. Because of this, some of its predictions are not reasonable for games with a finite number of players. For example, it predicts that the outcome of games is completely determined by parameters and initial conditions. In reality, games are determined by the aggregate behavior of a finite number of individual players, and chance can play an important role. In the next section we will model dodgeball games by considering the stochastic behavior of individual players, and we will find that the insights provided by the rate equations are useful to understand the stochastic dodgeball games.

IV. STOCHASTIC DODGEBALL SIMULATIONS
In this Section we present numerical simulations of dodgeball games using a stochastic agent-based model that corresponds to the simplified model used in Section III.

In the stochastic version of the game, each team starts with N players in their respective court, X_1(0) = X_2(0) = N, and no players in Jail, Y_1(0) = Y_2(0) = 0. Players in Court 1 make stochastic transitions to Jail 1 at rate λX_2(t)F_2(X_1, X_2)k_e X_1, and players in Jail 1 make transitions to Court 1 at rate λX_1[1 − F_1(X_1, X_2)]k_j(N − X_1), where, as in Sec. III, F_i(X_1, X_2) is the probability that a player in Court i will throw a ball towards an enemy player in the opposite Court instead of trying to save a teammate from Jail, k_e is the probability of hitting a single enemy player, and k_j is the probability that a player in Jail catches a ball thrown at them. The rates of transition for players in Team 2 are obtained by permuting the indices 1 and 2. By using the dimensionless time τ = λk_j t, the rates of transition per dimensionless time are cX_1X_2F_2(X_1, X_2) and X_1(N − X_1)[1 − F_1(X_1, X_2)] for players to transition from Court 1 to Jail 1 and from Jail 1 to Court 1, respectively, where c = k_e/k_j. The compartmental model corresponding to this process is shown schematically in Fig. 6. The code used for simulating the agent-based dodgeball model and finding the probability that a team wins can be found on the GitHub repository (https://github.com/Dodgeball-code/Dodgeball).

FIG. 6. Stochastic dodgeball game. Players make transitions between the indicated compartments with the rates shown next to the arrows. The game ends when either X_1 = 0 or X_2 = 0.

FIG. 7. Simulations of games with the same constants as Fig. 4. Trajectories (X_1, X_2) have stochastic fluctuations on top of the deterministic flow of Fig. 4. The "Stalemate" regime (top left) results in long, back-and-forth games.

A. Stochastic games
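The transition rates of Fig. 6 can be simulated with a minimal Gillespie-style sketch. This is our own illustration for the special case of constant strategies F_1 = a_1, F_2 = a_2, working directly in the dimensionless time τ; the authors' full code is in the repository linked above.

```python
import random

def simulate_game(N, a1, a2, c, seed=None, max_events=10**6):
    """Gillespie-style simulation of the stochastic game of Fig. 6 with
    constant strategies F1 = a1, F2 = a2. Returns (winner, tau), where
    winner is 1 or 2 (0 if the event cap is reached) and tau is the
    dimensionless duration of the game."""
    rng = random.Random(seed)
    X1, X2, tau = N, N, 0.0
    for _ in range(max_events):
        rates = [c * a2 * X1 * X2,          # Court 1 -> Jail 1 (Team 1 player hit)
                 c * a1 * X1 * X2,          # Court 2 -> Jail 2 (Team 2 player hit)
                 (1 - a1) * X1 * (N - X1),  # Jail 1 -> Court 1 (rescue)
                 (1 - a2) * X2 * (N - X2)]  # Jail 2 -> Court 2 (rescue)
        total = sum(rates)
        tau += rng.expovariate(total)       # waiting time to the next event
        u = rng.random() * total            # pick an event proportionally to its rate
        if u < rates[0]:
            X1 -= 1
        elif u < rates[0] + rates[1]:
            X2 -= 1
        elif u < rates[0] + rates[1] + rates[2]:
            X1 += 1
        else:
            X2 += 1
        if X1 == 0:
            return 2, tau
        if X2 == 0:
            return 1, tau
    return 0, tau
```

For example, with parameters deep in the "Team 1 wins" regime (say a_1 = 0.8, a_2 = 0.05, c = 2), most realizations end in a Team 1 victory.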
In Figure 7 we show the evolution of four dodgeball games simulated as described above using the same parameters as in Fig. 4. The plots show the trajectories of (X_1, X_2) starting from initial conditions (50, 50). Note that, although the trajectories have significant fluctuations, they follow approximately the flow shown in Fig. 4. In particular, for the parameters resulting in the stalemate scenario [i.e., a stable fixed point (x_1*, x_2*) ∈ (0, 1) × (0, 1)], the trajectory remains for a long time near (Nx_1*, Nx_2*) (indicated with an arrow). In practice, these parameters result in extremely long games that continue until a random fluctuation is large enough to decrease X_1 or X_2 to zero. To further illustrate this, Fig. 8 shows X_1(t) (blue) and X_2(t) (orange) as a function of t for the parameters in Fig. 4(a). The evolution of this game resembles that of the games seen in Fig. 2, which suggests that those games were in the Stalemate regime. In the degenerate case, Fig. 4(d), the game trajectory has large fluctuations around the line X_1 + X_2 = N, which corresponds to the line of fixed points x_1* + x_2* = 1 of the deterministic system. We interpret this behavior as the trajectory diffusing under the effect of the fluctuations along the marginally stable line X_1 + X_2 = N. Note that in the particular trajectory shown, Team 1 wins even after at some point in time they had only one player in Court 1. In Fig. 7(c) the game eventually results in a victory by Team 1, even though the deterministic model predicts a victory by Team 2 [see Fig. 4(c)], because stochastic fluctuations of the trajectory (X_1, X_2) allow it to cross over to the basin of attraction of (1, 0).

FIG. 8. Fraction of players in Courts 1 and 2 (solid lines) versus dimensionless time τ for a stochastic game simulation with the same parameters as Fig. 4 (top left), i.e., c = 1/2, a_1 = 1/4, a_2 = 3/4, and N = 50. In the "Stalemate" regime, the fraction of players fluctuates stochastically about the fixed point values x_1* = x_2* (dashed line).

Because of this stochasticity, different games with the same parameters can produce different (X_1, X_2) trajectories. To account for this, we focus on how the probability P_1 of winning a game depends on the parameters. This probability can be calculated directly from the outcomes of a large number of simulated games (the algorithm for simulating games is presented in Appendix A 1), but it is much more efficiently calculated by using the properties of the underlying Markov process, as explained in Appendix A 2. To illustrate how the probability of winning can be related to the deterministic results, we fix c = 2/3, a_2 = 3/4, and calculate P_1 as a function of a_1. Fig. 9(a) shows P_1 as a function of a_1 for values of N between 1 and 50 (blue, orange, yellow, purple, and green solid lines, respectively). As a_1 increases from 0 to 1, different regimes of the deterministic model are traversed. For the parameters given, a_s = (1 − a_2)/c = 3/8 and a_c = 1 − a_2 c = 1/2, which are shown as dashed red lines. For 0 ≤ a_1 < a_s, the system is in the "Stalemate" case, for a_s < a_1 < a_c it is in the "Team 1 Wins" case, and for a_c < a_1 < 1, it is in the "Competitive" case.

FIG. 9. (a) Probability P_1 that Team 1 wins a game as a function of a_1 with c = 2/3, a_2 = 3/4, and values of N between 1 and 50 (blue, orange, yellow, purple, and green solid lines, respectively). The dashed red lines mark bifurcations in the deterministic dynamics (see text), and the dashed horizontal line indicates P_1 = 1/2. The leftmost region corresponds to the "Stalemate" regime leading to long games. The middle region represents "Team 1 Wins", which can be noted by the large values of P_1 for large values of N. The right region is the "Competitive" region of the deterministic model, noted by mixed values of P_1 and quicker games. (b) Average duration of games (in dimensionless time τ) with the same parameters as in panel (a). The duration of games in the "Stalemate" regime increases with N. The shaded area around the green curve represents 3 standard deviations.

Now we interpret how P_1 changes as a_1 is increased. For a_1 < 1 − a_2, the fixed point (x_1*, x_2*) is closer to (0, 1) than it is to (1, 0), and therefore P_1 decreases with N, since fluctuations become smaller as N grows. For 1 − a_2 < a_1 < a_s, the game is still in the stalemate regime, but now (x_1*, x_2*) is closer to (1, 0) and therefore P_1 increases with N. For a_s < a_1 < a_c, the game is in the "Team 1 Wins" regime, and so P_1 approaches 1 rapidly as N increases. For a_1 > a_c, the game is in the "Competitive" regime, where the initial condition (1, 1) is in the basin of attraction of (1, 0) for a_2 < a_1 and in the basin of attraction of (0, 1) for a_1 < a_2, which is reflected by the fact that P_1 > 1/2 for a_2 < a_1 and P_1 < 1/2 for a_1 < a_2. We note that for very small N the deterministic predictions are less relevant; e.g., for N = 1 (blue curve), the probability of winning can be calculated explicitly as P_1 = a_1/(a_1 + a_2) = 4a_1/(4a_1 + 3).

According to our interpretation, victory in the "Stalemate" regime is achieved by escaping the basin of attraction of the underlying stable fixed point (x_1*, x_2*) via fluctuations induced by the finite number of players. Since these fluctuations become less important as the number of players increases, one would expect that the average time τ̄ to achieve victory would (i) be largest in the "Stalemate" regime, and (ii) increase with N. Fig. 9(b) shows the average game duration τ̄ as a function of a_1, calculated from direct simulation of 5000 stochastic games when N < 50 and 100 games when N = 50. Consistent with the interpretation above, τ̄ is much longer in the "Stalemate" regime and increases with N [we have found that τ̄ scales exponentially with N (not shown), as one would expect for an escape problem driven by finite-size fluctuations]. Furthermore, it is maximum approximately when (x_1*, x_2*) is equidistant from (0, 1) and (1, 0), i.e., when a_1 = 1 − a_2 [see Fig. 9(a)].

To get a broader picture of how the choice of fixed strategies a_1, a_2 affects the probability of winning, we show in Fig. 10 the probability that Team 1 wins, P_1, as a function of a_1 and a_2, obtained numerically as described in Appendix A 2 for N = 20 and the same parameters as Fig. 5(a). The curve for N = 20 in Fig. 9(a) corresponds to the values shown in the dashed line. There appears to be a saddle point in P_1 as a function of (a_1, a_2), which is suggestive of a Nash equilibrium; such equilibria are studied, for example, in mean-field games [12, 13]. We leave a more detailed study of Nash equilibria in dodgeball for future study.

B. Heuristic Strategy
In the example treated in the previous Sections, the probability that a player in Team i decides to throw a ball at an enemy player instead of rescuing a teammate from jail, F_i(X_1, X_2), is fixed throughout the game at the value a_i. In reality, players may adjust this probability in order to optimize the probability of winning. In this Section we will develop a heuristic greedy strategy with the goal of trying to optimize victory. For this purpose, it is useful to define the quantities H_i as

    H_1 = X_1/(X_1 + X_2),   H_2 = X_2/(X_1 + X_2).   (12)

These quantities have the advantage that they are normalized between 0 and 1, with H_i = 0 (H_i = 1) corresponding to a loss (victory) by Team i. In addition, H_i corresponds to the probability that team i will throw a ball next, and therefore it is a good indicator of how much control team i has. Therefore, it is reasonable for Team i to apply a strategy to increase H_i. To develop such a strategy, we define H_i and H_i⁺ as the values of H_i before and after a ball is thrown. Similarly, we define X_i and X_i⁺ as the values of X_i before and after a ball is thrown. For definiteness, we will present the strategy for Team 1; the strategy for Team 2 is similar. The basis of the strategy is to choose the value of F_1(X_1, X_2) that maximizes the expected value of H_1⁺, E[H_1⁺]. Since F_1 is the probability that the ball is thrown at enemy players, p_e the probability that such a ball actually hits an enemy player, 1 − F_1 the probability that the ball is thrown at a teammate in jail, and p_j the probability that such a ball is successful in rescuing a teammate, the expected value of H_1⁺ is given by

    E[H_1⁺] = F_1 [ X_1/(X_1 + X_2 − 1) p_e + X_1/(X_1 + X_2) (1 − p_e) ]
            + (1 − F_1) [ (X_1 + 1)/(X_1 + X_2 + 1) p_j + X_1/(X_1 + X_2) (1 − p_j) ],   (13)

which can be rewritten as

    E[H_1⁺] = A + B/(X_1 + X_2) F_1,   (14)

where

    B = X_1/(X_1 + X_2 − 1) p_e − X_2/(X_1 + X_2 + 1) p_j   (15)

and A is independent of F_1.

FIG. 10. Probability that Team 1 wins, P_1, as a function of a_1 and a_2. The dashed line corresponds to the N = 20 curve in Fig. 9(a).

Since Eq. (14) is linear in F_1, it is maximized by choosing F_1 = 1 when B > 0 and F_1 = 0 when B < 0. Therefore, the choice of F_1 that maximizes the expected value of H_1⁺, F_1*, is

    F_1* = 1  if  X_1/(X_1 + X_2 − 1) p_e(X_2) ≥ X_2/(X_1 + X_2 + 1) p_j(N − X_1),  and  F_1* = 0  otherwise.   (16)

When X_1, X_2 ≫ 1, the strategy simplifies to

    F_1* ≈ 1  if  X_1 p_e(X_2) ≥ X_2 p_j(N − X_1),  and  F_1* ≈ 0  otherwise.   (17)

We note that this can also be derived by maximizing dH_1/dτ using Eqs. (3)-(4). Furthermore, for the case considered in Sections III and IV, where p_e(X_i) = k_e X_i and p_j(Y_i) = k_j Y_i, the strategy reduces to

    F_1* = 1  if  k_e X_1 ≥ k_j (N − X_1),  and  F_1* = 0  otherwise.   (18)

For example, when k_e = k_j (i.e., the probability of success in hitting an enemy player is the same as the probability of succeeding in rescuing a teammate from jail), the strategy for Team 1 consists in always trying to rescue teammates from Jail 1 when the majority of Team 1's players are in Jail 1, and in trying to hit players from Team 2 when the majority of Team 1's players are in Court 1. Interestingly, in the limit X_1, X_2 ≫ 1 the strategy is independent of X_2.

To validate the effectiveness of this strategy, we simulate dodgeball games in which Team 1 adopts the strategy F_1(X_1, X_2) = F_1* given by Eq. (16) and Team 2 uses the fixed strategy F_2(X_1, X_2) = a_2. In Fig. 11 we plot the probability that Team 1 wins, P_1, as a function of a_2 for c = 2/3, 1, 3/2, and ∞ (blue, orange, yellow, and purple solid lines, respectively). As the Figure shows, using the strategy F_1* consistently results in a probability of winning higher than 1/2.

FIG. 11. Probability of Team 1 winning with the heuristic strategy F_1* against a fixed strategy a_2. The number of players in each game is set to N = 20.
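In code, the greedy rule of Eq. (16) is a single comparison. The sketch below is our own illustration (the function name is ours), using the linear forms p_e(X) = k_e X and p_j(Y) = k_j Y assumed in the text.

```python
def f_star(X1, X2, N, ke, kj):
    """Greedy strategy of Eq. (16) for Team 1 with p_e(X) = ke*X and
    p_j(Y) = kj*Y: attack (F1 = 1) exactly when attacking increases the
    expected control H1 = X1/(X1 + X2) at least as much as rescuing."""
    gain_attack = X1 * (ke * X2) / (X1 + X2 - 1)        # first term of B, Eq. (15)
    gain_rescue = X2 * (kj * (N - X1)) / (X1 + X2 + 1)  # second term of B, Eq. (15)
    return 1 if gain_attack >= gain_rescue else 0
```

For k_e = k_j and many players this reproduces the rule of Eq. (18): throw at the enemy when X_1 ≥ N − X_1, i.e., when at least half of Team 1 is still on the court.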
In general, the strategy F_1* does best when c is small and N is large. Note that the probability of Team 1 winning approaches 1/2 as a_2 → 1 when c = ∞, i.e., when the chance of saving a player in jail is 0; in this case the strategy a_2 = 1 is clearly optimal.

V. CONCLUSIONS
In this paper we presented a mathematical model of dodgeball, which we analyzed via an ODE-based compartmental model and numerical simulations of a stochastic agent-based model. These two complementary methods of analysis revealed a rich dynamical landscape. Depending on the Teams' strategies, the dynamics and outcome of the game is determined by a combination of the stability of the fixed points of the underlying dynamical system and the stochastic fluctuations caused by the random behavior of individual players. Additionally, we derived a greedy strategy in the context of the stochastic model of dodgeball. While our strategy was shown to be effective against fixed strategies (i.e., F_2 = a_2), it isn't necessarily optimal. This suggests the future work of finding an optimal strategy as well as studying the topic of Nash equilibria in the context of dodgeball.

More data is needed to verify some of the predictions of the dodgeball model. While the time series from real games shown in Fig. 2 appear to be consistent with the Stalemate regime, a quantitative comparison would need estimation of the quantities k_e, k_j, a_1, and a_2. In principle, these probabilities could be estimated from recorded dodgeball games. Nevertheless, the continuous model of dodgeball is able to offer reasonable insights into the behavior of stochastic agent-based games with a realistic number of players.

Our model and analysis relied on various assumptions and simplifications, and relaxing some of these assumptions could be a useful topic for future work as well. One significant assumption is that a ball thrown at an enemy player will not be caught. However, it is possible for balls to be caught, and this causes the thrower to be sent to jail; the dodgeball model could be extended to include this situation. Whom a player decides to target currently depends only on the number of remaining enemies in play and the number of people in jail, but this could be generalized to account for heterogeneous targeting probabilities. A further assumption is that the behavior of the players is uniform. Individual ability could be modeled by including an individual's ability to catch balls, hit an enemy target, and hit shots on jail. Finally, we assumed that players behave independently (which is a reasonable approximation in Elementary School games). Coordinated strategies such as those used in professional games are not considered here.

ACKNOWLEDGMENTS
We thank James Meiss, Nicholas Landry, Daniel Larremore, and Max Ruth for their useful comments. We also thank Eisenhower Elementary for allowing us to use the data.
Appendix A
In this Appendix we provide details about the numerical simulation of the stochastic dodgeball games, and the numerical computation of the winning probabilities P_i.
1. Agent-based stochastic simulations
Here we describe the simulation of a single, stochastic agent-based dodgeball game. At t = 0, the game starts with N players on each team, X_1 = X_2 = N. Since λ is the rate at which players throw balls, and we assume that players throw balls independently of each other, the time until the next ball throw in the game is exponentially distributed with rate

    r = (X_1 + X_2) λ.   (A1)

The probability that team i throws a ball next (before the other team), which we denote p_i, is given by

    p_1 = X_1/(X_1 + X_2),   (A2)
    p_2 = X_2/(X_1 + X_2).   (A3)

The pseudo-code for simulating a game is below. Recall that F_i(X_1, X_2) is the probability that team i throws a ball towards the enemy instead of towards their jail, p_e is the probability that a ball thrown towards the enemy hits a target, and p_j is the probability that a ball thrown towards jail is successfully caught. In addition, we stop the simulation if the number of throws k exceeds K_max = 50N.

Algorithm 1
Simulate dodgeball game
  At t = 0, set X_1 = X_2 = N and k = 0.
  while (X_1 > 0 and X_2 > 0) and k ≤ K_max do
    k ← k + 1
    t ← t + Exponential random variable with mean 1/r.
    Choose throwing team, 1 or 2, with probabilities p_1, p_2.
    Let the throwing team be i and the other team be j.
    Choose to throw ball at enemy or rescue from jail with probabilities F_i(X_1, X_2), 1 − F_i(X_1, X_2).
    if Throw to enemy then
      X_j ← X_j − 1 with probability p_e(X_j)
    end if
    if Throw to jail then
      X_i ← X_i + 1 with probability p_j(N − X_i)
    end if
  end while

2. Calculation of winning probabilities P_i

Here we explain how the probability that team i wins, P_i, is calculated for a given set of parameters. First, we define v⃗_k as the column vector whose entries are the probabilities that the game is in each of the (N + 1)² possible states (X_1, X_2) after the k-th ball is thrown. Accordingly, v⃗_0 is the vector that represents the initial condition (N, N). Then, we define M as the (N + 1)² × (N + 1)² matrix of transition probabilities between these states. Because the game is a Markov process, we have

    v⃗_{k+1} = M v⃗_k.   (A4)

Now we let u⃗_i be a vector that is 1 in each state in which Team i wins and 0 otherwise. Then,

    P_i = lim_{k→∞} v⃗_k^T u⃗_i.   (A5)

In practice, we stop the iteration when

    |P_1 + P_2 − 1| = |v⃗_k · (u⃗_1 + u⃗_2) − 1| < ε,   (A6)

for a small tolerance ε, or when k > K_max = 50N. When the game is in stalemate, the expected length of games grows exponentially with N, and the calculation above becomes impractical for moderate values of N. In this case, we instead evolve the vector v⃗ in steps that are powers of two, as

    v⃗_{2^j} = M^{2^j} v⃗_0 = (M^{2^{j−1}})² v⃗_0.   (A7)

In practice we stop this iteration when j > J_max = 256. The iteration described by Eq. (A7) uses repeated non-sparse matrix multiplications, while Eq. (A4) uses faster sparse matrix-vector products. However, since games can be extremely long in the stalemate regime, the method described by Eq. (A7) is still faster in that regime. We choose the values J_max and K_max such that in practice Eq. (A4) and Eq. (A7) take similar amounts of time in the stalemate regime.

[1] J. M. Buldú, J. Busquets, J. H. Martínez, J. L. Herrera-Diestra, I. Echegoyen, J. Galeano, and J. Luque, Frontiers in Psychology, 1900 (2018).
[2] I. G. McHale and S. D. Relton, European Journal of Operational Research, 339 (2018).
[3] J. H. Martínez, D. Garrido, J. L. Herrera-Diestra, J. Busquets, R. Sevilla-Escoboza, and J. M. Buldú, Entropy, 172 (2020).
[4] R. Rein and D. Memmert, SpringerPlus, 1 (2016).
[5] S. Merritt and A. Clauset, EPJ Data Science, 4 (2014).
[6] A. Clauset, M. Kogan, and S. Redner, Physical Review E, 062815 (2015).
[7] D. P. Kiley, A. J. Reagan, L. Mitchell, C. M. Danforth, and P. S. Dodds, Physical Review E, 052314 (2016).
[8] P. Vračar, E. Štrumbelj, and I. Kononenko, Expert Systems with Applications, 58 (2016).
[9] J. Wang, K. Zhao, D. Deng, A. Cao, X. Xie, Z. Zhou, H. Zhang, and Y. Wu, IEEE Transactions on Visualization and Computer Graphics, 407 (2019).
[10] I. Palacios-Huerta, The Review of Economic Studies, 395 (2003).
[11] M. Walker and J. Wooders, American Economic Review, 1521 (2001).
[12] J.-M. Lasry and P.-L. Lions, Japanese Journal of Mathematics, 229 (2007).
[13] A. Bensoussan, J. Frehse, P. Yam, et al., Mean Field Games and Mean Field Type Control Theory, Vol. 101 (Springer, 2013).
[14] N. Gotelli,
Wechoose the values J max and K max such that in practiceEq. A4 and Eq. A7 take similar amounts of time in thestalemate regime. [1] J. M. Buld´u, J. Busquets, J. H. Mart´ınez, J. L. Herrera-Diestra, I. Echegoyen, J. Galeano, and J. Luque, Fron-tiers in psychology , 1900 (2018).[2] I. G. McHale and S. D. Relton, European Journal of Op-erational Research , 339 (2018).[3] J. H. Mart´ınez, D. Garrido, J. L. Herrera-Diestra, J. Bus-quets, R. Sevilla-Escoboza, and J. M. Buld´u, Entropy , 172 (2020).[4] R. Rein and D. Memmert, SpringerPlus , 1 (2016).[5] S. Merritt and A. Clauset, EPJ Data Science , 4 (2014).[6] A. Clauset, M. Kogan, and S. Redner, Physical ReviewE , 062815 (2015).[7] D. P. Kiley, A. J. Reagan, L. Mitchell, C. M. Danforth,and P. S. Dodds, Physical Review E , 052314 (2016).[8] P. Vraˇcar, E. ˇStrumbelj, and I. Kononenko, Expert Sys-tems with Applications , 58 (2016). [9] J. Wang, K. Zhao, D. Deng, A. Cao, X. Xie, Z. Zhou,H. Zhang, and Y. Wu, IEEE transactions on visualiza-tion and computer graphics , 407 (2019).[10] I. Palacios-Huerta, The Review of Economic Studies ,395 (2003).[11] M. Walker and J. Wooders, American Economic Review , 1521 (2001).[12] J.-M. Lasry and P.-L. Lions, Japanese journal of mathe-matics , 229 (2007).[13] A. Bensoussan, J. Frehse, P. Yam, et al. , Mean fieldgames and mean field type control theory , Vol. 101(Springer, 2013).[14] N. Gotelli,