An open reproducible framework for the study of the iterated prisoner's dilemma
Vincent Knight, Owen Campbell, Marc Harper, Karol Langner, James Campbell, Thomas Campbell, Alex Carney, Martin Chorley, Cameron Davidson-Pilon, Kristian Glass, Nikoleta Glynatsi, Tomáš Ehrlich, Martin Jones, Georgios Koutsovoulos, Holly Tibble, Müller Jochen, Geraint Palmer, Piotr Petunov, Paul Slavin, Timothy Standen, Luis Visintini, Karl Molden
UP JORS software Latex paper template version 0.1

(1) Overview

Title
An open framework for the reproducible study of the iterated prisoner’s dilemma.
Authors
1. Knight, Vincent
2. Campbell, Owen
3. Harper, Marc
4. Langner, Karol M.
5. Campbell, James
6. Campbell, Thomas
7. Carney, Alex
8. Chorley, Martin
9. Davidson-Pilon, Cameron
10. Glass, Kristian
11. Glynatsi, Nikoleta
12. Ehrlich, Tomáš
13. Jones, Martin
14. Koutsovoulos, Georgios
15. Tibble, Holly
16. Müller, Jochen
17. Palmer, Geraint
18. Petunov, Piotr
19. Slavin, Paul
20. Standen, Timothy
21. Visintini, Luis
22. Molden, Karl
Paper Author Roles and Affiliations
1. Development; Cardiff University
2. Development; Not affiliated
3. Development; Not affiliated
4. Development; Google Inc., Mountain View, CA
5. Development; Cardiff University
6. Development; St. Nicholas Catholic High School, Hartford
7. Development; Cardiff University
8. Development; Cardiff University
9. Development; Not affiliated
10. Development; Not affiliated
11. Development; Cardiff University
12. Development; Not affiliated
13. Development; Not affiliated
14. Development; The University of Edinburgh
15. Development; Not affiliated
16. Development; Not affiliated
17. Development; Cardiff University
18. Development; Not affiliated
19. Development; The University of Manchester
20. Development; Cardiff University
21. Development; Not affiliated
22. Development; Not affiliated
Abstract
The Axelrod library is an open source Python package that allows for reproducible game theoretic research into the Iterated Prisoner's Dilemma. This area of research began in the 1980s but suffers from a lack of documentation and test code. The goal of the library is to provide such a resource, with facilities for the design of new strategies and interactions between them, as well as conducting tournaments and ecological simulations for populations of strategies. With a growing collection of 136 strategies, the library is also a platform for an original tournament that, in itself, is of interest to the game theoretic community. This paper describes the Iterated Prisoner's Dilemma, the Axelrod library and its development, and insights gained from some novel research.
Keywords
Game Theory; Prisoners Dilemma; Python
Introduction
Several Iterated Prisoner's Dilemma tournaments have generated much interest: Axelrod's original tournaments [2, 3], two 2004 anniversary tournaments [20], and the Stewart and Plotkin 2012 tournament [44], following the discovery of zero-determinant strategies. Subsequent research has spawned a number of papers (many of which are referenced throughout this paper), but rarely are the results reproducible. Amongst well-known tournaments, in only one case is the full original source code available (Axelrod's second tournament [3], in FORTRAN). In no case is the available code well-documented, easily modifiable, or released with significant test suites.

To complicate matters further, a new strategy is often studied in isolation with opponents chosen by the creator of that strategy. Often such strategies are not sufficiently described to enable reliable recreation (in the absence of source code), with [42] being a notable counter-example. In some cases, strategies are revised without updates to their names or published implementations [25, 26]. As such, the results cannot be reliably replicated and therefore have not met the basic scientific criterion of falsifiability.

This paper introduces a software package: the Axelrod-Python library.
The Axelrod-Python project has the following stated goals:

• To enable the reproduction of Iterated Prisoner's Dilemma research as easily as possible
• To produce the de facto tool for any future Iterated Prisoner's Dilemma research
• To provide as simple a means as possible for anyone to define and contribute new and original Iterated Prisoner's Dilemma strategies

The presented library is partly motivated by an ongoing discussion in the academic community about reproducible research [9, 16, 39, 40], and is:

• Open: all code is released under an MIT license;
• Reproducible and well-tested: at the time of writing there is an excellent level of integrated tests with 99.73% coverage (including property based tests [28]);
• Well-documented: all features of the library are documented for ease of use and modification;
• Extensive: 135 strategies are included, with infinitely many available in the case of parametrised strategies;
• Extensible: easy to modify to include new strategies and to run new tournaments.
Review of the literature
As stated in [6]: "few works in social science have had the general impact of [Axelrod's study of the evolution of cooperation]". In 1980, Axelrod wrote two papers [2, 3] which describe a computer tournament that has been a major influence on subsequent game theoretic work [5, 6, 7, 8, 10, 11, 12, 13, 15, 18, 23, 24, 27, 34, 35, 36, 38, 43, 44]. As described in [6], this work has not only had an impact in mathematics but has also led to insights in biology (for example in [43], a real tournament where Blue Jays are the participants is described) and in particular in the study of evolution.

The tournament is based on an iterated game (see [29] or similar for details) where two players repeatedly play the normal form game of (1) in full knowledge of each other's playing history to date. An excellent description of the one shot game is given in [13], which is paraphrased below.

Two players must choose between
Cooperate (C) and Defect (D):

• If both choose C, they receive a payoff of R (Reward);
• If both choose D, they receive a payoff of P (Punishment);
• If one chooses C and the other D, the defector receives a payoff of T (Temptation) and the cooperator a payoff of S (Sucker).

The following reward matrix results from the Cartesian product of two decision vectors \langle C, D \rangle:

\begin{pmatrix} (R, R) & (S, T) \\ (T, S) & (P, P) \end{pmatrix}
\quad \text{such that} \quad T > R > P > S \ \text{and} \ 2R > T + S \qquad (1)

The game of (1) is called the Prisoner's Dilemma. Specific numerical values of (R, S, T, P) = (3, 0, 5, 1) are often used in the literature [2, 3], although any values satisfying the conditions in (1) will yield similar results. Axelrod's tournaments (and further implementations of these) are sometimes referred to as Iterated Prisoner's Dilemma (IPD) tournaments. An incomplete representative overview of published tournaments is given in Table 1.

Year | Reference | Number of Strategies | Type     | Source Code
1979 | [2]       | 13                   | Standard | Not immediately available
1979 | [3]       | 64                   | Standard | Available in FORTRAN
1991 | [6]       | 13                   | Noisy    | Not immediately available
2002 | [43]      | 16                   | Wildlife | Not a computer based tournament
2005 | [20]      | 223                  | Varied   | Not available
2012 | [44]      | 13                   | Standard | Not fully available

Table 1: An overview of a selection of published tournaments. Not all tournaments were 'standard' round robins; for more details see the indicated references.

In [34] a description is given of how incomplete information can be used to enhance cooperation, in a similar approach to the proof of the Folk theorem for repeated games [29]. This aspect of incomplete information is also considered in [6, 24, 35], where "noisy" tournaments randomly flip the choice made by a given strategy. In [36], incomplete information is considered in the sense of a probabilistic termination of each round of the tournament.

As mentioned before, IPD tournaments have been studied in an evolutionary context: [12, 24, 38, 44] consider this in a traditional evolutionary game theory context. These works investigate particular evolutionary contexts within which cooperation can evolve and persist. This can be in the context of direct interactions between strategies or population dynamics for populations of many players using a variety of strategies, which can lead to very different results.
For example, in [24] a machine learning algorithm in a population context outperforms strategies described in [38] and [44] that are claimed to dominate any evolutionary opponent in head-to-head interactions. Further to these evolutionary ideas, [8, 10] are examples of using machine learning techniques to evolve particular strategies. In [4], Axelrod describes how similar techniques are used to genetically evolve a high performing strategy from a given set of strategies. Note that in his original work, Axelrod only used a base strategy set of 12 strategies for this evolutionary study. This is noteworthy as the library now boasts over 136 strategies that are readily available for a similar analysis.
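Before moving on to the implementation, the one shot game of (1) is small enough to sketch directly. The following plain-Python fragment (independent of the library; the function name is illustrative) encodes the payoff matrix and checks the defining conditions, using the common values (R, S, T, P) = (3, 0, 5, 1):

```python
# One-shot Prisoner's Dilemma payoffs with the standard values
# (R, S, T, P) = (3, 0, 5, 1) used in [2, 3].
R, S, T, P = 3, 0, 5, 1

# The defining conditions of the Prisoner's Dilemma from (1).
assert T > R > P > S
assert 2 * R > T + S  # mutual cooperation beats alternating exploitation

def payoff(action, opponent_action):
    """Return the (player, opponent) payoff pair for one round.

    Actions are 'C' (cooperate) or 'D' (defect), following the
    reward matrix of (1)."""
    table = {
        ('C', 'C'): (R, R),
        ('C', 'D'): (S, T),
        ('D', 'C'): (T, S),
        ('D', 'D'): (P, P),
    }
    return table[(action, opponent_action)]

print(payoff('C', 'D'))  # the cooperator is exploited: (0, 5)
```

In the iterated game these payoffs are accumulated over many rounds, which is what makes conditional strategies meaningful.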
Implementation and architecture

Description of the Axelrod Python package
The library is written in Python, which is a popular language in the academic community with libraries developed for a variety of uses including:

• Algorithmic game theory [30] (http://gambit.sourceforge.net/);
• Astrophysics [1];
• Data manipulation [33] (http://pandas.pydata.org/);
• Machine learning [37] (http://scikit-learn.org/);
• Mathematics [46];
• Visualisation [17] (http://matplotlib.org/).

Furthermore, in [18] Python is described as an appropriate language for the reproduction of Iterated Prisoner's Dilemma tournaments due to its object oriented nature and readability.

The library itself is available at https://github.com/Axelrod-Python/Axelrod. This is a hosted git repository. Git is a version control system which is one of the recommended aspects of reproducible research [9, 40].

As stated in the
Introduction, one of the main goals of the library is to allow for the easy contribution of strategies. Doing this requires the writing of a simple Python class (which can inherit from other predefined classes). All components of the library are automatically tested using a combination of unit, property and integration tests. These tests are run as new features are added to the library to ensure compatibility (they are also run automatically using travis-ci.org). When submitting a strategy, a simple test is required which ensures the strategy behaves as expected. Full contribution guidelines can be found in the documentation, which is also part of the library itself and is hosted using readthedocs.org. As an example, Figures 1 and 2 show the source code for the Grudger strategy as well as its corresponding test.

class Grudger(Player):
    """A player starts by cooperating however will defect if
    at any point the opponent has defected."""

    name = 'Grudger'
    classifier = {
        'memory_depth': float('inf'),
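The listing in Figure 1 is truncated in this extraction, but the docstring fully specifies the behaviour: cooperate until the opponent defects, then defect forever. The following standalone sketch (which deliberately does not inherit from the library's Player class) implements the same rule:

```python
# A standalone sketch of the Grudger behaviour described in Figure 1:
# cooperate until the opponent has defected once, then always defect.
# (Illustrative only; the library's own Grudger class differs in detail.)
class GrudgerSketch:
    name = "Grudger"

    def __init__(self):
        self.grudge = False  # becomes True after the first defection seen

    def strategy(self, opponent_history):
        """Choose 'C' or 'D' given the opponent's history of plays."""
        if 'D' in opponent_history:
            self.grudge = True
        return 'D' if self.grudge else 'C'

player = GrudgerSketch()
histories = (['C'], ['C', 'C'], ['C', 'D'], ['C', 'D', 'C'])
moves = [player.strategy(hist) for hist in histories]
print(moves)  # ['C', 'C', 'D', 'D']: the grudge persists after one defection
```

The corresponding library test (Figure 2) asserts exactly this kind of behaviour for the real class.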
Figure 1: Source code for the Grudger strategy.

An overview of the structure of the source code is given in Figure 3. This shows the parallel collection of strategies and their tests. Furthermore, the underlying engine for the library is a class for tournaments which lives in the tournament.py module. This class is responsible for coordinating the play of generated matches (from the match.py module). The generation of matches is the responsibility of a match generator class (in the match_generator.py module), which is designed in such a way as to be easily modifiable to create new types of tournaments. This is described further in a tutorial in the documentation which shows how to easily create a tournament where players only play each other with probability 0.5. This will be discussed further in the reuse section of this paper.

To date the library has had contributions from 26 contributors from a variety of backgrounds which are not solely academic. These contributions have been mostly in terms of strategies. One strategy is the creation of an undergraduate mathematics student with little prior knowledge of programming. Multiple other strategies were written by a 15 year old secondary school student. Both of these students are authors of this paper. As well as these strategy contributions, vital architectural improvements to the library itself have also been contributed.

class TestGrudger(TestPlayer):

    name = "Grudger"
    player = axelrod.Grudger
    expected_classifier = {
        'memory_depth': float('inf'),

Figure 2: Test code for the Grudger strategy.

[Figure 3 shows the layout of the source code: the tournament.py, match.py, match_generator.py and player.py modules; a strategies/ directory (Cooperator.py, Defector.py, TitForTat.py, ...) with a parallel tests/unit/ directory; documentation in doc/ hosted on readthedocs.org; and continuous integration via travis-ci.org.]

Figure 3: An overview of the source code.

(2) Availability

Operating system
The Axelrod library runs on all major operating systems: Linux, Mac OS X and Windows.
Programming language
The library is continuously tested for compatibility with Python 2.7 and the two most recent Python 3 releases.
Additional system requirements
There are no specific additional system requirements.
Support
Support is readily available in multiple forms:

• An online chat channel: https://gitter.im/Axelrod-Python/Axelrod
• An email group: https://groups.google.com/forum/

Dependencies
The following Python libraries are required dependencies:

• Numpy 1.9.2
• Matplotlib 1.4.2 (only a requirement if graphical output is required)
• Tqdm 3.4.0
• Hypothesis 3.0 (only a requirement for development)
List of contributors
The names of all the contributors are not known: contributions were mainly made through Github and some contributors have not provided their name or responded to a request for further details. Here is an incomplete list:

• Owen Campbell
• Marc Harper
• Vincent Knight
• Karol M. Langner
• James Campbell
• Thomas Campbell
• Alex Carney
• Martin Chorley
• Cameron Davidson-Pilon
• Kristian Glass
• Nikoleta Glynatsi
• Tomáš Ehrlich
• Martin Jones
• Georgios Koutsovoulos
• Holly Tibble
• Jochen Müller
• Geraint Palmer
• Paul Slavin
• Timothy Standen
• Luis Visintini
• Karl Molden
• Jason Young
• Andy Boot
• Anna Barriscale
Software location:

Archive

Name:
Zenodo
Persistent identifier:
Licence:
MIT
Publisher:
Vincent Knight
Version published:
Axelrod: 1.2.0
Date published:
Code repository

Name:
Github
Identifier: https://github.com/Axelrod-Python/Axelrod
Licence:
MIT
Date published:
Reuse potential
The Axelrod library has been designed with sustainable software practices in mind. There is an extensive documentation suite: axelrod.readthedocs.org/en/latest/. Furthermore, there is a growing set of example Jupyter notebooks available here: https://github.com/Axelrod-Python/Axelrod-notebooks.

The availability of a large number of strategies makes this tool an excellent and obvious example of the benefits of open research, which should positively impact the game theory community. This is evidently true already as the library has been used to study and create interesting and powerful new strategies.

Installation of the library is straightforward via standard Python installation repositories (https://pypi.python.org/pypi). The package name is axelrod and it can thus be installed by calling pip install axelrod on all major operating systems (Windows, OS X and Linux).

Figure 4 shows a very simple example of using the library to create a basic tournament, giving the graphical output shown in Figure 5.

>>> import axelrod
>>> strategies = [s() for s in axelrod.demo_strategies]
>>> tournament = axelrod.Tournament(strategies)
>>> results = tournament.play()
>>> plot = axelrod.Plot(results)
>>> p = plot.boxplot()
>>> p.show()
Figure 4: A simple set of commands to create a demonstration tournament. The output is shown in Figure 5.

[Figure 5 is a boxplot of the mean score per stage game over 200 turns repeated 10 times for the 5 demonstration strategies: Defector, Grudger, Tit For Tat, Cooperator and Random.]
Figure 5: The results from a simple tournament.
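As noted in the implementation section, new tournament types are created by customising match generation, for instance so that players only meet with probability 0.5. A plain-Python sketch of that idea follows (the class names here are illustrative, not the library's actual API):

```python
import itertools
import random

# Hypothetical sketch of the tournament/match-generator split: the
# tournament engine consumes whatever pairings a generator yields.
class RoundRobinMatchGenerator:
    """Yield every pairing of players, as in a standard round robin."""
    def __init__(self, players):
        self.players = players

    def build_matches(self):
        yield from itertools.combinations(self.players, 2)

class ProbabilisticMatchGenerator(RoundRobinMatchGenerator):
    """Variant from the documentation tutorial: each pairing is only
    played with a given probability."""
    def __init__(self, players, probability, seed=None):
        super().__init__(players)
        self.probability = probability
        self.rng = random.Random(seed)

    def build_matches(self):
        for pair in itertools.combinations(self.players, 2):
            if self.rng.random() < self.probability:
                yield pair

players = ["Cooperator", "Defector", "TitForTat", "Grudger"]
print(len(list(RoundRobinMatchGenerator(players).build_matches())))  # 6 pairings
```

Swapping one generator class for another changes the tournament type without touching the code that plays the matches, which is the design the library's match_generator.py module follows.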
New strategies, tournaments and implications
Due to the open nature of the library the number of strategies included has grown at a fast pace, as can be seen in Figure 6.

[Figure 6 plots the number of strategies included in the library over time.]

Figure 6: The number of strategies included in the library.

Nevertheless, due to previous research being done in an irreproducible manner with, for example, no source code and/or vaguely described strategies, not all previous tournaments can yet be reproduced. In fact, some of the early tournaments might be impossible to reproduce as the source code is apparently forever lost. This library aims to ensure reproducibility in the future.

One tournament that is possible to reproduce is that of [44]. The strategies used in that tournament are the following:

1. Cooperator
2. Defector
3. ZD-Extort-2
4. Joss: 0.9
5. Hard Tit For Tat
6. Hard Tit For 2 Tats
7. Tit For Tat
8. Grudger
9. Tit For 2 Tats
10. Win-Stay Lose-Shift
11. Random: 0.5
12. ZD-GTFT-2
13. GTFT: 0.33
14. Hard Prober
15. Prober
16. Prober 2
17. Prober 3
18. Calculator
19. Hard Go By Majority

This can be reproduced as shown in Figure 8, which gives the plot of Figure 7. Note that slight differences with the results of [44] are due to stochastic behaviour of some strategies.

[Figure 7 is a boxplot of the mean score per stage game over 200 turns repeated 10 times for the 19 strategies above, ordered by mean score with ZD-GTFT-2 first and Defector last.]

Figure 7: The results from [44].

In parallel to the Python library, a tournament is being kept up to date that pits all available strategies against each other. Figure 9 shows the results from the full tournament, which can also be seen (in full detail) here: http://axelrod-tournament.readthedocs.org/. Data sets are also available showing the plays of every match that takes place. Note that to recreate this tournament simply requires changing a single line of the code shown in Figure 4, changing:

>>> strategies = [s() for s in axelrod.demo_strategies]

to:

>>> strategies = [s() for s in axelrod.ordinary_strategies]
The current winning strategy is new to the research literature: Looker Up. This is a strategy that maps a given set of states to actions. The state space is defined generically by m, n so as to map states to actions as shown in (2).

\left( \underbrace{(C, D, D, D, C, D, D, C)}_{m \text{ first actions by opponent}}, \overbrace{((C, C), (C, C))}^{n \text{ last pairs of actions}} \right) \to D \qquad (2)

The example of (2) is an incomplete illustration of the mapping for m = 8, n = 2. Intuitively, this state space uses the initial plays of the opponent to gain some information about its intentions whilst still taking into account the recent play. The actual winning strategy is an instance of the framework for m = n = 2, for which a particle swarm algorithm has been used to train it. The second placed strategy was trained with an evolutionary algorithm [19, 22]. In [21] experiments are described that evaluate how the second placed strategy behaves in environments other than those in which it was trained, and it continues to perform strongly.

There are various other insights that have been gained from ongoing open research on the library; details can be found in [14]. These include:

>>> import axelrod
>>> strategies = [axelrod.Cooperator(),
...               axelrod.Defector(),
...               axelrod.ZDExtort2(),
...               axelrod.Joss(),
...               axelrod.HardTitForTat(),
...               axelrod.HardTitFor2Tats(),
...               axelrod.TitForTat(),
...               axelrod.Grudger(),
...               axelrod.TitFor2Tats(),
...               axelrod.WinStayLoseShift(),
...               axelrod.Random(),
...               axelrod.ZDGTFT2(),
...               axelrod.GTFT(),
...               axelrod.HardProber(),
...               axelrod.Prober(),
...               axelrod.Prober2(),
...               axelrod.Prober3(),
...               axelrod.Calculator(),
...               axelrod.HardGoByMajority()]
>>> tournament = axelrod.Tournament(strategies)
>>> results = tournament.play()
>>> plot = axelrod.Plot(results)
>>> p = plot.boxplot()
>>> p.show()

Figure 8: Source code for reproducing the tournament of [44].

[Figure 9 is a boxplot of the mean score per stage game over 100 turns repeated 200 times for the 129 strategies of the library tournament; the full list of strategy names is omitted here.]
Figure 9: Results from the library tournament (2016-06-13).

• A closer look at zero determinant strategies, showing that extortionate strategies obtain a large number of wins: the number of times they outscore an opponent during a given match. However, these do not perform particularly well in terms of the overall tournament ranking. This is relevant given the findings of [44], in which zero determinant strategies are shown to be able to perform better than any other strategy. This finding extends to noisy tournaments (which are also implemented in the library).

• This negative relationship between wins and performance does not generalise. There are some strategies that perform well both in terms of matches won and overall performance: Back Stabber, Double Crosser, Looker Up, and Fool Me Once. These strategies continue to perform well in noisy tournaments; however, some of them have knowledge of the length of the game (Back Stabber and Double Crosser). This is not necessary to rank well in both wins and score, as demonstrated by Looker Up and Fool Me Once.

• Strategies like Looker Up and Meta Hunter seem to be generally cooperative yet still exploit naive strategies. The Meta Hunter strategy is a particular type of Meta strategy which uses a variety of other strategy behaviours to choose a best action. These strategies perform very well in general and continue to do so in noisy tournaments.
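As an illustration of the Looker Up framework of (2), the state-to-action table for an m = n = 2 instance can be sketched in plain Python. The code below only enumerates the state space and uses a placeholder table; it is not the trained winning strategy:

```python
import itertools

# Sketch of a LookerUp-style strategy for m = n = 2: each key combines the
# opponent's first m actions with the last n pairs of actions, and a table
# maps each key to an action. The table values here are arbitrary
# placeholders; the winning strategy's values were trained with a particle
# swarm algorithm.
m, n = 2, 2
actions = 'CD'

# Enumerate every possible key of the state space.
keys = [
    (first, pairs)
    for first in itertools.product(actions, repeat=m)
    for pairs in itertools.product(itertools.product(actions, repeat=2), repeat=n)
]
lookup_table = {key: 'C' for key in keys}  # placeholder: always cooperate

def looker_up(my_history, opponent_history):
    """Look up the action for the current state ('C' before enough history)."""
    first = tuple(opponent_history[:m])
    pairs = tuple(zip(my_history[-n:], opponent_history[-n:]))
    return lookup_table.get((first, pairs), 'C')

# The state space has 2^m * 4^n entries: 64 for m = n = 2.
print(len(lookup_table))  # 64
```

Training then reduces to searching this 64-entry table for high-scoring assignments, which is exactly the kind of optimisation described in [19, 22].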
Conclusion
This paper has presented a game theoretic software package that aims to address reproducibility of research into the Iterated Prisoner's Dilemma. The open nature of the development of the library has led rapidly to the inclusion of many well known strategies, many novel strategies, and new and recapitulated insights.

The capabilities of the library mentioned above are not at all comprehensive; the current abilities include:
• Noisy tournaments.
• Tournaments with probabilistic ending of interactions.
• Ecological analysis of tournaments.
• Moran processes.
• Morality metrics based on [41].
• Transformation of strategies (in effect giving an infinite number of strategies).
• Classification of strategies according to multiple dimensions.
• Gathering of full interaction history for all interactions.
• Parallelization of computations for tournaments with a high computational cost.

These capabilities are constantly being updated.
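Of the capabilities above, noisy tournaments are conceptually the simplest: as in the noisy tournaments of [6, 24, 35], each chosen action is flipped with a fixed probability. A minimal sketch (illustrative only; in the library, noise is instead passed as a parameter when creating a tournament):

```python
import random

def flip(action):
    """Return the opposite action."""
    return 'D' if action == 'C' else 'C'

def noisy(action, noise, rng=random):
    """With probability `noise`, the played action is flipped.
    (Illustrative helper, not the library's implementation.)"""
    return flip(action) if rng.random() < noise else action

rng = random.Random(0)
plays = [noisy('C', 0.1, rng) for _ in range(1000)]
print(plays.count('D'))  # around 10% of cooperations become defections
```

Even this small amount of noise changes which strategies rank well, which is why the insights above are checked in both standard and noisy tournaments.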
Acknowledgements
The authors would like to thank all contributors. They also thank Robert Axelrod himself for his well wishes for the library.
Competing interests
The authors declare that they have no competing interests.
References

[1] Astropy Collaboration et al. "Astropy: A community Python package for astronomy". In: Astronomy and Astrophysics.
[2] R. Axelrod. "Effective Choice in the Prisoner's Dilemma". In: Journal of Conflict Resolution.
[3] R. Axelrod. "More Effective Choice in the Prisoner's Dilemma". In: Journal of Conflict Resolution.
[4] R. Axelrod. The Evolution of Cooperation.
[5] J. S. Banks and R. K. Sundaram. "Repeated games, finite automata, and complexity". In: Games and Economic Behavior.
[6] J. Bendor, R. M. Kramer, and S. Stout. "When in doubt...: Cooperation in a noisy prisoner's dilemma". In: Journal of Conflict Resolution.
[7] R. Boyd and J. P. Lorberbaum. "No pure strategy is evolutionarily stable in the repeated Prisoner's Dilemma game". In: Nature 327 (1987), pp. 58–59.
[8] K. Chellapilla and D. B. Fogel. "Evolution, neural networks, games, and intelligence". In: Proceedings of the IEEE.
[9] T. Crick et al. "'Share and Enjoy': Publishing Useful and Usable Scientific Models". 2014.
[10] D. B. Fogel. "Evolving Behaviors in the Iterated Prisoner's Dilemma". In: Evolutionary Computation.
[11] M. Doebeli and C. Hauert. "Models of cooperation based on the Prisoner's Dilemma and the Snowdrift game". In: Ecology Letters.
[12] G. Ellison. "Cooperation in the prisoner's dilemma with anonymous random matching". In: Review of Economic Studies.
[13] N. Gotts, J. Polhill, and A. Law. "Agent-based simulation in the study of social dilemmas". In: Artificial Intelligence Review 19 (2003), pp. 3–92.
[14] M. Harper. Marc Harper Codes. 2015.
[15] C. Hilbe, M. A. Nowak, and A. Traulsen. "Adaptive Dynamics of Extortion and Compliance". In: PLoS ONE.
[16] N. P. C. Hong et al. "Top Tips to Make Your Research Irreproducible". 2015, pp. 5–6.
[17] J. D. Hunter. "Matplotlib: A 2D graphics environment". In: Computing in Science & Engineering.
[18] In: Journal of Artificial Societies and Social Simulation.
[19] M. Jones. Evolving strategies for an Iterated Prisoner's Dilemma tournament. 2015.
[20] G. Kendall, X. Yao, and S. Y. Chong. The iterated prisoners' dilemma: 20 years on. World Scientific Publishing Co., Inc., 2007.
[21] V. Knight. http://vknight.org/unpeudemath/gametheory/2015/11/28/Experimenting-with-a-high-performing-evolved-strategy-in-other-environments/. 2015.
[22] G. Koutsovoulos. Optimising the LookerUp strategy for an Iterated Prisoner's Dilemma tournament. 2016.
[23] D. Kraines and V. Kraines. "Pavlov and the prisoner's dilemma". In: Theory and Decision.
[24] C. Lee, M. Harper, and D. Fryer. "The Art of War: Beyond Memory-one Strategies in Population Games". In: PLoS ONE.
[25] J. Li. "How to design a strategy to win an IPD tournament". In: The iterated prisoners dilemma 20 (2007), pp. 89–104.
[26] J. Li, P. Hingston, and G. Kendall. "Engineering design of strategies for winning iterated prisoner's dilemma competitions". In: Computational Intelligence and AI in Games, IEEE Transactions on.
[27] In: Journal of Theoretical Biology.
[28] Hypothesis 3.0.3. https://github.com/DRMacIver/hypothesis. 2016.
[29] M. Maschler, E. Solan, and S. Zamir. Game theory. Cambridge University Press, 2013, p. 1003.
[30] R. McKelvey et al. Gambit: Software tools for game theory. Tech. rep. 2006.
[31] M. McKerns and M. Aivazis. pathos: a framework for heterogeneous computing. http://trac.mystic.cacr.caltech.edu/project/pathos. 2010.
[32] M. M. McKerns et al. "Building a Framework for Predictive Science". In: SciPy (2011), pp. 1–12. arXiv:1202.1056v1.
[33] W. McKinney. "Data Structures for Statistical Computing in Python". In: Proceedings of the 9th Python in Science Conference. Ed. by S. van der Walt and J. Millman. 2010, pp. 51–56.
[34] P. Milgrom, J. Roberts, and R. Wilson. "Rational Cooperation in the Finitely Repeated Prisoners' Dilemma". In: Journal of Economic Theory (1982), pp. 245–252.
[35] P. Molander. "The optimal level of generosity in a selfish, uncertain environment". In: The Journal of Conflict Resolution.
[36] J. K. Murnighan et al. "Expecting Continued Play in Prisoner's Dilemma Games". 27.2 (1983), pp. 279–300.
[37] F. Pedregosa et al. "Scikit-learn: Machine Learning in Python". In: Journal of Machine Learning Research 12 (2011), pp. 2825–2830.
[38] W. H. Press and F. J. Dyson. "Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent". In: Proceedings of the National Academy of Sciences.
[39] A. Prlić and J. B. Procter. "Ten Simple Rules for the Open Development of Scientific Software". In: PLoS Computational Biology.
[40] G. K. Sandve et al. "Ten Simple Rules for Reproducible Computational Research". In: PLoS Computational Biology.
[41] T. Singer-Clark. "Morality Metrics On Iterated Prisoners Dilemma Players". 2014.
[42] W. Slany and W. Kienreich. "On some winning strategies for the iterated prisoners dilemma". In: The iterated prisoners dilemma (2007), pp. 171–204.
[43] D. W. Stephens, C. M. McLinn, and J. R. Stevens. "Discounting and reciprocity in an Iterated Prisoner's Dilemma". In: Science.
[44] A. J. Stewart and J. B. Plotkin. "Extortion and cooperation in the Prisoner's Dilemma". In: Proceedings of the National Academy of Sciences.
[45] The Axelrod project developers. Axelrod: Appeaser Release. Mar. 2016.
[46] The Sage Developers. Sage Mathematics Software (Version 7.0).