Evolutionary Multi-objective Optimization of Real-Time Strategy Micro
Rahul Dubey, Joseph Ghantous, Sushil Louis, and Siming Liu
Computer Science and Engineering, University of Nevada, Reno
[email protected], [email protected], [email protected], [email protected]
Abstract—We investigate an evolutionary multi-objective approach to good micro for real-time strategy games. Good micro helps a player win skirmishes and is one of the keys to developing better real-time strategy game play. In prior work, the same multi-objective approach of maximizing damage done while minimizing damage received was used to evolve micro for a group of ranged units versus a group of melee units. We extend this work to consider groups composed from two types of units. Specifically, this paper uses evolutionary multi-objective optimization to generate micro for one group composed from both ranged and melee units versus another group of ranged and melee units. Our micro behavior representation uses influence maps to represent enemy spatial information and potential fields generated from distance, health, and weapons cooldown to guide unit movement. Experimental results indicate that our multi-objective approach leads to a Pareto front of diverse high-quality micro encapsulating multiple possible tactics. This range of micro provided by the Pareto front enables a human or AI player to trade off among short-term tactics that better suit the player's longer-term strategy, for example, choosing to minimize friendly unit damage at the cost of only lightly damaging the enemy versus maximizing damage to the enemy units at the cost of increased damage to friendly units. We believe that our results indicate the usefulness of potential fields as a representation, and of evolutionary multi-objective optimization as an approach, for generating good micro.
Index Terms—NSGA-II, Influence Maps, Potential Fields, Game AI.
I. INTRODUCTION
Real-Time Strategy (RTS) games provide difficult challenges for computational intelligence researchers seeking to build artificially intelligent opponents and teammates for such games. In these games, players find and consume resources to build an economy to build an army to defeat an opponent in a series of skirmishes usually culminating in a large decisive battle. Good RTS game play embodies near-optimal sequential decision making in an uncertain environment under resource and time constraints against a deceptive, dynamic, and adaptive opponent (when playing against good players). Researchers have thus begun focusing on real-time strategy games as a new frontier for computational and artificial intelligence research in games [1].

RTS game play involves both long-term strategic planning and shorter-term tactical and reactive actions. The long-term planning and decision making, often called macromanagement, or just macro for short, can be contrasted with the quick but precise and careful control of game units in order to maximize unit effectiveness on the battlefield. This short-term control and decision making is often called micromanagement, or just micro, and good micro can win skirmishes even when a player has fewer units. This paper focuses on evolving good micro for groups of units of different types.

Although much diverse work has been done on generating good micro for RTS games, our work differs in two aspects. First, we use evolutionary multi-objective optimization to trade off two objectives: damage done versus damage received. Second, we represent unit behavior using multiple potential fields and an influence map whose parameters evolve to generate micro for groups composed from two types of units. Potential fields of the form c x^e, where x can be distance, health, or weapons cooldown, determine unit movement. Influence maps that give high values to map locations with more opponent units specify the location to move towards or to attack. This paper extends earlier work that used the same representation and Evolutionary Multi-Objective Optimization (EMOO) approach in evolving micro for one type of melee unit versus one type of ranged unit [2]. As in this earlier work, we use our own implementation of the NSGA-II algorithm by Deb [3].

Our results indicate that we can evolve micro for a group of ranged and melee units versus a group of the same number and types of ranged and melee units. The evolved micro performs well against hand-selected opponents under a variety of conditions. Without explicit representation, we see the emergence of kiting behavior for the ranged units, rushing behavior for the melee units, and strong melee units screening for the relatively weak ranged units. The Pareto front of evolved solutions contains a variety of tactics suitable for a variety of roles in the broader strategic situation in a particular game. For example, the GA evolves micro that maximizes damage to opponent units while also receiving significant damage, more balanced micro that deals and receives approximately equal amounts of damage, as well as micro that deals little damage but also receives little damage. In the broader picture, this enables a human or AI player to choose the appropriate micro for the current strategic situation. For example, a player may choose micro that prefers to reduce damage by harassing because it will tend to draw away opponent units from the main force or occupy existing opponent units at a distant location. We believe these results indicate the potential of a multi-objective approach for evolving good micro and of a potential fields representation of tactical behavior.

The remainder of this paper is organized as follows. Section II discusses related work in RTS AI research and common approaches to evolving the micro behavior of units. Section III describes our 3D simulation platform, FastEcslent. Section IV introduces the potential fields and influence maps that govern micro in simulated skirmishes; this section also describes the NSGA-II algorithm used to evolve the micro behavior. Section V presents results and discussion. Finally, the last section draws conclusions from our results and discusses future work.

II. RELATED WORK
RTS AI work is popular in both industry and academia. Industry RTS AI developers are more focused on entertainment while academic RTS AI research focuses on learning or reasoning techniques for winning. For example, Ballinger evolved robust build orders in WaterCraft [4]. Gmeiner proposed an evolutionary approach for generating optimal build orders [5]. Köstler evolved strategies for producing units of one or more types or producing units as quickly as possible [6]. There is also strong research interest in producing effective group behavior (good micro) in skirmishes, since good micro can often turn the tide in close battles. Liu used case-injected genetic algorithms to generate high quality micro [7]. Churchill presented a fast search method based on an alpha-beta considering durations (ABCD) algorithm for tactical battles in RTS games [8]. Again, Liu investigated hill climbers and canonical GAs to evolve micro behaviors in RTS games, showing that genetic algorithms were generally better at finding robust, high performance micro [9]. Louis and Liu evolved effective micro behavior based on influence maps and potential fields in RTS games [10]. Our paper extends the work in [10] and represents micro based on influence maps and potential fields for spatial reasoning and unit movement.

In physics, a potential field is usually a distance-dependent vector field generated by a force. The concept of an artificial potential field was first introduced by Khatib for robot navigation, and later this concept was found useful in guiding movement in games [11]. An influence map structures the world into a 2D or 3D grid and assigns a value to each grid element or cell. Liu compared two different micro representations, and the results indicate that even with less domain knowledge the potential fields based representation can evolve reliable, high quality micro in a three dimensional RTS game [12]. Schmitt used an evolutionary competitive approach to evolve micro using a potential fields based micro representation, and results show that their approach can evolve complex unit movement during skirmishes [13].

Early work used influence maps for spatial reasoning to evolve a LagoonCraft RTS game player [14]. Sweetser presented an agent which uses cellular automata and influence maps for decision making in a 3D game environment called EmerGEnt [15]. Bergsma proposed a game AI architecture which uses influence maps for a turn-based strategy game [16]. Preuss investigated an evolutionary approach to improve unit movement based on flocking and influence maps in the RTS game Glest [17]. Uriarte presented an approach to perform kiting behavior using influence maps in a multi-agent game environment called Nova [18].

Cooperation and coordination in multi-agent systems was the focal point of many studies [19], [20], [21], [22]. Reynolds' early work explores an approach to simulate bird flocking by creating a distributed behavioral model that results in artificial agent behavior much like natural flocking [23]. Similarly, Chuang studied controlling large flocks of unmanned vehicles using pairwise potentials [24]. Within the games community, Yannakakis [25] evolved opponent behaviors while Doherty [26] evolved tactical team behavior for teams of agents. Avery used an evolutionary computing algorithm to generate influence map parameters that led to effective group tactics for teams of entities against a fixed opponent [27], [28]. We define potential fields and influence maps in more detail later in the paper. This paper extends Liu's [12] and Louis' [2] work in dealing with micro for heterogeneous groups of units.

To run our experiments we created a simulation environment similar to StarCraft called FastEcslent, our open source, 3D, modular RTS game environment. The next section introduces this simulation environment in more detail.

III. SIMULATION ENVIRONMENT
With the release of the StarCraft II API, the StarCraft: Brood War API (BWAPI), and numerous tournaments such as the Open Real-Time Strategy Game AI Competition, the Artificial Intelligence and Interactive Digital Entertainment StarCraft AI Competition, and the Computational Intelligence and Games StarCraft RTS AI Competition, researchers have been motivated to explore diverse AI approaches in RTS games [29]. In this work, we ran our experiments in a game simulator called FastEcslent, developed for evolutionary computing research in games [30]. Unlike other available RTS-like engines, FastEcslent enables 3D movement and can run without graphics, thus providing simpler integration with evolutionary computing approaches.

We predefined a set of scenarios where each automated player controls a group of units initially spawned in different locations on a map with no obstacles. The entities used in FastEcslent reflect those in StarCraft, more specifically, Vultures and Zealots. A Vulture is a vulnerable unit with low hit-points but high movement speed and a ranged weapon, and is considered effective when outmaneuvering slower melee units. A Zealot is a melee unit with short attack range and low movement speed but high hit-points. Table I shows the details of these properties for both Vultures and Zealots as used in our experiments. Since our research focuses on micro behaviors in skirmishes, we disabled fog of war and enabled 3D movement by adding maximum (1000) and minimum (0) altitudes, as well as a climb rate constant, r_c, of 2.

TABLE I
UNIT PROPERTIES DEFINED IN FASTECSLENT

Property           Vulture   Zealot
Hit-points         80        160
MaxSpeed           64        40
MaxDamage          20        16*2
Weapons Range      256       224
Weapons Cooldown   1.1       1.24

In contrast to StarCraft, units move in 3D by setting a desired heading (dh), a desired altitude (da), and a desired speed (ds). Every time step, a unit tries to achieve its desired speed by changing its current speed (s) according to the unit's acceleration (r_s):

s = s ± r_s δt    (1)

where r_s is the unit's acceleration, δt is the simulation time step, and ± depends on whether ds is greater than or less than s. Similarly,

h = h ± r_t δt    (2)

and

a = a ± r_c δt    (3)

where h is heading, a is altitude, r_t is turn rate, and r_c is climb rate. From speed, heading, and altitude, we compute the 3D unit velocity (vel) and position (pos) as follows:

vel = (s * cos(h), 0, s * sin(h))
pos = pos + vel * δt
pos.y = a

Here, bold text indicates vector variables, the xz plane is the horizontal plane, the y-coordinate is height, and we assume the unit points along its heading.
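The following sketch implements this per-tick update (a minimal illustration; the `Unit` class and function names are ours, not FastEcslent's API, and heading wrap-around is omitted for brevity):

```python
import math

class Unit:
    def __init__(self, pos, heading, altitude, speed,
                 accel, turn_rate, climb_rate):
        self.pos = list(pos)   # [x, y, z]; y is height
        self.h = heading       # current heading (radians)
        self.a = altitude      # current altitude
        self.s = speed         # current speed
        self.r_s = accel       # acceleration r_s
        self.r_t = turn_rate   # turn rate r_t
        self.r_c = climb_rate  # climb rate r_c

def step_toward(current, desired, rate, dt):
    """Move `current` toward `desired` by at most rate*dt (Eqs. 1-3)."""
    delta = rate * dt
    if current < desired:
        return min(current + delta, desired)
    return max(current - delta, desired)

def update_kinematics(u, dh, da, ds, dt):
    """One simulation tick: chase the desired heading, altitude, and
    speed, then integrate velocity into position in the xz plane."""
    u.s = step_toward(u.s, ds, u.r_s, dt)  # Eq. 1
    u.h = step_toward(u.h, dh, u.r_t, dt)  # Eq. 2
    u.a = step_toward(u.a, da, u.r_c, dt)  # Eq. 3
    vel = (u.s * math.cos(u.h), 0.0, u.s * math.sin(u.h))
    u.pos[0] += vel[0] * dt
    u.pos[2] += vel[2] * dt
    u.pos[1] = u.a                         # the y-coordinate is height
    return u
```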
Given a simulation environment within which we can fight battles between unit groups from two different sides, we need an opponent to evolve against. We first describe our representation and then describe how we generate a good opponent to evolve against within this representation.

IV. METHODOLOGY

We create several game maps (or scenarios) with two types of units on each side. When we run a fitness evaluation, a decoded chromosome controls our units as they move, using potential fields, towards a target location defined by an influence map. This game simulation stops when all the units on one side die or time runs out. The simulation tracks the health of units and provides a multi-objective fitness (damage done, damage received) for this chromosome to drive evolution. The rest of this section describes the scenarios, potential fields, and influence maps used in our work.

Earlier work has shown that evolving (training) on a single map with fixed starting locations for all units did not result in robust micro [10]. We therefore train our units over five different scenarios and measure the robustness of evolved micro on 50 unseen randomly generated scenarios. In this work, randomly generated scenarios means only that units start at different initial positions at the beginning of a fitness evaluation. Scenarios are constructed from "clumps" and "clouds" of entities, each defined by a center and a radius. All units in a clump are distributed randomly within the sphere defined by this radius. Units in a cloud are distributed randomly within a fixed distance of the sphere boundary defined by the center and radius (see the sketch below).

We created two sides: player1 with Vultures and Zealots and player2 with Vultures and Zealots. The training scenarios are as follows: (a) a clump of player1 versus a clump of player2, (b) a clump of player1 units surrounded by a cloud of player2 units, (c) a clump of player2 units surrounded by a cloud of player1 units, (d) a set of player1 units spawned within a fixed range in all three dimensions centered at the origin and a set of player2 units spawned within the same range centered at a second location, and (e) the same distributions of units but with the players swapping their centers. Our evaluation function ran each of these five scenarios for every chromosome during fitness evaluation, and the value returned by the simulation for each objective is averaged over these scenarios. This results in evolving more reliable micro that can do well under different training scenarios.
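As an illustration, the clump and cloud constructions can be sketched as follows (a minimal sketch; the concrete radii, shell width, and unit counts are the experiment-specific values discussed above, and the function names are ours):

```python
import random

def random_point_in_sphere(center, radius):
    """Rejection-sample a uniformly random point inside a sphere."""
    while True:
        offset = [random.uniform(-radius, radius) for _ in range(3)]
        if sum(o * o for o in offset) <= radius * radius:
            return [c + o for c, o in zip(center, offset)]

def make_clump(center, radius, n_units):
    """All units distributed randomly within the sphere."""
    return [random_point_in_sphere(center, radius) for _ in range(n_units)]

def make_cloud(center, radius, shell, n_units):
    """Units distributed randomly within `shell` units of the sphere
    boundary defined by `center` and `radius`."""
    points = []
    while len(points) < n_units:
        p = random_point_in_sphere(center, radius + shell)
        dist2 = sum((a - b) ** 2 for a, b in zip(p, center))
        if dist2 >= (radius - shell) ** 2:  # keep points near the boundary
            points.append(p)
    return points
```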
Once a scenario starts running, units have to come up with a target location to attack. An influence map determines this target location.

A. Influence Maps
A typical IM is a grid defining spatial information in a game world, with values assigned to each grid cell by an IMFunction. These grid-cell values are computed by summing the influence of all units within a range, r, of the cell, where r is measured in number of cells. The IM not only considers unit positions in the game world but also includes the hit-points and weapon cooldown of each unit. The influence of a unit at the cell occupied by the unit is computed as the weighted linear sum of these factors. A unit's influence thus starts as this weighted linear sum at the unit's cell and decreases with distance from this cell by a factor I_f. NSGA-II evolves these parameters, and evolving units move towards the lowest IM grid-cell value [2], using potential fields to guide all movement.
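A minimal sketch of this IMFunction follows. The unit attributes (`cell`, `hitpoints`, `cooldown`), the Chebyshev cell distance, and the multiplicative per-cell decay are our illustrative assumptions; the weights and the decay factor I_f are among the evolved parameters:

```python
def influence_map(grid_w, grid_h, units, r, w_pos, w_hp, w_cd, decay):
    """Fill a 2D grid with unit influence. Each unit contributes a
    weighted linear sum of position, hit-points, and weapon cooldown
    at its own cell, decreasing with cell distance by `decay` (I_f),
    out to a range of r cells."""
    im = [[0.0] * grid_w for _ in range(grid_h)]
    for u in units:
        cx, cy = u.cell  # grid cell occupied by the unit
        base = w_pos + w_hp * u.hitpoints + w_cd * u.cooldown
        for y in range(max(0, cy - r), min(grid_h, cy + r + 1)):
            for x in range(max(0, cx - r), min(grid_w, cx + r + 1)):
                d = max(abs(x - cx), abs(y - cy))  # distance in cells
                im[y][x] += base * (decay ** d)
    return im

def target_cell(im):
    """Evolving units move toward the lowest-valued IM cell [2]."""
    return min(((x, y) for y in range(len(im))
                for x in range(len(im[0]))),
               key=lambda c: im[c[1]][c[0]])
```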
B. Potential Fields

We use potential fields to guide unit movement to the target location provided by the IM. Once near the opponent, we would like our units to maneuver well based on the location of enemy units, their health, and the state of their weapons. We thus define and use attractive and repulsive potential fields for each of these factors. Since the fields for friendly units should be different from the fields for enemy units, we use two such sets of potential fields. Finally, the target location also exerts an attractive potential. This results in a total of 2 (attractive, repulsive) × 3 (location, health, weapons state) × 2 (friend, enemy) + 1 (target) = 13 potential fields for guiding one type of unit's movement against an enemy also composed of only one type of unit. We use the same techniques from [2] to convert the vector sum of these potential fields into a desired heading and desired speed, and the same ranges of values for the potential field parameters.

Once we move to micro for groups composed from two types of units, the number of potential fields increases. Figure 1 shows the four sets of potential fields needed when dealing with groups composed from two types of units. Instead of one set of potential fields for friends and one set of potential fields for enemies, we need two sets of potential fields corresponding to the two types of friendly units, and two sets of potential fields for the two types of enemy units. Each type of friendly unit can then respond differently to the two types of friendly units and differently to both types of enemy units. Results show that these potential fields enable the evolution of high performance micro.

Fig. 1. Potential fields needed for groups composed from two types of units.

Equation 4 gives the total number of parameters required to deal with n different types of units on a side. A total of p attraction and repulsion potential field parameters is required between one type of unit and the units of each friendly or enemy type, with two parameters per potential field (c and e in the form c x^e). As different types of units are added to each side, a few parameters are counted multiple times, such as the potential fields generated by the distance between two different types of units on the same side; the summation in Equation 4 subtracts these extra counted parameters. q is a constant representing the target attraction potential and the IM parameters. Thus, for two different types of units on each side, a total of 106 parameters is required.

Number of parameters = (q + 2pn) n − Σ_{i∈n} p (i − 1)    (4)

These parameters provide a target location and guide unit movement towards the target (the sketch below illustrates how the fields translate into movement). If enemy units come within weapons range of a friendly unit, the friendly unit targets the nearest enemy unit. In our game simulation all entities can fire in any direction, even while moving from one location to another. With a good set of parameters the units evolve effective micro that tries to maximize damage done to enemy units while minimizing damage taken.
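To make the representation concrete, the following sketch shows how evolved (c, e) pairs might be summed into a desired heading and speed, and checks Equation 4. For brevity it folds each attraction/repulsion pair into one signed field; the parameter layout, the helper names, and the values p = 12 and q = 11 (chosen only because they reproduce the stated total of 106 for n = 2) are our assumptions, not FastEcslent's actual code:

```python
import math

def field(c, e, x):
    """One potential field of the form c * x^e; negative c repels.
    x may be distance, health, or weapons cooldown."""
    return c * (x ** e)

def steering_vector(unit, others, params):
    """Vector sum of the potential fields acting on `unit`.
    params[(side, unit_type), factor] holds an evolved (c, e) pair;
    there is one field set per friend/enemy unit type, plus target."""
    fx, fz = 0.0, 0.0
    for other in others:
        dx = other.pos[0] - unit.pos[0]
        dz = other.pos[2] - unit.pos[2]
        dist = math.hypot(dx, dz) or 1e-6
        key = (other.side, other.unit_type)  # selects the field set
        mag = (field(*params[key, 'dist'], dist)
               + field(*params[key, 'hp'], other.hitpoints)
               + field(*params[key, 'cd'], other.cooldown))
        fx += mag * dx / dist
        fz += mag * dz / dist
    # The IM-supplied target location exerts a purely attractive field.
    tx = unit.target[0] - unit.pos[0]
    tz = unit.target[2] - unit.pos[2]
    tdist = math.hypot(tx, tz) or 1e-6
    tmag = field(*params['target'], tdist)
    fx += tmag * tx / tdist
    fz += tmag * tz / tdist
    # Convert the summed vector into a desired heading and speed.
    dh = math.atan2(fz, fx)
    ds = min(math.hypot(fx, fz), unit.max_speed)
    return dh, ds

def num_parameters(n, p=12, q=11):
    """Equation 4; p and q here are inferred from the stated total
    of 106 parameters for n = 2, not stated values."""
    return (q + 2 * p * n) * n - sum(p * (i - 1) for i in range(1, n + 1))

assert num_parameters(2) == 106
```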
Although some work has combined damage done and damage received into one objective to be maximized, we keep the objectives separate and use an evolutionary multi-objective optimization approach to evolve a diverse Pareto front. Specifically, we use our implementation of the Fast Non-dominated Sorting Genetic Algorithm (NSGA-II) to evolve a Pareto front of micro behaviors for heterogeneous groups composed from two types of units. We try to maximize damage done to enemy units while minimizing damage to friendly units. Assume that we normalize damage done and damage received to span the range [0..1]; Equation 5 then describes our multi-objective optimization problem:

Maximize [ Σ_enemies D_e , Σ_friends (1 − D_f) ]    (5)

Here, D_e represents damage done to enemy units and D_f represents damage to friendly units. To minimize damage to friendly units we subtract D_f from the maximum damage possible, 1, which also turns the second objective into a maximization objective. This normalized, two-objective fitness function used within our NSGA-II implementation then produces the results described in our results section.
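A sketch of this evaluation, assuming a `simulate` routine (a stand-in for our actual simulator call) that runs one skirmish and returns per-unit damage normalized to [0, 1]:

```python
def multi_objective_fitness(scenarios, simulate, chromosome):
    """Average the two objectives of Equation 5 over the training
    scenarios, as described in Section IV."""
    damage_done, damage_avoided = 0.0, 0.0
    for scenario in scenarios:
        enemy_dmg, friendly_dmg = simulate(scenario, chromosome)
        damage_done += sum(enemy_dmg)                         # Σ over enemies of D_e
        damage_avoided += sum(1.0 - d for d in friendly_dmg)  # Σ over friends of (1 - D_f)
    n = len(scenarios)
    return damage_done / n, damage_avoided / n  # both maximized by NSGA-II
```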
In order to produce high quality micro behavior, finding agood opponent to play against is crucial. Instead of handcodingan opponent, we use a two step approach to find a goodopponent. First, we generated 30 random chromosomes thatwe used as opponents and ran NSGA-II against each oneof them with population size of 20 for 30 generations. Thebest opponent is the one that does most damage to friendlyunits. We thus choose the opponent chromosome that does themost damage as the next opponent. We then run our NSGAagainst this chromosome and choose the most balanced, closestto (0.5, 0.5), resulting chromosome from the last generationpareto front as the next opponent. We repeat this process fivetimes (five steps).Figure 2 shows the performance of randomly gen-erated chromosomes against the balanced individual fromthe last generation for each of the five steps above. Theline maked BO i represents the pareto front of these random chromosomes against the best balanced individual inthe i th step. The x-axis represents damage done, while they-axis represents 1 - damage received. The point (1 , thenrepresents micro that destroys all enemy units and receives nodamage. (1 , is micro that does destroys all opponents butalso loses all friendly units. (0 , usually indicates fleeingbehavior, friendly units deal no damage and receive no dam-age. (0 , is bad, friendly units did no damage and receivedmaximal damage - micro to be avoided. From the figure, wecan see that the chromosomes did worst against BO4.Finally, to confirm that BO4 would make a good opponentto evolve against, we picked random chromosomes andcompared BO4 against antother individual from the step fourpareto front. comes from our population of multipliedby the generations we run. Figure 3 shows how these ig. 2. Pareto front of 1000 random chromosome against BO1 to BO5 two individuals fare against these new random chromosomes.Clearly these individuals perform worse against BO4 and wethus chose BO4 as our opponent in the experiments describedbelow. Fig. 3. Comparing the pareto front of 3750 random chromosome against goodbalanced and good fleeing micro
V. RESULTS AND DISCUSSION
We use real-coded parameters with simulated binary crossover (SBX) along with polynomial mutation. After experimenting with different values, we settled on a single distribution index shared by crossover and mutation, and on high probabilities of crossover and mutation to drive diversity.
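For reference, minimal sketches of SBX and polynomial mutation for a single real-coded gene follow; `eta` is the distribution index (our concrete index and probability values were tuned as noted above):

```python
import random

def sbx_pair(x1, x2, eta):
    """Simulated binary crossover for one real-coded gene pair [3]."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1 + beta) * x1 + (1 - beta) * x2)
    c2 = 0.5 * ((1 - beta) * x1 + (1 + beta) * x2)
    return c1, c2

def polynomial_mutation(x, low, high, eta):
    """Polynomial mutation of one gene within its [low, high] bounds."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    return min(max(x + delta * (high - low), low), high)
```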
A. Pareto Front Evolution of Final Experiment

We evolved micro for groups of Vultures and Zealots versus an identical opponent also with Vultures and Zealots, and we report results over ten runs using a different random seed for each run. Figure 4 shows the evolution of the Pareto front at intervals of fifteen (15) generations for one run of our parallelized-evaluation NSGA-II. Broadly speaking, the Pareto front moves towards (1, 1) while maintaining representatives along the tradeoff curve for maximizing damage done and minimizing damage received. We can see the maintenance of a diverse set of micro making a diverse set of tradeoffs between damage done and received. These results provide evidence that we can evolve a diverse set of micro tactics that perform well against an existing opponent.

Fig. 4. Micro evolution for friendly units in the final experiment

To test the effectiveness of our evolutionary multi-objective optimization approach, we played a balanced individual and a fleeing individual from the final generation Pareto front against the same 3750 random chromosomes; Figure 5 shows the comparison. Videos of these skirmishes are available online at ~rahuld/Experiment/.

Fig. 5. Comparing evolved micro against 3750 random chromosomes

Figure 6 plots the combined Pareto front in the first generation over all ten random seeds versus the combined Pareto front in the last generation over the ten random seeds. That is, we first did a set union of the Pareto fronts in the ten initial randomly generated populations. The points in this union over all ten runs are displayed as purple + for the initial generation points and as green × for the final generation points. The figure then shows progress between the first and last generation over all ten runs. We can see that the last generation Pareto front produces micro ranging from one extreme on the left (0.02, 0.98), representing a strong tendency to flee, to the other extreme on the right (1, 0.25), denoting aggressive attacking micro behavior. There are a number of solutions near the middle with balanced micro behavior.

Fig. 6. The initial and final generation Pareto front over ten runs for evolved micro

To further check the robustness of our evolved micro on the last generation Pareto front, we selected one balanced, one fleeing, and one attacking example of micro from this last generation and played them against BO4 in 50 different randomly generated scenarios. In these scenarios, we randomly varied the numbers of Zealots and Vultures and made sure that both sides had identical units. Figure 7 shows the results, indicating that the evolved attacking micro (green ×s) comes in on the lower right, generally dealing significant damage while also receiving significant damage. On average over the 50 scenarios, the attacking micro deals the most damage while also receiving the most, the balanced micro deals and receives intermediate amounts of damage, and the fleeing micro deals the least damage while also receiving the least.

Fig. 7. Robustness of evolved micro on 50 random testing scenarios

A video at ~rahuld/Experiment/video1 shows how Vultures and Zealots controlled by our evolved attacking micro play against and defeat Vultures and Zealots controlled by BO4; the attacking micro manages to destroy all opposing units. A second video at ~rahuld/Experiment/video2 shows our attacking micro controlling Vultures and Zealots playing against Vultures and Zealots controlled by BO4. This is an example of the type of effective kiting that evolves over time.
VI. CONCLUSION

This paper focused on extending research in multi-objective optimization and potential fields based representations to evolve micro for groups composed from heterogeneous (two) types of units. We chose a group of ranged and melee units to play against another group of ranged and melee units, considering damage done and damage received as two objective functions. We use an evolutionary multi-objective optimization approach that maximizes damage done and minimizes damage received to tune influence map and potential field parameter values that lead to winning skirmishes in our scenarios.

We can see the emergence of kiting and other complex behavior as the population evolves. With the multi-objective problem formulation, the fast non-dominated sorting GA evolves Pareto fronts that produce a diverse range of micro behaviors. These solutions not only beat the opponent that they played against to determine fitness, but are robust to different numbers of opponents and can beat an opponent even when outnumbered.

Although this work dealt with two unit types, we would like to extend our work to multiple unit types and to reduce the need for a good opponent to evolve against. Since we had to manually co-evolve the opponent in this paper, we plan to investigate coevolutionary multi-objective approaches. We would like to use a multi-objective, co-evolutionary algorithm to co-evolve a range of micro that is robust against a range of opposition micro. Second, we plan to use the StarCraft II API to implement our approach and representation to evolve good micro to test against human experts.
REFERENCES

[1] S. Ontañón, G. Synnaeve, A. Uriarte, F. Richoux, D. Churchill, and M. Preuss. A survey of real-time strategy game AI research and competition in StarCraft. IEEE Transactions on Computational Intelligence and AI in Games, 5(4):1–19, 2013.
[2] S. J. Louis and S. Liu. Multi-objective evolution for 3D RTS micro. Neural and Evolutionary Computing, arXiv:1803.02943, 2018.
[3] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, 2002.
[4] C. Ballinger and S. Louis. Learning robust build-orders from previous opponents with coevolution. In Proc. IEEE Conf. Comput. Intell. Games, pages 1–8, 2014.
[5] B. Gmeiner, G. Donnert, and H. Köstler. Optimizing opening strategies in a real-time strategy game by a multi-objective genetic algorithm. In Research and Development in Intelligent Systems XXIX, pages 361–374, 2012.
[6] H. Köstler and B. Gmeiner. A multi-objective genetic algorithm for build order optimization in StarCraft II. KI-Künstliche Intelligenz, 27(3):221–233, 2013.
[7] S. Liu, S. J. Louis, and M. Nicolescu. Using CIGAR for finding effective group behaviors in RTS game. In IEEE Conference on Computational Intelligence in Games, pages 1–8, 2013.
[8] D. Churchill, A. Saffidine, and M. Buro. Fast heuristic search for RTS game combat scenarios. In Artificial Intelligence and Interactive Digital Entertainment Conference, pages 112–117, 2012.
[9] S. Liu, S. J. Louis, and M. Nicolescu. Comparing heuristic search methods for finding effective group behaviors in RTS game. In IEEE Congress on Evolutionary Computation, pages 1371–1378, 2013.
[10] S. Liu, S. Louis, and C. Ballinger. Evolving effective micro behaviors in real-time strategy games. IEEE Transactions on Computational Intelligence and AI in Games, 8(4):351–362, 2016.
[11] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90–98, 1986.
[12] S. Liu and S. J. Louis. Comparing two representations for evolving micro in 3D RTS games. In IEEE International Conference on Tools with Artificial Intelligence, pages 722–729, 2016.
[13] J. Schmitt and H. Köstler. A multi-objective genetic algorithm for simulating optimal fights in StarCraft II. In IEEE Conference on Computational Intelligence and Games, 2016.
[14] C. Miles, J. Quiroz, R. Leigh, and S. Louis. Co-evolving influence map tree based strategy game players. In IEEE Symposium on Computational Intelligence and Games, pages 88–95, 2007.
[15] P. Sweetser and J. Wiles. Combining influence maps and cellular automata for reactive game agents. In Intelligent Data Engineering and Automated Learning (IDEAL), pages 209–215, 2005.
[16] M. Bergsma and P. Spronck. Adaptive spatial reasoning for turn-based strategy games. In Fourth Artificial Intelligence and Interactive Digital Entertainment Conference, pages 161–166, 2008.
[17] M. Preuss, N. Beume, H. Danielsiek, T. Hein, B. Naujoks, N. Piatkowski, R. Stuer, A. Thom, and S. Wessing. Towards intelligent team composition and maneuvering in real-time strategy games. IEEE Transactions on Computational Intelligence and AI in Games, 2(2):82–98, 2010.
[18] A. Uriarte and S. Ontañón. Kiting in RTS games using influence maps. In Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, pages 31–36, 2012.
[19] M. Barbuceanu and M. Fox. COOL: A language for describing coordination in multi agent systems. In Proceedings of the First International Conference on Multi-Agent Systems, pages 17–24, 1995.
[20] N. Jennings. Commitments and conventions: The foundation of coordination in multi-agent systems. Knowledge Engineering Review, 8(3):223–250, 1993.
[21] N. Jennings. Controlling cooperative problem solving in industrial multi-agent systems using joint intentions. Artificial Intelligence, 75(2):195–240, 1995.
[22] R. Olfati-Saber, J. Fax, and R. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233, 2007.
[23] C. Reynolds. Flocks, herds and schools: A distributed behavioral model. Computer Graphics, 21(4):25–34, 1987.
[24] Y. Chuang, Y. Huang, M. D'Orsogna, and A. Bertozzi. Multi-vehicle flocking: Scalability of cooperative control algorithms using pairwise potentials. In IEEE International Conference on Robotics and Automation, pages 2292–2299, 2007.
[25] G. Yannakakis and J. Hallam. Evolving opponents for interesting interactive computer games. From Animals to Animats, 8:499–508, 2004.
[26] D. Doherty and C. O'Riordan. Evolving tactical behaviours for teams of agents in single player action games. Pages 121–126, 2006.
[27] P. Avery, S. Louis, and B. Avery. Evolving coordinated spatial tactics for autonomous entities using influence maps. In IEEE Computational Intelligence and Games, pages 341–348, 2009.
[28] P. Avery and S. Louis. Coevolving team tactics for a real-time strategy game. In IEEE Congress on Evolutionary Computation, pages 1–8, 2010.
[29] M. Buro and D. Churchill. Real-time strategy game competitions.