Modeling Game Avatar Synergy and Opposition through Embedding in Multiplayer Online Battle Arena Games
Zhengxing Chen, Yuyu Xu, Truong-Huy D. Nguyen, Yizhou Sun, Magy Seif El-Nasr
College of Information and Computer Science, Northeastern University
Department of Computer and Information Science, Fordham University
Department of Computer Science, University of California, Los Angeles
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract
Multiplayer Online Battle Arena (MOBA) games have received increasing worldwide popularity recently. In such games, players compete in teams against each other by controlling selected game avatars, each of which is designed with different strengths and weaknesses. Intuitively, putting together game avatars that complement each other (synergy) and suppress those of opponents (opposition) would result in a stronger team. An in-depth understanding of synergy and opposition relationships among game avatars benefits players in making decisions during avatar drafting and in better predicting match events. However, due to the intricate design of and complex interactions between game avatars, a thorough understanding of their relationships is not a trivial task.

In this paper, we propose a latent variable model, namely Game Avatar Embedding (GAE), to learn avatars' numerical representations, which encode synergy and opposition relationships between pairs of avatars. The merits of our model are twofold: (1) the captured synergy and opposition relationships are sensible to experienced human players' perception; (2) the learned numerical representations of game avatars allow many important downstream tasks, such as similar avatar search, match outcome prediction, and avatar pick recommendation. To the best of our knowledge, no previous model is able to simultaneously support both features. Our quantitative and qualitative evaluations on real match data from three commercial MOBA games illustrate the benefits of our model.
Introduction
Multiplayer Online Battle Arena (MOBA) has been one of the most popular e-sports game genres. For example, League of Legends (Riot Games), one of the most popular e-sports titles, reportedly has 90 million registered accounts, 27 million unique daily players, and 7.5 million concurrent users at peak (Minotti 2016; Tassi 2016). In a MOBA game, two teams, each composed of five players, combat in a virtual environment. The goal is to beat the opposing team and destroy their base. Game avatars are often designed with a variety of attributes, skills, roles, etc., which is intended to provide players with choices and options so that every player can find a character that fits their preferences. Moreover, it is customary for avatars in such games to possess strength in one aspect but weakness in others. As such, in order to win a match, it is
well known that players need to not only control their own game avatars well, but also select a game avatar that, together with other team members' picks, forms a team whose members enhance each other's skills and complement each other's weaknesses (synergy), while posing suppressing strengths over those in the opponent team (opposition). For example, in DOTA 2 (Valve Corporation), the avatar Clockwerk has high synergy with Naix because Clockwerk can transport Naix directly to a target enemy, compensating for Naix's limited mobility. Naix in turn delivers large damage to complement Clockwerk's limited attack, making them an efficient fighting duo. In another example, the avatar Anti-Mage's mana burn skill reduces an opponent's mana resource, making him a natural opposition to Medusa, whose durability relies completely on how much mana she has.

A comprehensive understanding of synergy and opposition relationships between game avatars enhances player awareness and experience in games. First, it allows players to make good decisions in drafting their team's game avatars to maximize the chance of winning. Second, it improves the prediction of a match's progress and final outcome, which helps players prepare strategies in advance. Lastly, it helps players discover other game avatars that match their personal expertise or preferences. However, due to the intricate design of and complex interactions among game avatars, a thorough understanding of game avatars' pros and cons and their relationships is not a trivial task for human players.

In order to model game avatars' synergy and opposition relationships, we propose a latent variable model, called
Game Avatar Embedding (GAE). GAE models game avatars as vectors in a learned low-dimensional space. We hypothesize that the probability function of a match outcome is composed of pairwise synergy and opposition interactions formulated from game avatar vectors. Game avatar vectors and other model parameters are learned by gradient descent through maximizing the likelihood function of all observed match outcomes. Latent variable models (LVM) and embedding techniques have been shown to successfully capture characteristics of entities in texts (Mikolov et al. 2013), graphs (Maaten and Hinton 2008), and recommendation systems (Koren, Bell, and Volinsky 2009). The advantages of these techniques include: 1. less manual feature engineering required; 2. robust learning even when the data sparsity problem is present; and 3. better reusability for downstream tasks than purely predictive algorithms such as Gradient Boosting Decision Trees (Friedman 2001) and Factorization Machines (Rendle 2010).

Inheriting these advantages of LVMs, GAE is, to the best of our knowledge, the first model that not only captures synergy and opposition relationships robustly, but also facilitates many important downstream tasks which consume game avatar vectors as input, e.g., similarity search on game avatars, team composition analysis (Kim et al. 2016; Agarwala and Pearce 2014), and match outcome prediction (Yang, Qin, and Lei 2016).

In sum, the contributions of our paper are multi-fold:
1. we describe a novel Game Avatar Embedding (GAE) model which characterizes game avatars as vectors in terms of synergy and opposition;
2. we demonstrate the effectiveness of the model via quantitative experiments on real data from three commercial MOBA games;
3. we showcase how our model facilitates downstream tasks such as similar avatar search and avatar pick recommendation, which off-the-shelf machine learning models cannot accomplish.

Our paper is structured as follows. We first present related work, then introduce the GAE model with details on model specification and the learning process. Next, we report quantitative evaluation results of GAE, followed by two case studies. Finally, we conclude our paper and discuss limitations as well as future directions.

Related Works
As we adopt an embedding model to uncover synergy and opposition relationships among game avatars from team-based outcomes in MOBA games, our work aligns with recent research in team formation analysis, MOBA game research, and embedding-based modeling methods.
Team Formation Analysis
Team formation analysis is the research topic that aims to uncover factors that impact team performance. Team formation has already been studied in MOBA games (Pobiedina et al. 2013a; Semenov et al. 2016). (Pobiedina et al. 2013a) verified that team success depends on a successfully selected combination of game avatars. (Semenov et al. 2016) predicts MOBA outcomes using a 2-way factorization machine (Rendle 2010), which can reliably estimate the levels of pairwise relationships through factorized parameterization. However, their method does not naturally derive meaningful numerical representations of game avatars that can be analyzed and utilized for many other downstream applications.

Although we focus on video game data in this paper, team formation analysis could help advance many other domains, such as social networks (Lappas, Liu, and Terzi 2009; Anagnostopoulos et al. 2012), crowdsourcing (Rahman et al. 2015; Kittur 2010; Roy et al. 2015), and robotics (Liemhetcharat and Veloso 2012). Existing works that also use machine learning to learn characterizations of team members (Liemhetcharat and Veloso 2012; Rahman et al. 2015) differ from our work in that: (1) the dimensions of team member characterization are often pre-defined and fixed, such as a fixed set of skills, which requires manual effort and domain knowledge; and (2) no opposition relationship has been modeled.
MOBA Game Research
The rich design of MOBA games has attracted a variety of research, for example, team formation analysis (Pobiedina et al. 2013a; Pobiedina et al. 2013b; Neidhardt, Huang, and Contractor 2015; Kim et al. 2016; Agarwala and Pearce 2014), skill decomposition (Chen et al. 2016), and match outcome prediction and avatar pick recommendation systems (Bhattacharya and Sabik ). These works shed light on real-world problems or facilitate building adaptive player experiences (Nguyen, Chen, and El-Nasr 2015). Many of these tasks rely on processing vectors which encode characteristics of game avatars. For example, in team formation analysis, team diversity is calculated as the average pairwise cosine distance between game avatars' attribute vectors. Principal Component Analysis (Jolliffe 2002) and t-SNE (Maaten and Hinton 2008), two frequently used dimension reduction techniques in clustering and visualization, also operate on entities' vectors. Our GAE model induces game avatar vectors encoding their synergy and opposition relationships, which can facilitate the many downstream tasks that operate on vectors.
LVMs and Embedding Models
LVMs/embedding models have long been studied in Natural Language Processing (NLP) (Mikolov et al. 2013), graphs (Maaten and Hinton 2008), and recommendation systems (Koren, Bell, and Volinsky 2009). In this family of models, entities are associated with vectors in a shared, continuous low-dimensional space which encode entities' characteristics efficiently and effectively. We will use "vectors" and "embeddings" interchangeably to refer to the numerical representations of entities.

Some salient advantages of embedding models/LVMs are as follows: 1. they require little human labor for feature engineering, because entity vectors can be learned from labels or from what is observed explicitly (e.g., links between nodes in a graph, a user-item matrix, word sequences); 2. information between entities can be shared more effectively during the learning phase; for example, similar embeddings can be learned for two similar words if they often occur in similar contexts, even if they never appear together; 3. the learned vector representations can be reused by many kinds of applications, such as sentiment analysis (Maas et al. 2011) and data visualization (Maaten and Hinton 2008).

In our paper, game avatars are embedded as low-dimensional vectors. Their values are learned (supervised) through maximizing the winning probabilities (defined in Section 3) of all observed match outcomes. We will show in Performance Evaluation and Case Study that the learned game avatar embeddings indeed capture sensible team-related characteristics and allow for other downstream applications, such as similar avatar search and avatar pick recommendation. This cannot be achieved by previous methods such as Logistic Regression, Factorization Machines (Semenov et al. 2016), and Gradient Boosting Decision Trees (Friedman 2001), which simply predict match outcomes without a means to derive game avatar embeddings that can be reused in other tasks.
Preliminary and Problem Definition
Suppose the training data is a match set $\mathcal{M} = \{M_1, M_2, \cdots, M_Z\}$ with $Z$ matches. There are $N$ unique game avatars appearing in total, denoted by $\mathcal{A} = \{A_1, A_2, \cdots, A_N\}$. We assume each match is competed between two teams, the red team and the blue team. We use $\mathcal{T}_{z,r} = \{A_i\}$ and $\mathcal{T}_{z,b} = \{A_j\}$ to denote the sets of game avatars in the red team and the blue team in $M_z$, respectively. Since we are studying 5-vs-5 MOBA games, we have $|\mathcal{T}_{z,r}| = 5$ and $|\mathcal{T}_{z,b}| = 5$ for all $z$. We use $\mathcal{T} = \{(\mathcal{T}_{z,r}, \mathcal{T}_{z,b}) \mid z = 1, \cdots, Z\}$ to denote all game avatar line-ups of $\mathcal{M}$.

Match outcomes are marked as $\mathcal{O} = \{o_1, o_2, \cdots, o_Z\}$. $o_z = 1$ means the red team wins over the blue team in $M_z$; otherwise $o_z = 0$. We use $p(o_z = 1)$ and $p(o_z = 0)$ to denote the winning probability from the view of the red team and the blue team, respectively. Hence, $p(o_z = 0) = 1 - p(o_z = 1)$.

Game Avatar Embedding Model
In this section, we describe the proposed model and the learning process, and discuss its relationship with Factorization Machines, a related model.
Model Synergy and Opposition
Inspired by embedding methods, which have managed to learn low-dimensional vectors that capture abundant attributes of entities, we propose to map characteristics of game avatars into a low-dimensional latent space. For a game avatar $A_i$, its latent feature vector is denoted as $a_i \in \mathbb{R}^K$. $A \in \mathbb{R}^{N \times K}$ is the latent feature matrix such that $A = \{a_i\}$.

We choose a bilinear model to capture synergy and opposition relationships between pairs of avatars. The bilinear model allows us to separately learn game avatar embeddings, as well as the matrices that determine the extents of synergy and opposition across different dimensions of game avatar embeddings.

First, we introduce the intra-team synergy score function $S(i, j)$, which calculates the level of synergy that $A_i$ exerts on $A_j$ in the same team:

$$S(i, j) = a_i^T \cdot P \cdot a_j = \sum_{m=1}^{K} \sum_{n=1}^{K} a_{im} \cdot p_{mn} \cdot a_{jn} \quad (1)$$

$P \in \mathbb{R}^{K \times K}$ is named the synergy matrix. There are two ways to understand $P$ intuitively:
1. one can think of $a_i^T \cdot P = a_i'$ as converting $A_i$'s embedding into the dimensions that $A_j$ looks for in a helpful teammate; then, the higher the dot product between $a_i'$ and $a_j$, the higher the synergy the two game avatars can build;
2. alternatively, one can think that $p_{mn}$ measures how much the $m$-th dimension of $a_i$ fits the $n$-th dimension of $a_j$ in terms of intra-team interaction.

Second, we define the inter-team opposition score function $C(i, j)$, which quantifies the extent to which $A_i$ counters $A_j$ in the opposite team:

$$C(i, j) = a_i^T \cdot Q \cdot a_j = \sum_{m=1}^{K} \sum_{n=1}^{K} a_{im} \cdot q_{mn} \cdot a_{jn} \quad (2)$$

$Q \in \mathbb{R}^{K \times K}$ is named the opposition matrix. In a similar way to understanding $P$, $q_{mn}$ measures the influence of $A_i$ countering $A_j$, given their embeddings' interaction on the $m$-th and $n$-th dimensions, respectively.

Note that $P$ and $Q$ are not necessarily symmetric, as the level of opposition with which $A_i$ suppresses $A_j$ could be different from that of $A_j$ on $A_i$.

In this model, we only capture pairwise relationships because they are much more prevalent. We also find that advanced models such as Gradient Boosting Decision Trees (Friedman 2001), which can potentially consider more intricate relationships, do not improve the match outcome prediction task on any of the three datasets we study (see Section Performance Evaluation). Still, it is possible to extend GAE to higher-order interactions by modeling them using tensors and tensor operations (Kolda and Bader 2009). We will explore this aspect in the future.
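As a concrete illustration, the two bilinear score functions can be sketched in a few lines of NumPy. The parameters below are random stand-ins for learned values, and the function names are ours, not part of the model specification:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 4                      # 10 avatars, K = 4 latent dimensions
A = rng.normal(size=(N, K))       # avatar embeddings (rows are the a_i)
P = rng.normal(size=(K, K))       # synergy matrix of Eqn. 1
Q = rng.normal(size=(K, K))       # opposition matrix of Eqn. 2

def synergy(i, j):
    """S(i, j): level of synergy avatar i exerts on teammate j."""
    return A[i] @ P @ A[j]

def opposition(i, j):
    """C(i, j): extent to which avatar i counters opponent j."""
    return A[i] @ Q @ A[j]
```

Because P and Q are not constrained to be symmetric, `synergy(i, j)` and `synergy(j, i)` generally differ, matching the asymmetry discussed above.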
Model Winning Probability
Next, we propose to model a winning outcome as the linear breakdown of the individual biases of the game avatars from the two teams involved, together with their intra-team and inter-team interactions. Individual biases represent game avatars' intrinsic control difficulty, which affects match outcomes, denoted as $b = \{b_1, b_2, \cdots, b_N\}$. Hence, the winning probability $p(o_z = 1)$ of a match $M_z$ is defined as follows:

$$p(o_z = 1) = \sigma \Big( \sum_{i \in \mathcal{T}_{z,r}} b_i - \sum_{j \in \mathcal{T}_{z,b}} b_j + \sum_{\substack{i,j \in \mathcal{T}_{z,r} \\ i \neq j}} S(i, j) - \sum_{\substack{i,j \in \mathcal{T}_{z,b} \\ i \neq j}} S(i, j) + \sum_{i \in \mathcal{T}_{z,r}} \sum_{j \in \mathcal{T}_{z,b}} \big( C(i, j) - C(j, i) \big) \Big) \quad (3)$$

where $\sigma(\cdot)$ is the sigmoid function $\sigma(x) = \frac{1}{1 + \exp(-x)}$.

The input of the sigmoid function is the sum of the differences in: (1) individual biases towards winning, (2) synergy strength inside each team, and (3) opposition intensity against the opponent team. The latter two differences depend on traversing all valid pairs of game avatars within the same team or across the two teams. The larger the differences, the closer $p(o_z = 1)$ is to 1, meaning the advantaged team is more likely to win.

Note that in our formulation, players' individual skill levels are not accounted for in the winning probability. This is reasonable, since the data we collected is from highly selective ranked matches. Most commercial MOBA games have proprietary matchmaking systems to ensure that only sufficiently experienced players with similar skill levels are allowed to compete in ranked matches (the type of matches we study). Therefore, the chance of results being skewed by data from incompetent players is low.

Objective Function and Learning
Assuming that each match is independent, the overall likelihood function is:

$$p(\mathcal{O}, \mathcal{T} \mid A, P, Q, b) = \prod_{z=1}^{Z} p(o_z = 1)^{o_z} \, p(o_z = 0)^{1 - o_z} \quad (4)$$

The objective is to minimize the negative log-likelihood with respect to $\Theta = \{A, P, Q, b\}$:

$$J(\Theta) = -\frac{1}{Z} \sum_{z=1}^{Z} \big( o_z \log p(o_z = 1) + (1 - o_z) \log p(o_z = 0) \big) \quad (5)$$

For parameter learning, we use AdaGrad (Duchi, Hazan, and Singer 2011) to update parameters based on a small batch of matches in each iteration.

Relation to Factorization Machine Model
GAE has a close relationship with the 2-way factorization machine (FM) (Rendle 2010), which has been applied in (Semenov et al. 2016) to predict match outcomes of the same kind of games. In (Semenov et al. 2016), for a match $M_z$, the feature vector $x_z \in \{0, 1\}^{2N}$ is a binary vector indicating which five avatars appear in the red and blue teams, respectively:

$$x_{zi} = \begin{cases} 1, & \text{if } i \leq N \text{ and avatar } i \text{ was in the red team,} \\ & \text{or } i > N \text{ and avatar } i - N \text{ was in the blue team} \\ 0, & \text{otherwise} \end{cases} \quad (6)$$

and the FM models a winning probability by additionally exploring pairwise interactions between non-zero features:

$$p(o_z = 1) = \sigma \Big( \sum_{i \in \mathcal{T}_{z,r}} c_i + \sum_{j \in \mathcal{T}_{z,b}} c_{j+N} + \sum_{\substack{i,j \in \mathcal{T}_{z,r} \\ i < j}} \langle v_i, v_j \rangle + \sum_{\substack{i,j \in \mathcal{T}_{z,b} \\ i < j}} \langle v_{i+N}, v_{j+N} \rangle + \sum_{i \in \mathcal{T}_{z,r}} \sum_{j \in \mathcal{T}_{z,b}} \langle v_i, v_{j+N} \rangle \Big) \quad (7)$$

where $c_i$ is the bias of feature $i$ and $v_i$ is its factorized latent vector.
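To make the model concrete, here is a minimal sketch of the GAE forward pass (Eqn. 3) and the per-match term of the objective in Eqn. 5, with random stand-ins for the learned parameters. Note that the score inside the sigmoid of Eqn. 3 is antisymmetric in the two teams, so the two teams' winning probabilities sum to one:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 10, 4
A = rng.normal(size=(N, K)) * 0.1    # avatar embeddings (random stand-ins)
P = rng.normal(size=(K, K)) * 0.1    # synergy matrix
Q = rng.normal(size=(K, K)) * 0.1    # opposition matrix
b = rng.normal(size=N) * 0.1         # per-avatar biases

def win_prob(red, blue):
    """Eqn. 3: p(o_z = 1), the red team's winning probability."""
    score = b[red].sum() - b[blue].sum()                 # bias difference
    score += sum(A[i] @ P @ A[j]                         # red-team synergy
                 for i in red for j in red if i != j)
    score -= sum(A[i] @ P @ A[j]                         # blue-team synergy
                 for i in blue for j in blue if i != j)
    score += sum(A[i] @ Q @ A[j] - A[j] @ Q @ A[i]       # inter-team opposition
                 for i in red for j in blue)
    return 1.0 / (1.0 + np.exp(-score))

def nll(red, blue, o_z):
    """One match's term of the negative log-likelihood in Eqn. 5."""
    p = win_prob(red, blue)
    return -(o_z * np.log(p) + (1 - o_z) * np.log(1 - p))

red, blue = [0, 1, 2, 3, 4], [5, 6, 7, 8, 9]
```

Training would then minimize the average of `nll` over all observed matches with AdaGrad, as described above.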
Performance Evaluation

We evaluate the utility of GAE using datasets collected from three commercial MOBA games, namely Defense of the Ancients 2 (DOTA2), Heroes of Newerth (HoN), and Heroes of the Storm (HotS). All data is from 5-vs-5 matches that pit ten random players in two teams against each other. No major game update affecting the mechanics of the games occurred during the data collection phase. All three games employ matchmaking systems that select only players of similar skill levels when assembling a match. All three datasets have roughly balanced winning outcomes for both the red and blue teams. Statistics of the three datasets are shown in Table 1.

The HotS match data was downloaded from a third-party game log website (hotslogs.com/Info/API); all the matches happened during the last month of 2016. The HoN dataset was collected by (Suznjevic, Matijasevic, and Konfic 2015) and contains matches played between December 20, 2014 and April 29, 2015. Finally, for DOTA2, we use the original dataset collected between February 11, 2016 and March 2, 2016 by Semenov et al. (Semenov et al. 2016), and extract a subset of matches played by gamers with similar skill levels (i.e., normal level).

Experiment Setup and Results
There are two experiments designed to assess the effectiveness of GAE. The first is a numerical evaluation in terms of outcome prediction, while the second evaluates GAE's interpretability using human experts, as compared to other state-of-the-art methods.

Table 1: Statistics of datasets (HotS, HoN, DOTA2).

Table 2: Test AUC of match outcome prediction for LR, GBDT, FM, and GAE on HotS, HoN, and DOTA2; (*) indicates where GAE outperforms with p-values < 0.001 (LR: 0.6095 on HotS; GBDT: 0.6375 on HotS; FM: 0.6440 on HotS and 0.6154 on HoN).

Outcome Prediction Results
First, we evaluate the match outcome prediction performance of GAE against well-known baselines, including Logistic Regression (LR), Gradient Boosting Decision Trees (GBDT), and the 2-way Factorization Machine (FM), with baseline implementations from scikit-learn (http://scikit-learn.org/), XGBoost (https://github.com/dmlc/xgboost), and fastFM (https://github.com/ibayer/fastFM). For each game dataset, we adopt a 10-fold cross-validation procedure with the train:validate:test ratio set to 8:1:1. In each fold, a model with different configurations of hyperparameters (e.g., regularization penalty, the number of trees, the dimension of the latent space, etc.) is trained on the training set, and the best hyperparameters are determined according to the classification performance on the validation set. The classification performance of the model with the best hyperparameters on the test set is recorded as the final measurement of its classification strength. For GAE, we use Eqn. 3 to predict outcomes on test sets. For baseline models, Eqn. 6 is used to construct feature vectors, similar to how it is done in previous works. The area under the ROC curve (AUC) is used as the classification performance measurement. Ten test AUCs are recorded during the 10-fold cross-validation for each model (LR, GBDT, FM, and GAE) so that classification performance can be compared using a paired t-test (with confidence level 0.001).

Table 2 reports the classification performance of all models in match outcome prediction. The paired t-tests showed that GAE has significantly higher test AUC than the other models, except for GAE vs. FM in HotS and DOTA2. We observe that LR has the worst classification AUC in all three games. That is not surprising, because LR does not model interactions between avatars; this verifies that team synergy and opposition between game avatars do exist. GBDT is a tree-based model that could handle interactions among more than two game avatars. However, it achieves statistically worse results than GAE. This demonstrates: (1) the strength of embedding methods in effectively encoding meaningful information about pairwise synergy and opposition relationships in a low-dimensional space; and (2) that much more data might be needed for GBDT to fully capture more complicated relationships. When GAE and FM are tuned with a proper number of latent space dimensions K, they achieve comparable AUC in HotS and DOTA2. This verifies our expectation in Section Relation to Factorization Machine Model that GAE and FM should have similar outcome prediction performance, because both rely on factorization techniques to quantify pairwise interactions. However, the exception is HoN, where GAE is statistically significantly better than FM, and where GAE appears to have a smaller improvement over LR than in other games. We will investigate the characteristics of HoN compared to other MOBA games in the future. Overall, GAE predicted match outcomes well and robustly.

Table 3: Pearson's r between human ratings and GAE/baseline scores on the three evaluation sets of pairs (boldface indicates p-values < 0.001); rows are Win-Ratio Matrix (similarity: 0.5258), FM, and GAE, and columns are Similarity, Synergy, and Opposition.
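The significance test above is a standard paired t-test over the ten per-fold test AUCs. The sketch below computes the t statistic from scratch on made-up AUC values; the numbers are illustrative only, not our measured results:

```python
import math

# Hypothetical per-fold test AUCs for two models over the same 10 folds
# (illustrative numbers, not the paper's measurements).
auc_gae = [0.645, 0.641, 0.648, 0.644, 0.650, 0.643, 0.646, 0.649, 0.642, 0.647]
auc_lr  = [0.610, 0.608, 0.612, 0.609, 0.611, 0.607, 0.613, 0.610, 0.606, 0.612]

diffs = [a - b for a, b in zip(auc_gae, auc_lr)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance of the differences
t_stat = mean / math.sqrt(var / n)                    # paired t statistic with n-1 dof

# The two-sided critical value for p < 0.001 with 9 degrees of freedom
# is approximately 4.781 (standard t-table value).
significant = t_stat > 4.781
```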
Human Evaluation
Second, we would like to validate how sensible GAE's results are as compared to the judgment of experts, i.e., experienced human players. We ask human players to rate pairs of game avatars in terms of similarity, synergy, and opposition. Intuitively, if a model's scores are highly correlated with human ratings, we conclude that the model generates sensible results. Since the recruitment of knowledgeable players is relatively expensive, we only evaluate on the DOTA2 dataset. Based on a pilot test with three DOTA2 players, 60 pairs are selected which have clear similarity, synergy, and opposition relationships (20 pairs for each kind of relationship). For example, the 20 similarity evaluation pairs include both very similar and very different pairs of game avatars, because either kind is expected to be evaluated consistently by subjects.

When using GAE to evaluate the pairs, similarity is determined by the cosine similarity between the learned game avatar embeddings. Synergy is determined by $S(i, j) + S(j, i)$ and opposition by the absolute value of $C(i, j) - C(j, i)$ for any pair of game avatars $A_i$ and $A_j$. Note that besides GAE, we are not aware of any approach that can handle similarity, synergy, and opposition queries all in a single model. FMs can naturally answer synergy and opposition queries; more specifically, two avatars' synergy and opposition levels can be obtained using the left-hand sides of Eqn. 8 and Eqn. 10, respectively. However, they are not designed for similarity search, so we created an ad-hoc baseline method that computes avatars' similarity as the cosine similarity between the respective rows of a win-ratio matrix $W \in \mathbb{R}^{N \times 2N}$, constructed as:

$$W_{i,j} = \text{ratio of matches that } (A_i, A_j) \text{ win when on the same team} \quad (11)$$

$$W_{i,j+N} = \text{ratio of matches that } A_i \text{ wins over } A_j \text{ when on opposite teams} \quad (12)$$

To collect human ratings, we created a survey asking subjects to rate, on a 5-point Likert scale, the level of similarity, synergy, or opposition of the 60 pairs, with 1 as "not at all" and 5 as "very much", and asked ten similarly skillful DOTA2 players to provide their ratings. We compute Pearson's r between the human ratings and the GAE/baseline scores on the 20 pairs of each kind of relationship.

We compared the correlations (Pearson's r) between human ratings and those by GAE and the baseline. Better correlation corresponds to more sensible results from the players' perspective. As shown in Table 3, for similarity queries, GAE's results correlate better with human ratings than the baseline's, suggesting that similarity search based on the embeddings learned by GAE is more sensible. For synergy and opposition queries, both GAE and the baseline correlate with human ratings with high Pearson's r (p-value < 0.001), which indicates both methods are sensible to human players. This can be explained by the similarity of FM's and GAE's approaches in using the factorization/embedding technique to model pairwise interactions.

Case Study
In this section, we qualitatively demonstrate the quality and utility of GAE within the context of practical applications. All analyses are done with the help of three seasoned DOTA2 players, conducted on GAE's results ($K = 75$) as learned from the DOTA2 data. Game avatars are called heroes in DOTA2.

Application - Similarity Search
One direct downstream application utilizing GAE's game avatar vectors is similarity search. It can help players, whether beginners or pros, expand their hero pools by recommending heroes similar to those they are already familiar with or good at. For example, given the input hero Clinkz, the top three heroes GAE returns are Weaver, Riki, and Mirana. After examining the results, the three seasoned DOTA2 players all agreed that the top three heroes are very similar to Clinkz, as they are all Agility Carry heroes with low hit points and great escape capability, sharing a stealthy play style.
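Similarity search over the learned embeddings reduces to a cosine nearest-neighbor query. Below is a sketch, with a random matrix standing in for the learned embeddings and a helper name of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 10, 4
A = rng.normal(size=(N, K))   # learned avatar embeddings (random stand-in)

def most_similar(i, top_k=3):
    """Indices of the top_k avatars closest to avatar i by cosine similarity."""
    norms = np.linalg.norm(A, axis=1)
    sims = (A @ A[i]) / (norms * norms[i])   # cosine similarity to every avatar
    sims[i] = -np.inf                        # exclude the query avatar itself
    return list(np.argsort(sims)[::-1][:top_k])
```

Given a hero-to-index mapping, the query for Clinkz would simply call this helper on Clinkz's index.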
Application - Personalized Recommendation
Kim et al. (Kim et al. 2016) suggested that the ideal game avatar to maximize the winning chance should fit the player's personal expertise and the team's congruency in parallel; guided by this, GAE could be used for a personalized avatar pick recommendation system. We select a real match played by one of our DOTA2 players for illustration, although the real implementation and verification of this idea requires more work in the future.

In a ranked match, the player is the last to pick a hero; his team has picked Puck, Ember Spirit, Lion, and Necrophos, and the opposing team has picked Silencer, Pudge, Sand King, Juggernaut, and Anti-Mage. Given 30 seconds to make the pick, he wants to prioritize a hero selection that synergizes with his team and opposes the other team. Using Eqn. 3 to search for the hero that maximizes the winning probability, GAE returns the top recommendation, Ursa. However, the player has not played Ursa before and is thus less confident about playing it. Based on similarity search over the learned game avatar embeddings, GAE returns a list of heroes similar to Ursa, the top 3 being Troll Warlord, Sven, and Juggernaut. Finally, the player decides to go for Sven, since that is a hero he is experienced with, and Sven is also one of the top 5 heroes identified with the best overall synergy and opposition besides Ursa.

Analyzing the above example, all three DOTA2 players strongly agreed that Ursa is a very suitable choice, given that this player's team lacks burst physical damage. In addition, they each had different extra interpretations of the Ursa pick. For example, one player noted that Ursa could help the player finish the game early, disallowing the opposing team from elongating the game to the point where Anti-Mage shows his maximal advantage as a late-game Carry. Another player recognized that Ursa could increase team-fight capability, since Ursa is a Tank Carry hero who is durable in fights. All three players agreed that Sven is similar to Ursa, as both heroes output high burst physical damage. They also proposed that GAE recommendations can be used differently according to personal prioritization. For example, some players who strongly prefer skill familiarity can first use GAE to list their familiar heroes and then run synergy and opposition search.

In summary, GAE provides an interface to perform similarity, synergy, and opposition queries simultaneously. These capabilities can then be incorporated into downstream applications, giving users a white-box tool to help them better understand the game and make in-game choices that maximize the winning chance.
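The recommendation procedure in this example amounts to scoring every un-picked avatar with Eqn. 3 and ranking the candidates. The helper below is a simplified illustration with random parameters in place of a trained model, and its name is ours, not an existing API:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 20, 4                         # hypothetical pool of 20 avatars
A = rng.normal(size=(N, K)) * 0.1    # embeddings (random stand-ins)
P = rng.normal(size=(K, K)) * 0.1    # synergy matrix
Q = rng.normal(size=(K, K)) * 0.1    # opposition matrix
b = rng.normal(size=N) * 0.1         # avatar biases

def win_prob(red, blue):
    """Eqn. 3: probability that the red line-up beats the blue one."""
    score = b[red].sum() - b[blue].sum()
    for team, sign in ((red, 1), (blue, -1)):
        score += sign * sum(A[i] @ P @ A[j]
                            for i in team for j in team if i != j)
    score += sum(A[i] @ Q @ A[j] - A[j] @ Q @ A[i]
                 for i in red for j in blue)
    return 1.0 / (1.0 + np.exp(-score))

def recommend_last_pick(ally_picks, enemy_picks, top_k=5):
    """Rank every un-picked avatar by the win probability of the completed line-up."""
    taken = set(ally_picks) | set(enemy_picks)
    scored = [(win_prob(ally_picks + [c], enemy_picks), c)
              for c in range(N) if c not in taken]
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]

picks = recommend_last_pick([0, 1, 2, 3], [5, 6, 7, 8, 9])
```

A personalized variant, as discussed above, would intersect this ranked list with the player's familiar heroes before presenting it.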
Conclusions, Limitations and Future Works
Modeling synergy and opposition relationships between game avatars is an important task that helps players understand the game and make better decisions in forming effective teams. To tackle this task, our proposed embedding-based method models synergy and opposition relationships with game avatars encoded as vector representations. Our quantitative and qualitative analyses show that GAE is able to capture pairwise synergy and opposition relationships between game avatars that are sensible to human players. Moreover, the learned game avatar embeddings effectively capture important characteristics of game avatars, as similarity search based on these embeddings also correlates highly with human ratings. Our model opens new doors to many downstream tasks, such as similarity search on game avatars and personalized avatar recommendation.

There are several future directions that we will pursue. First, we want to study extensions of GAE that capture higher-order relationships involving more than two avatars, as well as the trade-offs between the performance improvement and the computation overhead incurred. Second, we are currently limited in our access to sensitive human player information. In the future, we hope to collect match data with richer player information and model player and game avatar embeddings within the same model.
References

[Agarwala and Pearce 2014] Agarwala, A., and Pearce, M. 2014. Learning Dota 2 team compositions. Technical report, Stanford University.

[Anagnostopoulos et al. 2012] Anagnostopoulos, A.; Becchetti, L.; Castillo, C.; Gionis, A.; and Leonardi, S. 2012. Online team formation in social networks. In Proceedings of the 21st International Conference on World Wide Web, 839-848. ACM.

[Bhattacharya and Sabik] Bhattacharya, R., and Sabik, A. Data-driven recommendation systems for multiplayer online battle arenas.

[Chen et al. 2016] Chen, Z.; Sun, Y.; Seif El-Nasr, M.; and Nguyen, T.-H. D. 2016. Player skill decomposition in multiplayer online battle arenas. In Meaningful Play.

[Duchi, Hazan, and Singer 2011] Duchi, J.; Hazan, E.; and Singer, Y. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research.

[Friedman 2001] Friedman, J. H. 2001. Greedy function approximation: A gradient boosting machine. Annals of Statistics.

[Jolliffe 2002] Jolliffe, I. 2002. Principal Component Analysis. Wiley Online Library.

[Kim et al. 2016] Kim, J.; Keegan, B. C.; Park, S.; and Oh, A. 2016. The proficiency-congruency dilemma: Virtual team design and performance in multiplayer online games. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 4351-4365. ACM.

[Kittur 2010] Kittur, A. 2010. Crowdsourcing, collaboration and creativity. ACM Crossroads.

[Kolda and Bader 2009] Kolda, T. G., and Bader, B. W. 2009. Tensor decompositions and applications. SIAM Review.

[Koren, Bell, and Volinsky 2009] Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix factorization techniques for recommender systems. Computer (8):30-37.

[Lappas, Liu, and Terzi 2009] Lappas, T.; Liu, K.; and Terzi, E. 2009. Finding a team of experts in social networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 467-476. ACM.

[Liemhetcharat and Veloso 2012] Liemhetcharat, S., and Veloso, M. 2012. Modeling and learning synergy for team formation with heterogeneous agents. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, 365-374. International Foundation for Autonomous Agents and Multiagent Systems.

[Maas et al. 2011] Maas, A. L.; Daly, R. E.; Pham, P. T.; Huang, D.; Ng, A. Y.; and Potts, C. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, 142-150. Association for Computational Linguistics.

[Maaten and Hinton 2008] Maaten, L. v. d., and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research.

[Mikolov et al. 2013] Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, 3111-3119.

[Minotti 2016] Minotti, M. 2016. Comparing MOBAs: League of Legends vs. Dota 2 vs. Smite vs. Heroes of the Storm. http://venturebeat.com/2015/07/15/comparing-mobas-league-of-legends-vs-dota-2-vs-smite-vs-heroes-of-the-storm/. Online; accessed May 2016.

[Neidhardt, Huang, and Contractor 2015] Neidhardt, J.; Huang, Y.; and Contractor, N. 2015. Team vs. team: Success factors in a multiplayer online battle arena game. In Academy of Management Proceedings, volume 2015, 18725. Academy of Management.

[Nguyen, Chen, and El-Nasr 2015] Nguyen, T.-H. D.; Chen, Z.; and El-Nasr, M. S. 2015. Analytics-based AI Techniques for Better Gaming Experience, volume 2 of Game AI Pro. Boca Raton, Florida: CRC Press.

[Pobiedina et al. 2013a] Pobiedina, N.; Neidhardt, J.; Calatrava Moreno, M. d. C.; Grad-Gyenge, L.; and Werthner, H. 2013a. On successful team formation: Statistical analysis of a multiplayer online game. 55-62. IEEE.

[Pobiedina et al. 2013b] Pobiedina, N.; Neidhardt, J.; Calatrava Moreno, M. d. C.; and Werthner, H. 2013b. Ranking factors of team success. In Proceedings of the 22nd International Conference on World Wide Web Companion, 1185-1194. International World Wide Web Conferences Steering Committee.

[Rahman et al. 2015] Rahman, H.; Thirumuruganathan, S.; Roy, S. B.; Amer-Yahia, S.; and Das, G. 2015. Worker skill estimation in team-based tasks. Proceedings of the VLDB Endowment, 995-1000.

[Roy et al. 2015] Roy, S. B.; Lykourentzou, I.; Thirumuruganathan, S.; Amer-Yahia, S.; and Das, G. 2015. Task assignment optimization in knowledge-intensive crowdsourcing. The VLDB Journal.

[Semenov et al. 2016] Semenov, A.; Romov, P.; Korolev, S.; Yashkov, D.; and Neklyudov, K. 2016. Performance of machine learning algorithms in predicting game outcome from drafts in Dota 2. In Analysis of Images, Social Networks and Texts, 26-37. Springer.

[Suznjevic, Matijasevic, and Konfic 2015] Suznjevic, M.; Matijasevic, M.; and Konfic, J. 2015. Application context based algorithm for player skill evaluation in MOBA games. In Network and Systems Support for Games (NetGames), 2015 International Workshop on, 1-6. IEEE.

[Tassi 2016] Tassi, P. 2016. Riot's 'League of Legends' reveals astonishing 27 million daily players, 67 million monthly. forbes.com/sites/insertcoin/2014/01/27/riots-league-of-legends-reveals-astonishing-27-million-daily-players-67-million-monthly/. Online; accessed May 2016.

[Yang, Qin, and Lei 2016] Yang, Y.; Qin, T.; and Lei, Y.-H. 2016. Real-time eSports match result prediction. arXiv preprint arXiv:1701.03162.