Designing a mobile game to generate player data -- lessons learned
William Wallis, William Kavanagh, Alice Miller & Tim Storer
Department of Computing Science, University of Glasgow, Scotland. Email: [email protected]
KEYWORDS
Design, Prototyping, Datasets, Mobile Game Design
ABSTRACT
User-friendly tools have lowered the requirements of high-quality game design to the point where researchers without development experience can release their own games. However, there is no established best practice, as few games have been produced for research purposes. Having developed a mobile game without the guidance of similar projects, we realised the need to share our experience so that future researchers have a path to follow. Research into game balancing and system simulation required an experimental case study, which inspired the creation of "RPGLite", a multiplayer mobile game. In creating RPGLite with no development expertise we learned a series of lessons about effective amateur game development for research purposes. In this paper we reflect on the entire development process and present these lessons.
INTRODUCTION
Procuring datasets to validate theoretical research findings can be difficult. Industrial sources for this data rarely have aligned interests with the researchers who require them. Academic datasets can often be over-specialised to the domain of the team who originally released them. While there is a requirement for more generally applicable and well-documented datasets for academic use, simple modern tooling has allowed researchers to develop their own systems to obtain well-scoped datasets under their own control. Having developed techniques for generating synthetic gameplay data, we required an appropriate dataset for comparison, so we developed a mobile game to create it for us. This provided the opportunity for us to design a high-quality game to generate the data we required cheaply and easily. Since its release in April 2020, RPGLite (available from https://rpglite.app/) has been more successful than we anticipated, populating a substantial dataset which will be of great value to our own research and, we believe, to the wider research community.

In this paper we detail our experience of creating RPGLite, including its planning, implementation, testing and deployment. The motivation for sharing this experience report was our own frustration at having nothing similar to support us when we embarked on this project. In providing reflections upon the successes and failures of our approach, we hope that this paper can guide other researchers considering a similar project.

The key contributions of this paper include: (i) descriptions of the lessons learned in developing a mobile game for research purposes; (ii) an outline of how a similar application can be released with no funding or development experience, and; (iii) a frank discussion of the mistakes made and an analysis of how we could have produced a richer source of data more efficiently.

In the following section we describe why we needed to generate this dataset rather than use data from an already existing game.
We go on to discuss the design of the game and the resulting implementation. In the body of the paper we present the four key lessons learned from the experience: how we came to learn them and how future researchers can use them to assist their development processes. Finally, we summarise our contribution and detail future work which will follow as a result of our data collection from RPGLite.
MOTIVATION
In recent research we have developed an approach which uses model checking to analyse the balance and metagame development of a game. We refer to this approach as Chained Strategy Generation (CSG) (Kavanagh et al. 2019). We use the PRISM model checker (Kwiatkowska et al. 2011) and the PRISM-games extension (Kwiatkowska et al. 2018), a probabilistic engine for analysis of various Markov models, including Discrete Time Markov Chains (DTMCs), which are purely probabilistic, and Markov Decision Processes (MDPs), which also involve non-deterministic choice. PRISM allows us to specify quantitative properties such as "what is the probability that event e happens?" (for DTMCs), or "for all possible sequences of choices, what is the greatest probability that event e happens?" (for MDPs). In our approach we define a model representing our game and use PRISM to determine player strategies of interest (here a strategy corresponds to a sequence of choices). For example, to determine a strategy for player 1 that corresponds to the best probability of winning, we check the property "what is the maximum likelihood that player 1 wins the game?". As well as returning this maximum probability, the model checker also allows us to extract the player strategy that achieves it. This process is known as strategy synthesis (Kwiatkowska and Parker 2016) and is used systematically throughout the CSG process, in which we examine how strategies evolve over time as players adopt optimal strategies. Model checking is computationally expensive and so this approach would not be suitable for a more elaborate game, for example, where multiple objects have highly-precise 3D positions. Although PRISM has been used to verify soundness properties in simple 2D games (Rezin et al. 2018), most modern games are too complex to be modelled accurately in this way without overly compromising abstractions.
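The idea behind the maximum-win-probability query and strategy synthesis can be illustrated outside PRISM with plain value iteration on a toy MDP. This is only a sketch: the states, actions and probabilities below are invented for illustration and bear no relation to RPGLite's actual model.

```python
# Toy MDP: state -> action -> list of (probability, next_state).
# We compute the maximum probability of reaching the "win" state by value
# iteration, then extract the memoryless strategy that achieves it --
# the essence of a Pmax-style query followed by strategy synthesis.
MDP = {
    "start": {"aggressive": [(0.6, "win"), (0.4, "lose")],
              "cautious":   [(0.5, "mid"), (0.5, "start")]},
    "mid":   {"press":      [(0.8, "win"), (0.2, "lose")]},
    "win":   {},   # absorbing
    "lose":  {},   # absorbing
}

def max_win_probability(mdp, goal="win", iterations=1000):
    # The goal state has value 1; all others start at 0.
    value = {s: (1.0 if s == goal else 0.0) for s in mdp}
    for _ in range(iterations):
        for s, actions in mdp.items():
            if s == goal or not actions:
                continue
            # Best expected value over the available (non-deterministic) actions.
            value[s] = max(sum(p * value[t] for p, t in outcomes)
                           for outcomes in actions.values())
    # Strategy extraction: in each state, pick the action achieving the maximum.
    strategy = {}
    for s, actions in mdp.items():
        if actions and s != goal:
            strategy[s] = max(actions,
                              key=lambda a: sum(p * value[t] for p, t in actions[a]))
    return value, strategy

value, strategy = max_win_probability(MDP)
```

Here the "cautious" action is optimal from "start" despite its lower immediate success chance, because it can be retried: exactly the kind of non-obvious optimal play that strategy synthesis surfaces.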
For model checking to be feasible on current hardware, a system must have no more than roughly 10 states.

In order to demonstrate that the outcomes of CSG represent a realistic evolution of a game, we required data from a real game that was elaborate enough to require considered decision making from players without being so complex as to prevent the use of model checking. We also required a system where player data could be compared for multiple configurations to allow a comparison of their respective balance. Finding a pre-existing game satisfying both of these requirements seemed unlikely, so we extended an existing case study into something more akin to a real game, with a view to developing it as a mobile application from which to collect data. This way we could perform CSG analysis (on models) to find candidate configurations that promised theoretical levels of balance, before releasing the game and testing how they performed in practice. Having control of the game, and the data it allowed us to collect, put us in the unique position of being able to design a system to generate data that was not only useful to us (in terms of our game balance research), but also in its own right in the context of system modelling. It is our intention to publish our dataset in full in the near future. Real-world datasets of sufficiently well specified systems are rare, so we will specify our system and the nature of the data it produces in as much detail as possible; however, this is outside the scope of this paper.

DESIGN DETAILS & REQUIREMENTS
RPGLite, the game, is defined by its rules, mechanics and configuration. We present these here. In later sections, "RPGLite" is used solely to refer to RPGLite, the application.

Rules
RPGLite is a two-player, turn-based game in which each player chooses a pair of unique characters from a pool of eight. Each character has a unique action and three attributes: health, accuracy and damage. Some have additional attributes described by their action. On their turn, a player chooses the action of one of their alive characters and targets an opposing character with their action. That action will succeed or fail based on the acting character's accuracy value. Players can choose to skip on their turn or to forfeit the game at any time. A coin is flipped to decide which player goes first, and the winner is the player who is first to reduce both of their opposing characters' health values to 0.
Mechanics
The mechanics of RPGLite are encapsulated in theeight characters and their actions:
Knight: targets a single opponent;
Archer: targets up to two opponents;
Healer: targets a single opponent and heals a damaged ally or themselves;
Rogue: targets a single opponent and does additional damage to heavily damaged targets;
Wizard: targets a single opponent and stuns them, preventing their action from being used on their subsequent turn;
Barbarian: targets a single opponent and does additional damage when heavily damaged themselves;
Monk: targets a single opponent and continues their turn until a target is missed, and;
Gunner: targets a single opponent and does some damage even on failed actions.

The additional attributes needed to describe the characters fully are the heal value of the Healer, the heavily damaged value for the Rogue (the execute range), the heavily damaged value for the Barbarian (the rage threshold), the increased damage value for the Barbarian (the rage damage) and the miss damage (graze) for the Gunner.

Configuration
In total there are 29 attributes for the characters in RPGLite. A configuration for RPGLite is a set of values for each attribute. These attributes are the parameters we tune in an attempt to balance the game. The application was released with a configuration which we suspected of being balanced based on automated analysis. After a significant number of games were played, the application was updated with a new configuration (dubbed "season two"), with the aim of maintaining player interest. The new configuration had altered attributes for seven of the characters; for example, the Healer's health value decreased from 10 to 9 and their accuracy increased from 0.85 to 0.9. Only the Wizard remained the same between configurations.
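A configuration can be represented as one value per tunable attribute, which makes comparing seasons a simple diff. In the sketch below, the Healer figures are the changes reported above; every other value is an invented placeholder, not a released configuration.

```python
# A configuration assigns a value to every tunable attribute. Only the
# Healer's health/accuracy changes come from the paper; the remaining
# numbers are illustrative placeholders.
season_one = {
    "healer": {"health": 10, "accuracy": 0.85, "damage": 2, "heal": 3},
    "wizard": {"health": 11, "accuracy": 0.80, "damage": 3},
}
season_two = {
    "healer": {"health": 9, "accuracy": 0.90, "damage": 2, "heal": 3},
    "wizard": {"health": 11, "accuracy": 0.80, "damage": 3},
}

def config_diff(old, new):
    """Return (character, attribute, old_value, new_value) for every change."""
    return [(c, attr, old[c][attr], new[c][attr])
            for c in old
            for attr in old[c]
            if old[c][attr] != new[c][attr]]
```

Keeping the configuration as plain data like this is what allows the same parameters to drive both the released game and the model-checked balance analysis.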
THE FINAL PRODUCT
As context for the reflections in the rest of this paper, it is necessary to describe the application that was actually built. This finished product is a combination of implementation details and the design decisions that led to their implementation.
Design
RPGLite was designed to be simple to understand and play, so as to keep players interested and reduce barriers to entry. On logging in, players are presented with five "slots" for games, each of which can be in a number of states:

• Unused, waiting for a game to be made
• Added to a queue of players waiting for a random match to be made
• In an active game, either
  – waiting for a move to be made by the player, or
  – waiting for an opponent to make a move

On starting a new game, players choose two unique characters from a set of eight and are presented with cards representing their chosen pair opposite their opponent's. Animations of character cards are used to indicate whether the player can make a move, as well as an on-screen prompt. The application was designed to be as frictionless as possible to use, although, as discussed in lesson 4, we found some users were still confused and we simplified the design further via iteration.

Figure 1: Screenshots of the released application showing a player's home screen (left) and a game in progress (right)

Additional features were implemented specifically to encourage player retention. As can be seen in fig. 2, peripheral systems around the core game, such as medals for players to earn and leaderboards to climb, were intended to give players goals to achieve and a reason to stay invested in the game.

As players were technically experiment participants, it was necessary to have them "sign" an ethics-approved consent form and be delivered an information sheet. We implemented this by requiring players to scroll through a panel containing their consent form and information sheet on registration, and explicitly tick boxes confirming that they consented to all necessary parts and were over 15 years old.
Implementation
RPGLite was implemented as a mobile game written in Unity. We chose Unity for its ability to compile the same project to both iOS and Android and its active community with numerous video tutorials for beginners. RPGLite made requests to a public-facing REST API, written in Python 3 and run on university-provided servers with a firewall under the control of the institution's IT services. This server initially handed data processed in the client to the database to avoid a direct connection (and the risk of exposing the database publicly), but became a larger aspect of the engineering as discussed in lesson 3. The project collected and stored its data within a MongoDB database also hosted locally within the university.
LESSON 1: RESIST TEMPTATION
At many points in the development process, we found it difficult to constrain the feature set of the end product. The unbounded nature of the project led to additional features being implemented as development became a "labour of love". These delayed the delivery of the game, and few new ideas were actually discarded. Only some of these features were beneficial to the player experience. An illustrative example is the comparison of two such "peripheral systems": the leaderboard and players' profiles. Players load the leaderboard roughly three times as often as their profiles and, anecdotally, it is a central component of player retention. An equal amount of effort was spent on each. During development it was impossible to know how often a feature would be used in practice.

Figure 2: Screenshots of peripheral systems implemented towards the end of development. The leaderboard screen (left) shows a player's skill points compared to all others. The player profile screen (right) displays usage data for all characters and the medals earned.

The ideas that came to us during development were sometimes essential to the project's success, and to resist all of these would have resulted in a poorer product. The danger we identified in our own endeavours was a desire to implement these ideas for their own sake, and not for their benefit to our end product. New ideas must be abandoned where their benefit does not outweigh the additional time they would demand. An agile development approach is best in these scenarios, where requirements naturally change over time.

Much like implementing new features, we found that the refinement of existing features risked an emotional investment. Existing design components, such as colour schemes and layouts of minor UI elements, were constantly changed prior to release.
We found the adage "don't let the perfect be the enemy of the good" useful in such moments.

We struggled to resist temptation because of our inexperience with app development, our lack of a thorough plan and the fact that we were co-developing and therefore reticent to shoot down each other's ideas. For other developers in similar situations to our own, we recommend a more structured approach. First, a project should have a plan produced at its inception, which is maintained throughout the development process. Second, we suggest adding to this plan a "margin": a block of unallocated time at the end of the project that can be spent on developing new ideas. As development progresses, this margin can be "spent" on new ideas or refinements to existing design elements. This facilitates necessary discussions by framing them within the context of a shared resource.
LESSON 2: EMPLOY AVAILABLE RESEARCH NETWORKS
Advertising is a major cost of app development; new users are expensive. With no money for player recruitment we were forced to promote the application in a similar way to other research experiments within a university context: through participant calls in mailing lists and departmental announcements. Beyond this we sought out opportunities for free publicity from within our research community. We found that there is an appetite for open data, and by encouraging people to play our game "for science" our promotions were better received. We anticipated undergraduate students would make up the majority of our users. However, while promotions targeted at undergraduates introduced a large number of users, those users tended to complete only a few games before stopping. Because our research investigates how players learn over time, we needed high player retention to allow users time to "learn" the system. We observed that retention was highest among players who had a vested interest in us or the research itself, or when the game was adopted by users from a social clique.

In comparing events that we expected would increase player numbers with their effects on new users and games played (a suitable measure of data generation) in fig. 3, the difference in retention between the recruited groups is pronounced. Over half of our users failed to successfully complete a single game, and several users installed the app without registering an account.

Figure 3: The rate of user acquisition in the weeks following RPGLite's release. Important events are also marked: promotion of the application through the Scottish International Game Developers Association branch, an email to Computing Science undergraduates, the date from which UK citizens were told to stay inside if at all possible, the time of a major update to the game and an email to all Science and Engineering undergraduates at the University of Glasgow.

We are fortunate enough to know the chair of the Scottish branch of the International Game Developers Association (IGDA), who kindly shared an advert for the game. The increase in the speed of game completions accompanying the influx of new users from his involvement shows that those players were valuable data generators. The figure also shows that the large intake of undergraduate students from Science and Engineering only caused a brief uptake in activity, which quickly dissipated. We believe this is due to either the lack of a relationship with us as the developers or of interest in games research. We also assumed that a large update might increase activity, but found that not to be the case. A single large update changing the configuration of the game, adding seasonal leaderboards and improving existing features had no noticeable effect on the number of games completed. The extent to which our data comes from a small subset of users is shown in fig. 4.

Figure 4: The number of users to have played at least a given number of games.

Throughout development we sought advice from those around us with relevant experience. Many of our university colleagues had been involved in various aspects of application development and deployment, and advised us throughout. For example, a web designer gave advice on UX design and a gamification researcher suggested various incentivisation systems. We also relied heavily on our department's IT services team for support in deploying the middleware server, and on administrative staff for promoting the app once it had been released.
Application development is multifaceted, and the support of our peers was important in areas where our skills were insufficient. Without the extensive use of the research communities we belong to, RPGLite would have been an inferior application, producing a less rich dataset. There are numerous skills required to develop a system that people will use willingly. Engaging peers early in the process and being clear in your aims will highlight the areas in which you need support. Where user retention is important your research community is vital, as they already have a connection to you which will see them invested in the project from the outset. Your individual network is unlikely to be enough to generate a significant dataset, so we recommend engaging colleagues to advertise on your behalf. RPGLite never sought to compete with professionally developed games, but through our various communities we managed to generate enough interest for a steady playerbase.
LESSON 3: THE SMALLER THE CLIENT, THE BETTER
The one aspect of RPGLite's implementation that we most regret is the amount of game logic delivered to players in the mobile client rather than kept on the server. There are many reasons for this, the main one being that the server could be replaced immediately if a bug were found, in contrast to compiling, re-installing and re-testing attempts to fix the given bug were it to reside in the client. Fixing server-side bugs allowed more rapid iteration when fixing those with origins we did not understand.

The need for moving logic out of our client became apparent after we had pushed production code to app stores and had real players taking part in our experiment. A particularly dedicated player discovered a bug where, after playing enough games, characters that had been unlocked through repeated play would become locked again and could no longer be accessed. Had this bug been in the server, the issue could have been fixed and a new version deployed in seconds for lightweight clients to connect to. With our larger client, this required testing in Unity, testing on-device (to ensure that there weren't platform-specific bugs), and deployment to app stores for approval and distribution. This process took days, even though the bug was trivial to fix.

Large clients also risk introducing a duplication of code when paired with a secure server. To validate game logic computed by a client, servers must replicate much of the processing the client previously performed, to verify that a malicious user hasn't supplied corrupted game states. This process requires the implementation of game logic within the server. As a result, a secure server must include game logic regardless of whether the client does. This means spending time, an already scarce resource, on duplicated code.
This is another reason we recommend developing a lightweight client, leaving the majority of computation to a larger server.

When we realised that we had produced a large client, we made efforts to move to a more server-centric design. For example, we considered sending push notifications via APIs written directly into our client. However, the flexibility and control of implementing this server-side caused us to move our notification code to the server. After this, we implemented much of our peripheral systems logic in the server, including the leaderboard, medal logic, password reset, and much of the matchmaking systems.

Overall, we found that the areas where the client was lightweight allowed more rapid prototyping and bug-fixing. We recommend other projects be constructed with a small client for these reasons, as well as to avoid duplication of code and to reduce application size by limiting client-side dependencies.
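The server-authoritative pattern this lesson argues for can be sketched as follows: the client sends only its intent (which character acts on which target), and the server, holding the authoritative game state, decides legality and outcome. The state layout and field names here are hypothetical, not RPGLite's actual API.

```python
# Sketch of server-side move validation for a thin client. The client never
# submits a computed game state -- only (actor, target) -- so there is
# nothing for a malicious user to corrupt.

class IllegalMove(Exception):
    """Raised when a client request violates the server-held rules."""

def apply_move(game, player_id, actor, target, roll):
    """Validate a move against authoritative state, then apply it."""
    if game["to_move"] != player_id:
        raise IllegalMove("not this player's turn")
    if game["health"][actor] <= 0:
        raise IllegalMove("cannot act with a defeated character")
    if game["owner"][actor] != player_id or game["owner"][target] == player_id:
        raise IllegalMove("must act with own character against an opponent")
    if roll < game["accuracy"][actor]:                 # the action succeeds
        game["health"][target] = max(0, game["health"][target]
                                        - game["damage"][actor])
    game["to_move"] = game["opponent"][player_id]      # pass the turn
    return game
```

Because all of this runs server-side, a bug in `apply_move` can be fixed with a redeploy in seconds, whereas the same logic compiled into the client would require the app-store cycle described above.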
LESSON 4: TEST EARLY, TEST OFTEN
The best source of feedback and advice we received was from the shared document we circulated alongside our two private test releases. We specifically chose friends and colleagues who knew us well enough to be able to have honest discussions on the weaker aspects of the application. We carried out the testing by sharing Android application packages with Android users and inviting iOS users to participate in private beta testing via Apple's TestFlight system. We were able to implement the majority of the suggestions made, many of which have become central components in the final game. This stage highlighted the importance of push notifications and streamlining the user experience. Specifically, our test users found that they would often forget to check whether they had moves to make. Before testing we had investigated the feasibility of implementing push notifications, but were unsure if they were worth the time to develop. Following testing feedback, we made this a priority.

The user interface, colour scheme and card art of the final application are a result of feedback from our test users. As shown in fig. 5, the cards went through a series of designs. Responding to test feedback that character cards were too complicated, the final designs were significantly simpler. We also received specific advice, such as blacking out the action description of a stunned character to make it clear that they could not act. Having an ongoing dialogue throughout development with invested parties meant that we could rapidly pivot to accommodate their suggestions.

Figure 5: Evolution of the Barbarian card artwork throughout the design process from initial prototype (left), to internal testing version (centre) and current version (right)

From analysis of our test data we discovered a gap between the data we were collecting and the useful information we could capture. Specifically, we realised we could log user interactions with the application, noting the actions they performed, when they performed them and what the result was, if any (for example, "a user searched for another by their username and found they had no free game slots"). This idea was a result of realising that, even amongst our dozen test users, there were distinct styles of interacting with the application. We thought that classifying these interaction styles would be of interest.

Testing allowed us to identify areas in both the application and the dataset that were lacking. We would encourage future researchers to get early versions of their applications into the hands of testers multiple times before finalising their system. There were many improvements made to RPGLite specifically because we had others test it, and could assess it across a suite of target devices. We structured the format of the feedback we received from testers in our shared document by grouping requested feedback under specific headings and directing testers to features in which we lacked confidence. This helped to scaffold the insightful conversations amongst our test users, and we strongly recommend others make an effort to facilitate a similar dialogue.
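The interaction-logging idea above amounts to recording who did what, when, and with what result, as plain documents. The event names and fields below are invented for illustration; they are not RPGLite's actual schema.

```python
import json
import time

# Each UI interaction becomes one JSON-serialisable document: ideal for
# appending to a MongoDB collection and for later classification of
# interaction styles.

def log_interaction(log, user, action, result=None, now=time.time):
    """Append one interaction event to the (in-memory) log and return it."""
    event = {"user": user, "action": action,
             "result": result, "timestamp": now()}
    log.append(event)
    return event

log = []
event = log_interaction(log, "example_user", "search_user",
                        result="target has no free game slots",
                        now=lambda: 1586000000.0)   # fixed clock for the demo
serialised = json.dumps(event)                      # plain JSON, store-ready
```

Injecting the clock (`now`) keeps the logger testable; in production the default `time.time` is used.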
CONCLUSION
In releasing RPGLite we learned several lessons about the realities of mobile game development within research. We have outlined our key insights and hope that these will be helpful to researchers developing similar tools. To summarise, the lessons that we learned are: to beware of scope creep and lengthy feature refinement; to utilise one's research community for their expertise and willingness to contribute; to structure the application to permit rapid bug-fixing and to avoid duplication of code, and; to test as soon as you have a workable build and to continue testing up until release.

Pausing our research to develop a mobile game was an atypical activity. We hope that these observations are helpful to other researchers developing similar projects. If they are, we encourage them to document the methodologies they follow for building data-generating games, and the lessons they learned doing so, for the benefit of others engaged in similar projects.
FUTURE WORK
This paper details the experience of developing a mobile game for data collection. The next stage of our research is the processing and analysis of this dataset. We intend to explore many research questions using it, with some pertaining to the dataset itself and the analysis of optimal play, and others to the accurate simulation of RPGLite players.

We will release the full dataset collected by RPGLite alongside the code constituting the game client and server in a future publication. This will include collections of all players, all games played, and all interactions recorded within the application. In addition, this dataset will include complete information about the games played, such as moves made, characters chosen, and other details used in our own research. These collections include all the attributes we envisaged as being useful to future research. For example, a player document includes their username, played/won counts for each character, other players they have lost games against, skill points, and more. We intend to omit only sensitive details, as all collected data is anonymised, and users have indicated through our registration process their consent for collected data to be disseminated through the academic community in the spirit of open science.
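As an illustration of the player-document fields listed above (username, per-character played/won counts, opponents lost to, skill points), a hypothetical document and a simple derived statistic might look as follows. The values and exact field names are invented, not entries from the real dataset.

```python
# Hypothetical shape of one player document from the released collections.
player = {
    "username": "example_player",
    "skill_points": 1042,
    "characters": {
        "knight": {"played": 30, "won": 18},
        "gunner": {"played": 12, "won": 5},
    },
    "lost_against": ["another_player"],
}

def win_rates(doc):
    """Per-character win rate, skipping characters the player never used."""
    return {name: counts["won"] / counts["played"]
            for name, counts in doc["characters"].items()
            if counts["played"]}
```

Keeping documents self-contained like this means each analysis question (win rates, opponent graphs, skill distributions) reduces to a straightforward traversal of one collection.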
System Simulation
Datasets sourced from sufficiently scaled and well-detailed systems are rare. Some are made available for academic use (Van Dongen 2015), but available data typically originates from large industrial systems lacking public specification for competitive reasons, or from well-scaled systems which lack the supporting detail to be useful. We are therefore interested in taking small datasets from systems of a manageable size, and producing supplementary synthetic data which appears plausibly realistic. We believe this data can be produced by an application of aspect-oriented programming (Wallis and Storer 2018a). A small, naive simulation of behaviour is modified via applied aspects to introduce errors and improvements. We are in the process of developing simulations of RPGLite play and aspects to improve the simulation's realism. We aim to verify that this produces plausibly realistic synthetic datasets by comparison with RPGLite's empirically sourced data.

Assuming this work is successful, we intend to show that aspects can "fit" themselves to real-world data. We expect these to produce datasets with optimal similarity to empirical counterparts via the application of genetic algorithms on their parameters (Wallis and Storer 2018b). A corollary of this approach would be that, in addition to highly realistic simulations, aspect parameters would then describe the nature of real-world agents. This process could then be used as a lens through which to analyse actual behaviour, weighing various influences by their importance.
Game Development and Player Analysis
As described in the motivation, we have developed tools that use model checking to assess game balance without gameplay data. RPGLite was originally intended solely to verify this process with both quantitative analysis and qualitative player feedback. Based on the findings of our model checking analysis, we believe both of the configurations released for RPGLite are balanced, but one is "more balanced" than the other. Calculating the extent of this and comparing our metagame predictions to what was observed in the dataset as players explored RPGLite will measure the validity of our approach.

RPGLite is a bounded system that can be model checked, which allows for highly specific analysis of player actions. We can calculate the cost of any move made in the game as the difference between the player's probability of winning had they chosen the best move available and their probability of winning after the move they actually made. By comparing the costs of the moves a player makes over time we can measure their rate of learning without considering their opponents. The effect of having definitive measures of player mistakes on gameplay analysis is a research area of great interest to us. This could help us answer questions about the situations in which players make mistakes and what causes them. Beyond games research, this could potentially lead to aiding the design of systems which aim to minimise human interaction errors.
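The move-cost measure described above is easy to state concretely: given the model-checked win probability of the position after each available move, the cost of the chosen move is its shortfall from the best option, and a player's learning can be tracked as the trend of their mean cost per game. The probabilities below are invented for illustration, not model-checked RPGLite values.

```python
# cost(move) = P(win | best available move) - P(win | chosen move).
# A cost of 0 means the player chose optimally.

def move_cost(win_prob_after, chosen):
    """Shortfall of the chosen move from the best move available."""
    return max(win_prob_after.values()) - win_prob_after[chosen]

def learning_curve(games):
    """Mean move cost per game; a downward trend suggests learning."""
    return [sum(costs) / len(costs) for costs in games]
```

Because the costs come from exhaustive model checking rather than heuristics, this gives the "definitive measure of player mistakes" the paragraph refers to, independent of opponent strength.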
ACKNOWLEDGEMENTS
We could not have released RPGLite without significant input from our colleagues and friends. In particular, we would like to acknowledge Ellen Wallace, Marta Araldo, Justin Nichol, Craig Reilly, Adam Elsbury, Alistair Morrison, Chris McGlashan, Frances Cooper, Brian McKenna, and our test players. The work was partly supported by Obashi Technologies.
REFERENCES
Kavanagh W.J.; Miller A.; Norman G.; and Andrei O., 2019. Balancing Turn-Based Games with Chained Strategy Generation. IEEE Transactions on Games.

Kwiatkowska M.; Parker D.; and Wiltsche C., 2018. PRISM-games: verification and strategy synthesis for stochastic multi-player games with multiple objectives. STTT, 20, no. 2, 195–210.

Kwiatkowska M.Z.; Norman G.; and Parker D., 2011. PRISM 4.0: Verification of Probabilistic Real-time Systems. In Proc. Int. Conf. Computer Aided Verification (CAV'11). Springer, vol. 6806, 585–591.

Kwiatkowska M.Z. and Parker D., 2016. Automated Verification and Strategy Synthesis for Probabilistic Systems. In Proceedings of Automated Technology for Verification and Analysis (ATVA'16). Springer, 5–52.

Rezin R.; Afanasyev I.; Mazzara M.; and Rivera V., 2018. Model checking in multiplayer games development. In . IEEE, 826–833.

Van Dongen, B.F. (Boudewijn), 2015. BPI Challenge 2015. doi:10.4121/UUID:31A308EF-C844-48DA-948C-305D167A0EC1. URL https://data.4tu.nl/repository/uuid:31a308ef-c844-48da-948c-305d167a0ec1.

Wallis T. and Storer T., 2018a. Modelling realistic user behaviour in information systems simulations as fuzzing aspects. In International Conference on Advanced Information Systems Engineering. Springer, 254–268.

Wallis T. and Storer T., 2018b.