Diego Perez-Liebana
University of Essex
Publications
Featured research published by Diego Perez-Liebana.
IEEE Transactions on Computational Intelligence and AI in Games | 2016
Diego Perez-Liebana; Spyridon Samothrakis; Julian Togelius; Tom Schaul; Simon M. Lucas; Adrien Couëtoux; Jerry Lee; Chong-U Lim; Tommy Thompson
This paper presents the framework, rules, games, controllers, and results of the first General Video Game Playing Competition, held at the IEEE Conference on Computational Intelligence and Games in 2014. The competition proposes the challenge of creating controllers for general video game play, where a single agent must be able to play many different games, some of them unknown to the participants at the time of submitting their entries. This test can be seen as an approximation of general artificial intelligence, as the amount of game-dependent heuristics needs to be severely limited. The games employed are stochastic real-time scenarios (where the time budget to provide the next action is measured in milliseconds) with different winning conditions, scoring mechanisms, sprite types, and available actions for the player. It is the responsibility of the agents to discover the mechanics of each game, the requirements for obtaining a high score, and what is needed to finally achieve victory. This paper describes all controllers submitted to the competition, with an in-depth description of four of them by their authors, including the winner and the runner-up entries of the contest. The paper also analyzes the performance of the different approaches submitted, and finally proposes future tracks for the competition.
Genetic and Evolutionary Computation Conference | 2016
Ahmed Khalifa; Diego Perez-Liebana; Simon M. Lucas; Julian Togelius
This paper presents a framework and an initial study in general video game level generation, the problem of generating levels for not only a single game but for any game within a specified range. While existing level generators are tailored to a particular game, this new challenge requires generators to take into account the constraints and affordances of games that might not even have been designed when the generator was constructed. The framework presented here builds on the General Video Game AI framework (GVG-AI) and the Video Game Description Language (VGDL), in order to reap synergies from research activities connected to the General Video Game Playing Competition. The framework will also form the basis for a new track of this competition. In addition to the framework, the paper presents three general level generators and an empirical comparison of their qualities.
2016 8th Computer Science and Electronic Engineering (CEEC) | 2016
Raluca D. Gaina; Diego Perez-Liebana; Simon M. Lucas
This paper presents a new track of the General Video Game AI competition for generic Artificial Intelligence agents, featuring both competitive and cooperative real-time stochastic two-player games. The aim of the competition is to test agents directly against each other in more complex and dynamic environments, where the behaviour of the other player adds an extra source of uncertainty to the game. The framework, server functionality and general competition setup are analysed, and the results of experiments with several sample controllers are presented. The results indicate that Open Loop Monte Carlo Tree Search is currently the leading algorithm on this set of games.
IEEE Transactions on Computational Intelligence and AI in Games | 2017
Miguel Nicolau; Diego Perez-Liebana; Michael O'Neill; Anthony Brabazon
Computer games are highly dynamic environments, where players are faced with a multitude of potentially unseen scenarios. In this paper, AI controllers are applied to the Mario AI benchmark platform by using the grammatical evolution system to evolve behavior tree structures. These controllers are either evolved to deal with both navigation and reactiveness to elements of the game, or used in conjunction with a dynamic A* approach. The results obtained highlight the applicability of behavior trees as representations for evolutionary computation, and their flexibility for incorporating diverse algorithms to deal with specific aspects of bot control in game environments.
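To illustrate the representation being evolved, here is a minimal behavior-tree sketch; the node types and the toy Mario-style policy (`enemy_near`, `jump`, `right`) are illustrative assumptions, not the paper's implementation, and the grammatical-evolution step that produces such trees is not shown:

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    """Tick children in order; fail as soon as one fails."""
    def tick(state):
        for child in children:
            if child(state) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(*children):
    """Tick children in order; succeed as soon as one succeeds."""
    def tick(state):
        for child in children:
            if child(state) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def condition(pred):
    """Leaf that checks a predicate on the game state."""
    return lambda state: SUCCESS if pred(state) else FAILURE

def action(effect):
    """Leaf that applies an effect to the game state and succeeds."""
    def tick(state):
        effect(state)
        return SUCCESS
    return tick

# Toy Mario-style policy: jump when an enemy is near, otherwise walk right.
controller = selector(
    sequence(condition(lambda s: s["enemy_near"]),
             action(lambda s: s.update(move="jump"))),
    action(lambda s: s.update(move="right")),
)

s1 = {"enemy_near": True}
controller(s1)    # sets s1["move"] to "jump"
s2 = {"enemy_near": False}
controller(s2)    # sets s2["move"] to "right"
```

An evolutionary process would search over tree shapes and leaf choices, while the tick machinery above stays fixed.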
Congress on Evolutionary Computation | 2017
Kamolwan Kunanusont; Raluca D. Gaina; Jialin Liu; Diego Perez-Liebana; Simon M. Lucas
This paper describes a new evolutionary algorithm that is especially well suited to AI-Assisted Game Design. The approach adopted in this paper is to use observations of AI agents playing the game to estimate the game's quality. Some of the best agents for this purpose are General Video Game AI agents, since they can be deployed directly on a new game without game-specific tuning; these agents tend to be based on stochastic algorithms, which give robust but noisy results and tend to be expensive to run. This motivates the main contribution of the paper: the novel N-Tuple Bandit Evolutionary Algorithm, in which a model is used to estimate the fitness of unsampled points and a bandit approach is used to balance exploration and exploitation of the search space. Initial results on optimising a Space Battle game variant suggest that the algorithm offers far more robust results than the Random Mutation Hill Climber and a Biased Mutation variant, which are themselves known to offer competitive performance across a range of problems. Human players also gave subjective observations on the nature of the evolved games, indicating a preference for games generated by the N-Tuple algorithm.
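The bandit side of the algorithm rests on the classic exploration/exploitation trade-off. As a rough illustration (not the N-Tuple algorithm itself, whose model-based fitness estimation is not shown), here is a minimal UCB1 selection rule on a toy two-armed bandit; the names and payout probabilities are illustrative assumptions:

```python
import math
import random

def ucb1(counts, means, c=math.sqrt(2)):
    """UCB1 arm selection: favour high mean reward, but add an
    exploration bonus to arms that have been tried less often."""
    total = sum(counts)
    def score(i):
        if counts[i] == 0:
            return float("inf")       # try every arm at least once
        return means[i] + c * math.sqrt(math.log(total) / counts[i])
    return max(range(len(counts)), key=score)

# Toy two-armed bandit: arm 1 pays off far more often than arm 0.
rng = random.Random(0)
true_p = [0.2, 0.8]
counts, sums = [0, 0], [0.0, 0.0]
for _ in range(500):
    means = [s / n if n else 0.0 for s, n in zip(sums, counts)]
    arm = ucb1(counts, means)
    reward = 1.0 if rng.random() < true_p[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward
```

After a few hundred pulls the better arm dominates the pull counts while the weaker arm is still sampled occasionally, which is the balance the abstract refers to.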
European Conference on Applications of Evolutionary Computation | 2017
Raluca D. Gaina; Jialin Liu; Simon M. Lucas; Diego Perez-Liebana
Monte Carlo Tree Search techniques have generally dominated General Video Game Playing, but recent research has started looking at Evolutionary Algorithms and their potential to match, or even outperform, the level of play of tree search methods. Online or Rolling Horizon Evolution is one of the options available to evolve sequences of actions for planning in General Video Game Playing, but no research to date has explored the capabilities of the vanilla version of this algorithm across multiple games. This study critically analyses different configurations of population size and individual length on a set of 20 games from the General Video Game AI corpus. Distinctions are made between deterministic and stochastic games, and the implications of using larger time budgets are studied. Results show that there is scope for the use of these techniques, which in some configurations outperform Monte Carlo Tree Search, and also suggest that further research into these methods could boost their performance.
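As a rough sketch of the vanilla technique discussed, the following implements one decision step of a (1+1)-style Rolling Horizon Evolution on a toy one-dimensional domain; `simulate` and `evaluate` are stand-ins for a forward model and a score heuristic, and all names here are illustrative assumptions:

```python
import random

def rolling_horizon_step(state, actions, simulate, evaluate,
                         horizon=6, iters=100, rng=random):
    """One decision step of a vanilla (1+1) Rolling Horizon Evolution.

    Evolves a fixed-length action sequence against a forward model,
    then returns the first action of the best sequence found.
    """
    best = [rng.choice(actions) for _ in range(horizon)]
    best_val = evaluate(simulate(state, best))
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(horizon)] = rng.choice(actions)  # mutate one gene
        val = evaluate(simulate(state, cand))
        if val >= best_val:                                  # keep if no worse
            best, best_val = cand, val
    return best[0]

# Toy domain: walk along a line; the goal is to reach position 6.
simulate = lambda pos, plan: pos + sum(plan)   # apply the whole plan
evaluate = lambda pos: -abs(pos - 6)           # closer to 6 is better
first = rolling_horizon_step(0, [-1, 1], simulate, evaluate,
                             horizon=6, iters=200,
                             rng=random.Random(0))
```

In a real agent this step is re-run every frame within the millisecond budget, executing only the first action each time (the "rolling" horizon).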
Congress on Evolutionary Computation | 2017
Jialin Liu; Diego Perez-Liebana; Simon M. Lucas
The Random Mutation Hill-Climbing algorithm is a direct search technique mostly used in discrete domains. It repeatedly selects a random neighbour of the best-so-far solution and accepts that neighbour if it is at least as good. In this work, we propose a novel method to select the neighbour solution using a set of independent multi-armed bandit-style selection units, resulting in a bandit-based Random Mutation Hill-Climbing algorithm. The new algorithm significantly outperforms Random Mutation Hill-Climbing on both OneMax (in noise-free and noisy cases) and Royal Road problems (in the noise-free case). The algorithm shows particular promise for discrete optimisation problems where each fitness evaluation is expensive.
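The baseline procedure described above can be sketched in a few lines; this is a minimal illustration on OneMax, not the bandit-based variant proposed in the paper:

```python
import random

def rmhc(fitness, length, iters, seed=None):
    """Vanilla Random Mutation Hill-Climbing on a bit-string.

    Repeatedly flips one randomly chosen bit of the best-so-far
    solution and keeps the mutant if it is at least as good.
    """
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    best_fit = fitness(best)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(length)] ^= 1   # flip one random bit
        f = fitness(cand)
        if f >= best_fit:                  # accept ties as well
            best, best_fit = cand, f
    return best, best_fit

# OneMax: fitness is simply the number of ones in the string.
onemax = sum
solution, score = rmhc(onemax, length=20, iters=2000, seed=1)
```

The bandit-based variant replaces the uniformly random choice of which bit to flip with per-position selection units, so positions whose flips have paid off are tried more often.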
Congress on Evolutionary Computation | 2017
Jialin Liu; Julian Togelius; Diego Perez-Liebana; Simon M. Lucas
Most games have, or can be generalised to have, a number of parameters that may be varied in order to provide instances of games that lead to very different player experiences. The space of possible parameter settings can be seen as a search space, and we can therefore use a Random Mutation Hill Climbing algorithm or other search methods to find the parameter settings that induce the best games. One of the hardest parts of this approach is defining a suitable fitness function. In this paper we explore the possibility of using one of a growing set of General Video Game AI agents to perform automatic play-testing. This enables a very general approach to game evaluation based on estimating the skill-depth of a game. Agent-based play-testing is computationally expensive, so we compare two simple but efficient optimisation algorithms: the Random Mutation Hill-Climber and the Multi-Armed Bandit Random Mutation Hill-Climber. For the test game we use a space-battle game in order to provide a suitable balance between simulation speed and potential skill-depth. Results show that both algorithms are able to rapidly evolve game versions with significant skill-depth, but that choosing a suitable resampling number is essential in order to combat the effects of noise.
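The role of the resampling number mentioned above can be illustrated with a small wrapper that averages repeated evaluations of a noisy fitness function; the noisy OneMax example is an illustrative assumption, not the space-battle game from the paper:

```python
import random
import statistics

def resampled(fitness, k):
    """Wrap a noisy fitness function so each call averages k evaluations,
    shrinking the noise standard deviation by a factor of sqrt(k)."""
    return lambda x: statistics.fmean(fitness(x) for _ in range(k))

rng = random.Random(0)
noisy_onemax = lambda bits: sum(bits) + rng.gauss(0, 1)  # noisy evaluation
smooth = resampled(noisy_onemax, 16)   # noise std shrinks from 1 to 0.25
value = smooth([1] * 10)               # true fitness is 10
```

Choosing k trades evaluation cost against noise: too small and the hill climber accepts lucky mutants; too large and the evaluation budget is exhausted on resampling.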
Computational Intelligence and Games | 2017
Ahmed Khalifa; Michael Cerny Green; Diego Perez-Liebana; Julian Togelius
We introduce the General Video Game Rule Generation problem, and the eponymous software framework which will be used in a new track of the General Video Game AI (GVGAI) competition. The problem is, given a game level as input, to generate the rules of a game that fits that level. This can be seen as the inverse of the General Video Game Level Generation problem. Conceptualizing these two problems as separate helps break the very hard problem of generating complete games into smaller, more manageable subproblems. The proposed framework builds on the GVGAI software and thus asks the rule generator for rules defined in the Video Game Description Language. We describe the API, and three different rule generators: a random, a constructive and a search-based generator. Early results indicate that the constructive generator produces playable and somewhat interesting game rules but has a limited expressive range, whereas the search-based generator produces remarkably diverse rulesets, but of uneven quality.
Congress on Evolutionary Computation | 2016
Diego Perez-Liebana; Sanaz Mostaghim; Simon M. Lucas
The design of algorithms for Game AI agents usually focuses on the single objective of winning, or maximizing a given score. Even if the heuristic that guides the search (for reinforcement learning or evolutionary approaches) is composed of several factors, these typically provide a single numeric value (reward or fitness, respectively) to be optimized. Multi-Objective approaches are an alternative way of tackling these problems, as they try to optimize several objectives, often contradictory ones, at the same time. This paper proposes, for the first time, a study of Multi-Objective approaches for General Video Game playing, where the game to be played is not known a priori by the agent. The experimental study described here compares several algorithms in this setting, and the results suggest that Multi-Objective approaches can perform even better than their single-objective counterparts.
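The core notion behind multi-objective optimization is Pareto dominance: one solution beats another only if it is no worse on every objective and strictly better on at least one. A minimal sketch (illustrative, not the specific algorithms compared in the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated points among a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two hypothetical objectives, e.g. (score, exploration):
front = pareto_front([(1, 2), (2, 1), (0, 0), (2, 2)])
```

A multi-objective algorithm maintains a whole front of such trade-off solutions rather than collapsing everything into one scalar reward.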