Antonio Fernández-Ares
University of Granada
Publications
Featured research published by Antonio Fernández-Ares.
Journal of Computer Science and Technology | 2012
Antonio M. Mora; Antonio Fernández-Ares; Juan J. Merelo; Pablo García-Sánchez; Carlos M. Fernandes
This paper investigates the performance and the results of an evolutionary algorithm (EA) specifically designed for evolving the decision engine of a program (called, in this context, a bot) that plays Planet Wars. This game, which was chosen for the Google Artificial Intelligence Challenge in 2010, requires the bot to deal with multiple target planets while achieving a certain degree of adaptability in order to defeat different opponents in different scenarios. The decision engine of the bot is initially based on a set of rules defined after an empirical study, and a genetic algorithm (GA) is used to tune the set of constants, weights and probabilities that those rules include, and therefore the general behaviour of the bot. Then, the bot is supplied with the evolved decision engine and the results obtained when competing with other bots (a bot offered by Google as a sparring partner, and a scripted bot with a pre-established behaviour) are thoroughly analysed. The evaluation of the candidate solutions is based on the result of non-deterministic battles (and environmental interactions) against other bots, whose outcome depends on random draws as well as on the opponents' actions. The proposed GA must therefore deal with a noisy fitness function. After analysing the effects of this noise, we conclude that tackling randomness via repeated combats and re-evaluations reduces its effect and makes the GA a highly valuable approach for solving this problem.
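As a rough illustration of the re-evaluation strategy described in this abstract, the sketch below averages the outcome of several non-deterministic battles per evaluation and re-scores every individual each generation. The play_battle stub, the chromosome length and all numeric constants are placeholders, not the authors' actual implementation.

```python
# Hypothetical sketch: taming a noisy fitness by averaging repeated battles
# and re-evaluating survivors every generation. Not the paper's code.
import random

N_BATTLES = 5          # combats averaged per evaluation (assumed value)
CHROMOSOME_LEN = 8     # constants/weights/probabilities of the rule set

def play_battle(weights, opponent):
    """Placeholder for one non-deterministic Planet Wars match.
    Returns a score in [0, 1]; a real implementation would run the game."""
    return random.random()  # stand-in for the real game outcome

def noisy_fitness(weights, opponents):
    """Average the outcome of several battles against every opponent."""
    results = [play_battle(weights, opp)
               for opp in opponents
               for _ in range(N_BATTLES)]
    return sum(results) / len(results)

def evolve(pop_size=32, generations=50, opponents=("google_bot", "scripted_bot")):
    population = [[random.random() for _ in range(CHROMOSOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Re-evaluate every individual each generation (including survivors),
        # so a lucky streak in earlier battles does not persist.
        scored = sorted(population, key=lambda w: noisy_fitness(w, opponents),
                        reverse=True)
        parents = scored[:pop_size // 2]
        offspring = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return max(population, key=lambda w: noisy_fitness(w, opponents))
```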
Congress on Evolutionary Computation | 2011
Antonio Fernández-Ares; Antonio M. Mora; J. J. Merelo; Pablo García-Sánchez; Carlos M. Fernandes
This paper describes an Evolutionary Algorithm for evolving the decision engine of a bot designed to play the Planet Wars game. This game, chosen for the Google Artificial Intelligence Challenge in 2010, requires the artificial player to deal with multiple objectives while achieving a certain degree of adaptability in order to defeat different opponents in different scenarios. The decision engine of the bot is based on a set of rules defined after an empirical study. An Evolutionary Algorithm is then used to tune the set of constants, weights and probabilities that define the rules and, therefore, the global behavior of the bot. The paper describes the Evolutionary Algorithm and the results attained by the decision engine when competing with other bots. The proposed bot defeated a baseline bot in most of the playing environments and finished in the top 20% of the Google Artificial Intelligence competition.
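To make the idea of a rule set parameterised by evolved constants more concrete, here is a minimal, hypothetical sketch of a weighted target-selection rule; the planet features, weight names and threshold are assumptions for illustration and do not reproduce the paper's rules.

```python
# Illustrative sketch only: a rule whose constants and weights are the values
# an EA would tune. Field and weight names are invented.
from dataclasses import dataclass

@dataclass
class Planet:
    ships: int        # fleet currently stationed on the planet
    growth: int       # ships produced per turn
    distance: float   # turns needed to reach it from our closest planet

def score_target(planet: Planet, weights: dict) -> float:
    """Weighted combination of planet features; the weights come from the
    evolved chromosome."""
    return (weights["w_growth"] * planet.growth
            - weights["w_ships"] * planet.ships
            - weights["w_distance"] * planet.distance)

def choose_target(planets, weights):
    """Rule: attack the highest-scoring planet if it scores above a tuned
    threshold; otherwise do nothing this turn (return None)."""
    best = max(planets, key=lambda p: score_target(p, weights), default=None)
    if best is not None and score_target(best, weights) > weights["threshold"]:
        return best
    return None

# Example chromosome decoded into named weights (values are invented):
weights = {"w_growth": 2.0, "w_ships": 0.5, "w_distance": 1.0, "threshold": 0.0}
target = choose_target([Planet(10, 3, 4.0), Planet(40, 5, 2.0)], weights)
```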
Computational Intelligence and Games | 2012
Antonio Fernández-Ares; Pablo García-Sánchez; Antonio M. Mora; J. J. Merelo
This paper presents a proposal for a fast on-line map analysis for the RTS game Planet Wars, used to define specialized strategies for an autonomous bot. This analysis tackles two constraints of the game as featured in the Google AI Challenge 2010: the players cannot store any information from turn to turn, and there is a limited action time of just one second. These constraints imply that the bot must analyze the game map quickly in order to adapt its strategy during the game. Building on our previous work, in this paper we have evolved bots for different types of maps. All these bots are then combined into one, which chooses the evolved strategy that matches the geographical configuration of the game in each turn. Several experiments have been conducted to test the new approach, which outperforms our previous version based on an off-line general training.
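A minimal sketch of the per-turn map classification idea, assuming invented features, thresholds and map categories; the real analysis must finish within the one-second turn limit and cannot rely on state stored between turns.

```python
# Hedged sketch: classify the map from scratch every turn and switch to the
# parameter set evolved for that map type. Features and categories are made up.
import statistics

def classify_map(planets):
    """Cheap geometric features computed from the current turn only."""
    growth = [p["growth"] for p in planets]
    xs = [p["x"] for p in planets]
    ys = [p["y"] for p in planets]
    spread = (max(xs) - min(xs)) + (max(ys) - min(ys))
    if len(planets) > 20 and spread > 40:
        return "sparse_large"
    if statistics.mean(growth) > 3:
        return "rich"
    return "default"

# One evolved parameter set per map category (values are placeholders).
EVOLVED_STRATEGIES = {
    "sparse_large": {"w_growth": 1.2, "w_distance": 2.5},
    "rich":         {"w_growth": 3.0, "w_distance": 0.8},
    "default":      {"w_growth": 2.0, "w_distance": 1.0},
}

def do_turn(planets):
    weights = EVOLVED_STRATEGIES[classify_map(planets)]
    # ... issue orders using the rule-based engine parameterised by `weights` ...
    return weights
```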
European Conference on Applications of Evolutionary Computation | 2012
Antonio M. Mora; Antonio Fernández-Ares; Juan-Julián Merelo-Guervós; Pablo García-Sánchez
This work describes an evolutionary algorithm (EA) for evolving the constants, weights and probabilities of a rule-based decision engine of a bot designed to play the Planet Wars game. The evaluation of the individuals is based on the result of several non-deterministic combats, whose outcome depends on random draws as well as on the enemy's actions, and is thus noisy. This noisy fitness is addressed in the EA, and its effects are then analysed in depth in the experimental section. The conclusions show that tackling randomness via repeated combats and re-evaluations reduces the effect of the noisy fitness, making the EA an effective approach for solving the problem.
International Joint Conference on Computational Intelligence | 2014
J. J. Merelo; Pedro A. Castillo; Antonio M. Mora; Antonio Fernández-Ares; Anna I. Esparcia-Alcázar; Carlos Cotta; Nuria Rico
In most computer games, as in life, the outcome of a match is uncertain for several reasons: the characters or assets appear in different initial positions, or the response of the player, even if programmed, is not deterministic; different matches will yield different scores. That is a problem when optimizing a game-playing engine: its fitness will be noisy, and if we use an evolutionary algorithm it will have to deal with it. This is not straightforward, since there is an inherent uncertainty in the true value of the fitness of an individual, or rather in whether one chromosome is better than another and thus preferable for selection. Several methods based on implicit or explicit averaging, or on changes in the selection of individuals for the next generation, have been proposed in the past, but they involve a substantial redesign of the algorithm and the software used to solve the problem. In this paper we propose new methods based on incremental, memory-based computation of the fitness average and, additionally, on statistical tests that impose a partial order on the population; this partial order is used to assign a fitness value to every individual, which can then be used straightforwardly in any selection function. Tests using several hard combinatorial optimization problems show that, despite an increased computation time with respect to the other methods, both memory-based methods have a higher success rate than implicit averaging methods that do not use memory; however, neither method shows a clear advantage over the other in success rate or algorithmic terms.
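The sketch below illustrates, under assumed details, the two ingredients described in this abstract: a per-individual memory of noisy evaluations with an incrementally maintained average, and a pairwise statistical test (here the Mann-Whitney U test, chosen as an assumption) used to impose a partial order whose win counts can be fed to any selection function.

```python
# Hedged sketch of memory-based fitness plus a statistical partial order.
# Not the paper's implementation; test choice and alpha are assumptions.
from scipy.stats import mannwhitneyu

class NoisyIndividual:
    def __init__(self, genome):
        self.genome = genome
        self.samples = []            # memory of all past fitness evaluations

    def add_sample(self, value):
        self.samples.append(value)   # cheap incremental update

    @property
    def mean_fitness(self):
        return sum(self.samples) / len(self.samples)

def better(a, b, alpha=0.05):
    """True if `a` is statistically better than `b`; None means 'no evidence'."""
    _, p = mannwhitneyu(a.samples, b.samples, alternative="greater")
    return True if p < alpha else None

def rank_population(population):
    """Assign each individual the number of rivals it statistically beats;
    this scalar can be fed directly into any selection function."""
    wins = {id(ind): 0 for ind in population}
    for i, a in enumerate(population):
        for b in population[i + 1:]:
            if better(a, b):
                wins[id(a)] += 1
            elif better(b, a):
                wins[id(b)] += 1
    return sorted(population, key=lambda ind: wins[id(ind)], reverse=True)
```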
IJCCI (Selected Papers) | 2016
J. J. Merelo; Zeineb Chelly; Antonio M. Mora; Antonio Fernández-Ares; Anna I. Esparcia-Alcázar; Carlos Cotta; P. de las Cuevas; Nuria Rico
In most computer games, as in life, the outcome of a match is uncertain for several reasons: the characters or assets appear in different initial positions, or the response of the player, even if programmed, is not deterministic; different matches will yield different scores. That is a problem when optimizing a game-playing engine: its fitness will be noisy, and if we use an evolutionary algorithm it will have to deal with it. This is not straightforward, since there is an inherent uncertainty in the true value of the fitness of an individual, or rather in whether one chromosome is better than another and thus preferable for selection. Several methods based on implicit or explicit averaging, or on changes in the selection of individuals for the next generation, have been proposed in the past, but they involve a substantial redesign of the algorithm and the software used to solve the problem. In this paper we propose new methods based on incremental, memory-based computation of the fitness average and, additionally, on statistical tests that impose a partial order on the population; this partial order is used to assign a fitness value to every individual, which can then be used straightforwardly in any selection function. Tests using several hard combinatorial optimization problems show that, despite an increased computation time with respect to the other methods, both memory-based methods have a higher success rate than implicit averaging methods that do not use memory; however, neither method shows a clear advantage over the other in success rate or algorithmic terms.
International Conference on Artificial Neural Networks | 2011
Antonio Fernández-Ares; Antonio M. Mora; J. J. Merelo; Pablo García-Sánchez; Carlos M. Fernandes
This paper proposes an Evolutionary Algorithm for fine-tuning the behavior of a bot designed to play Planet Wars, a game selected for the Google Artificial Intelligence Challenge 2010. The behavior engine of the proposed bot is based on a set of rules established by means of heuristic experimentation, followed by the application of an evolutionary algorithm to set the constants, weights and probabilities needed by those rules. This bot defeated the baseline bot used to design it in most maps, and eventually played in the Google AI competition, obtaining a ranking in the top 20%.
European Conference on Applications of Evolutionary Computation | 2014
Antonio Fernández-Ares; Antonio M. Mora; Maribel García-Arenas; Juan Julián Merelo Guervós; Pablo García-Sánchez; Pedro A. Castillo
This paper presents an approach based on an evolutionary algorithm, aimed at improving the behavioral parameters that guide the actions of an autonomous agent (bot) in the real-time strategy game Planet Wars. The work describes a co-evolutionary implementation of a previously presented method, GeneBot, which yielded successful results, this time focused on 4vs matches. We analyse the effects of evolving (improving) several individuals at the same time in the algorithm, along with the use of three different fitness functions measuring the goodness of each bot in the evaluation. These functions are based on turns and position, and also on mathematical computations of linear regression and area with respect to the number of ships belonging to the bot/individual being evaluated. In addition, we study the difference between running the evolutionary algorithm with and without previous knowledge in the co-evolution phase, i.e., using specific rivals to perform the evaluation or, instead, evaluating only against individuals in the population being evolved. The aims of these co-evolutionary approaches are mainly two: first, to reduce the computational time; and second, to find a robust fitness function to be used in the generation of evolutionary bots optimized for 4vs battles.
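As a hedged illustration of the regression- and area-based fitness measures mentioned above, the snippet below scores a bot from its per-turn ship count using the slope of a linear fit and the area under the curve; the trace, the combination with turn/position scores and any scaling are invented.

```python
# Sketch of two ship-count-based fitness measures; not the paper's exact formulas.
import numpy as np

def slope_fitness(ship_counts):
    """Linear-regression slope of the bot's ship count over the match:
    a growing fleet yields a positive slope even if the match is lost late."""
    turns = np.arange(len(ship_counts))
    slope, _intercept = np.polyfit(turns, ship_counts, deg=1)
    return slope

def area_fitness(ship_counts):
    """Area under the ship-count curve (trapezoidal rule): rewards keeping
    a large fleet for a long time, not just the final outcome."""
    return sum((a + b) / 2 for a, b in zip(ship_counts, ship_counts[1:]))

ships = [10, 14, 13, 20, 26, 31]       # invented trace of one 6-turn match
print(slope_fitness(ships), area_fitness(ships))
```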
European Conference on Applications of Evolutionary Computation | 2016
J. J. Merelo; Pedro A. Castillo; Israel Blancas; G. Romero; Pablo García-Sánchez; Antonio Fernández-Ares; Víctor M. Rivas; Mario García-Valdez
Although performance is important, several other issues should be taken into account when choosing a particular language for implementing an evolutionary algorithm, such as the fact that the speed of different languages when carrying out an operation will depend on several factors, including the size of the operands, the version of the language, and underlying factors such as the operating system. However, it is usual to rely on compiled languages, namely Java or C/C++, for any implementation, without considering other languages or rejecting them outright on the basis of performance. Since there is a myriad of languages nowadays, it is nevertheless interesting to measure their speed when performing operations that are usual in evolutionary algorithms. That is why in this paper we have chosen three evolutionary algorithm operations: bit-flip mutation, crossover and the evaluation of the OneMax fitness function, and measured their speed for several popular, and some not so popular, languages. Our measurements confirm that Java, C and C++ are indeed not only the fastest, but also behave independently of the size of the chromosome. However, we have found other compiled languages such as Go, and interpreted languages such as Python, to be fast enough for most purposes. Besides, these experiments show which of these measures are, in fact, the best for choosing an implementation language based on its performance.
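A rough Python analogue of the three measured operations, bit-flip mutation, one-point crossover and OneMax evaluation, timed over growing chromosome sizes; the sizes and repetition counts are arbitrary choices for illustration, not the benchmark used in the paper.

```python
# Hedged micro-benchmark sketch of the three operations, in Python only.
import random
import timeit

def bitflip(chromosome):
    i = random.randrange(len(chromosome))
    chromosome[i] ^= 1            # flip one random bit in place
    return chromosome

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def onemax(chromosome):
    return sum(chromosome)        # OneMax: count the ones

for size in (16, 256, 4096, 65536):
    a = [random.randint(0, 1) for _ in range(size)]
    b = [random.randint(0, 1) for _ in range(size)]
    t_mut = timeit.timeit(lambda: bitflip(a), number=1000)
    t_xo = timeit.timeit(lambda: crossover(a, b), number=1000)
    t_om = timeit.timeit(lambda: onemax(a), number=1000)
    print(f"{size:6d} bits  mutation {t_mut:.4f}s  crossover {t_xo:.4f}s  onemax {t_om:.4f}s")
```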
European Conference on Applications of Evolutionary Computation | 2015
Antonio Fernández-Ares; Pablo García-Sánchez; Antonio M. Mora; Pedro A. Castillo; J. J. Merelo; María Isabel García Arenas; G. Romero
Evolutionary Algorithms (EAs) are frequently used as a mechanism for the optimization of autonomous agents in games (bots), but knowing when to stop the evolution, i.e. when the bots are good enough, is not as easy as it might a priori seem. The first issue is that optimal bots are either unknown (and thus unusable as a termination condition) or unreachable. In most EAs that try to find optimal bots, fitness is evaluated through game playing; it is often found to be noisy, which also complicates its use as a termination condition. A fixed amount of evaluations or, in the case of games, a certain level of victories does not guarantee an optimal result. Thus the main objective of this paper is to test several termination conditions in order to find the one that yields optimal solutions within a restricted amount of time, and that allows researchers to compare different EAs as fairly as possible. To achieve this we examine several ways of terminating an EA that searches for an optimal bot for a particular game, Planet Wars in this case, with the characteristics described above, determining the capabilities of each of them and, eventually, selecting one for future designs.
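The sketch below collects the kinds of termination conditions such a study might compare: an evaluation budget, a wall-clock budget, a victory-rate threshold and a stagnation window. The concrete thresholds are invented and this class is not the paper's experimental setup.

```python
# Hedged sketch of several alternative stopping criteria for a noisy-fitness EA.
import time

class Termination:
    def __init__(self, max_evaluations=50_000, time_budget_s=3600,
                 target_win_rate=0.95, stagnation_generations=20):
        self.max_evaluations = max_evaluations
        self.deadline = time.time() + time_budget_s
        self.target_win_rate = target_win_rate
        self.stagnation_generations = stagnation_generations
        self.evaluations = 0
        self.best_history = []       # best fitness recorded per generation

    def update(self, evaluations_this_gen, best_fitness):
        self.evaluations += evaluations_this_gen
        self.best_history.append(best_fitness)

    def should_stop(self):
        """Return the reason for stopping, or None to keep evolving."""
        if self.evaluations >= self.max_evaluations:
            return "evaluation budget exhausted"
        if time.time() >= self.deadline:
            return "time budget exhausted"
        if self.best_history and self.best_history[-1] >= self.target_win_rate:
            return "victory-rate threshold reached"
        window = self.best_history[-self.stagnation_generations:]
        if len(window) == self.stagnation_generations and max(window) <= window[0]:
            return "no improvement (stagnation)"
        return None
```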