Nuria Rico
University of Granada
Publication
Featured research published by Nuria Rico.
Applied Intelligence | 2012
Pedro A. Castillo; M. G. Arenas; Nuria Rico; Antonio M. Mora; Pablo García-Sánchez; Juan Luis Jiménez Laredo; J. J. Merelo
When search methods are being designed it is very important to know which parameters have the greatest influence on the behaviour and performance of the algorithm. To this end, algorithm parameters are commonly calibrated by means of either theoretical analysis or intensive experimentation. When undertaking a detailed statistical analysis of the influence of each parameter, the designer should pay attention mostly to the parameters that are statistically significant. In this paper the ANOVA (ANalysis Of VAriance) method is used to carry out an exhaustive analysis of a simulated-annealing-based method and the different parameters it requires. Following this idea, the significance and relative importance of the parameters with respect to the obtained results, as well as suitable values for each of them, were obtained using ANOVA and the post-hoc Tukey HSD test, on four well-known function optimization problems and on the likelihood function used to estimate the parameters of the lognormal diffusion process. Through this statistical study we have verified, using parametric hypothesis tests, the adequacy of the parameter values available in the literature.
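A minimal sketch of the kind of analysis described in this abstract, assuming the algorithm's results for each parameter setting are stored as arrays of final fitness values; the parameter name, its settings and the data below are hypothetical.

```python
# Hypothetical sketch: one-way ANOVA plus a post-hoc Tukey HSD test on the
# results obtained with three settings of a single parameter (here an assumed
# cooling factor "alpha" of a simulated annealing run). Data are invented.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
# Final fitness values of 30 independent runs per parameter setting.
results = {
    "alpha=0.90": rng.normal(loc=10.0, scale=1.0, size=30),
    "alpha=0.95": rng.normal(loc=9.5, scale=1.0, size=30),
    "alpha=0.99": rng.normal(loc=9.4, scale=1.0, size=30),
}

# One-way ANOVA: is the parameter statistically significant at all?
f_stat, p_value = stats.f_oneway(*results.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Post-hoc Tukey HSD: which pairs of settings differ significantly?
values = np.concatenate(list(results.values()))
groups = np.repeat(list(results.keys()), [len(v) for v in results.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```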
international joint conference on computational intelligence | 2014
J. J. Merelo; Pedro A. Castillo; Antonio M. Mora; Antonio Fernández-Ares; Anna I. Esparcia-Alcázar; Carlos Cotta; Nuria Rico
In most computer games, as in life, the outcome of a match is uncertain for several reasons: the characters or assets appear in different initial positions, or the response of the player, even if programmed, is not deterministic; different matches will yield different scores. That is a problem when optimizing a game-playing engine: its fitness will be noisy, and an evolutionary algorithm will have to deal with that noise. This is not straightforward, since there is an inherent uncertainty in the true value of the fitness of an individual, or rather in whether one chromosome is better than another and thus preferable for selection. Several methods, based on implicit or explicit averaging or on changes in how individuals are selected for the next generation, have been proposed in the past, but they involve a substantial redesign of the algorithm and of the software used to solve the problem. In this paper we propose new methods based on incremental (memory-based) computation of the fitness average and, additionally, on statistical tests that impose a partial order on the population; this partial order is then used to assign a fitness value to every individual, which can be used straightforwardly in any selection function. Tests using several hard combinatorial optimization problems show that, despite an increased computation time with respect to the other methods, both memory-based methods have a higher success rate than implicit averaging methods that do not use memory; however, neither method shows a clear advantage over the other in success rate or algorithmic terms.
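A minimal sketch of one of the ideas described above: using a pairwise statistical test on the stored (memorised) fitness samples of two individuals to decide whether one is significantly better. The function name, the choice of rank-sum test and the significance threshold are assumptions; the abstract does not fix a particular test.

```python
# Hypothetical sketch: compare two individuals whose noisy fitness has been
# re-evaluated several times (memory-based), using a rank-sum test to decide
# whether one is significantly better than the other.
from scipy.stats import mannwhitneyu

def significantly_better(samples_a, samples_b, alpha=0.05):
    """Return True if A's fitness samples are significantly larger
    (better, assuming maximisation) than B's."""
    stat, p_value = mannwhitneyu(samples_a, samples_b, alternative="greater")
    return p_value < alpha

# Memorised evaluations of two chromosomes accumulated across generations.
a = [0.71, 0.69, 0.74, 0.70, 0.73, 0.72]
b = [0.66, 0.70, 0.65, 0.68, 0.64, 0.67]
print(significantly_better(a, b))  # A dominates B in the partial order
```

In a full implementation, each individual's position in this partial order (for example, the number of rivals it beats significantly) could then serve as the scalar fitness fed to any standard selection operator.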
IJCCI (Selected Papers) | 2016
J. J. Merelo; Zeineb Chelly; Antonio M. Mora; Antonio Fernández-Ares; Anna I. Esparcia-Alcázar; Carlos Cotta; P. de las Cuevas; Nuria Rico
In most computer games, as in life, the outcome of a match is uncertain for several reasons: the characters or assets appear in different initial positions, or the response of the player, even if programmed, is not deterministic; different matches will yield different scores. That is a problem when optimizing a game-playing engine: its fitness will be noisy, and an evolutionary algorithm will have to deal with that noise. This is not straightforward, since there is an inherent uncertainty in the true value of the fitness of an individual, or rather in whether one chromosome is better than another and thus preferable for selection. Several methods, based on implicit or explicit averaging or on changes in how individuals are selected for the next generation, have been proposed in the past, but they involve a substantial redesign of the algorithm and of the software used to solve the problem. In this paper we propose new methods based on incremental (memory-based) computation of the fitness average and, additionally, on statistical tests that impose a partial order on the population; this partial order is then used to assign a fitness value to every individual, which can be used straightforwardly in any selection function. Tests using several hard combinatorial optimization problems show that, despite an increased computation time with respect to the other methods, both memory-based methods have a higher success rate than implicit averaging methods that do not use memory; however, neither method shows a clear advantage over the other in success rate or algorithmic terms.
international joint conference on computational intelligence | 2015
Juan J. Merelo; Federico Liberatore; Antonio Fernández Ares; Rubén Jesús García; Zeineb Chelly; Carlos Cotta; Nuria Rico; Antonio M. Mora; Pablo García-Sánchez
Noise or uncertainty appears in many optimization processes when there is not a single measure of optimality or fitness but a random variable representing it. This kind of problem has been known for a long time, but there has been little investigation of the statistical distribution those random variables follow; in most cases it is assumed to be normal, so that it can be modelled via additive or multiplicative noise on top of a non-noisy fitness. In this paper we look at several uncertain optimization problems that have been addressed by means of Evolutionary Algorithms and show that there is no single statistical model that the fitness evaluations follow: the distribution differs not only from one problem to the next, but also across different phases of the optimization within a single problem.
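A minimal sketch of the kind of check this line of work relies on: gathering repeated evaluations of a noisy fitness and testing whether the sample is compatible with a normal distribution. The fitness function below is an invented stand-in, chosen only to show that noise need not be additive Gaussian.

```python
# Hypothetical sketch: repeated evaluations of a noisy, non-negative fitness,
# followed by a Shapiro-Wilk normality test on the sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def noisy_fitness(x, n_evals=100):
    """Invented stand-in: a score that is skewed and clipped at zero."""
    return np.maximum(0.0, x + rng.standard_gamma(2.0, size=n_evals) - 2.0)

samples = noisy_fitness(x=1.0)
stat, p_value = stats.shapiro(samples)
print(f"Shapiro-Wilk p={p_value:.4f} -> "
      f"{'compatible with' if p_value > 0.05 else 'rejects'} normality")
```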
genetic and evolutionary computation conference | 2016
Juan J. Merelo; Pedro A. Castillo; Pablo García-Sánchez; Paloma de las Cuevas; Nuria Rico; Mario García Valdez
Using volunteers' browsers as a computing resource presents several advantages, but it remains a challenge to fully harness the browsers' capabilities and to model the users' behaviour so that those capabilities can be leveraged optimally. These are the objectives of this paper, where we present the results of several evolutionary computation experiments with different implementations of a volunteer computing framework called NodIO, designed to be easily deployable on freely available cloud resources. We use different implementations to find out which one persuades users to lend more computing cycles, and we test different problems to check their influence on performance, measured both by the time needed to find a solution and by the number of users engaged. From these experiments we can already draw some conclusions, besides the fact that volunteer computing can be a valuable computing resource and that it is essential to be as open as possible with software and data: the user has to be kept engaged to obtain as many computing cycles as possible, the client has to be built to use the computer's capabilities fully, and, finally, the user contributions follow a common statistical distribution.
intelligent data analysis | 2013
M. G. Arenas; Nuria Rico; Antonio M. Mora; Pedro A. Castillo; J. J. Merelo
It is very important, when search methods are being designed, to know which parameters have the greatest influence on the behaviour and performance of the algorithm. To this end, algorithm parameters are commonly calibrated by means of either theoretical analysis or intensive experimentation. However, given the importance of the parameters and their effect on the results, finding appropriate parameter values should be carried out using robust tools that determine how they operate and influence the results. When undertaking a detailed statistical analysis of the influence of each parameter, the designer should pay attention mostly to the parameters that are statistically significant. In this paper the ANOVA (ANalysis Of VAriance) method is used to carry out an exhaustive analysis of an evolutionary algorithm and the different parameters it requires. Following this idea, the significance and relative importance of the parameters with respect to the obtained results, as well as suitable values for each of them, were obtained using ANOVA and post-hoc Tukey's Honestly Significant Difference (HSD) tests on four well-known function optimization problems. Through this statistical study we have verified, using parametric hypothesis tests, the adequacy of the parameter values available in the literature.
Cybernetics and Systems | 2006
Ramón Gutiérrez Jáimez; Nuria Rico; Patricia Román-Román; Desirée Romero; J. J. Serrano; Francisco Torres-Ruiz
In this article we propose a methodology for building a lognormal diffusion process with polynomial exogenous factors in order to fit data that present an exponential trend and show deviations with respect to an exponential curve in the observed time interval. We show that such a process approaches a nonhomogeneous lognormal diffusion and prove that it is especially useful when external information (exogenous factors) about the process is not available, even though the existence of these influences is clear. An application to the global man-made emissions of methane is provided.
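The article's concrete polynomial-exogenous-factor drift is not reproduced here; as a generic illustration of the kind of process being fitted, the sketch below simulates a lognormal-type diffusion with an assumed time-dependent drift by Euler-Maruyama. The drift polynomial and parameter values are invented.

```python
# Hypothetical sketch: Euler-Maruyama simulation of a nonhomogeneous
# lognormal-type diffusion dX = h(t) X dt + sigma X dW, where h(t) is an
# assumed polynomial drift standing in for the exogenous factors.
import numpy as np

def simulate_path(x0=1.0, sigma=0.1, t_max=10.0, n_steps=1000, seed=1):
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    h = lambda t: 0.05 + 0.01 * t - 0.002 * t**2   # invented polynomial drift
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for i in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))          # Brownian increment
        x[i + 1] = x[i] + h(t) * x[i] * dt + sigma * x[i] * dw
        t += dt
    return x

path = simulate_path()
print(path[::100])  # a few points along one simulated trajectory
```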
international work-conference on artificial and natural neural networks | 2015
Nuria Rico; M. G. Arenas; Desirée Romero; J. M. Crespo; Pedro A. Castillo; J. J. Merelo
Many probabilistic models, such as diffusion processes, are frequently used to model and forecast natural growth patterns. The maximum likelihood estimation of the parameters of a diffusion process requires solving a system of equations that, in some cases, has no explicit solution. In that situation, we can approximate the solution using an optimization method. In this paper we compare five optimization methods: an Iterative Method, an algorithm based on a Newton-Raphson solver, a Variable Neighbourhood Search method, a Simulated Annealing algorithm and an Evolutionary Algorithm. We generate four data sets following a Gompertz-lognormal diffusion process using different noise levels. The methods are applied to these data sets to estimate the parameters of the diffusion process. Results show that the bio-inspired methods obtain suitable solutions for the problem every time, even when the noise level increases. On the other hand, analytical methods such as Newton-Raphson or the Iterative Method do not always solve the problem: their results depend on the starting point, or the noise level hinders the resolution of the problem. In these cases, the bio-inspired algorithms remain a suitable and reliable approach.
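The Gompertz-lognormal likelihood itself is not reproduced here; as a stand-in, the sketch below shows the general recipe that all the compared methods address: writing the negative log-likelihood of observed data and minimising it numerically (here with a general-purpose scipy optimiser rather than the paper's VNS, simulated annealing or evolutionary algorithm). Model and data are invented.

```python
# Hypothetical sketch: maximum likelihood by numerical optimisation of a
# negative log-likelihood. A plain lognormal sample stands in for the
# Gompertz-lognormal diffusion likelihood, which lacks an explicit solution.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
data = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # simulated observations

def neg_log_likelihood(params, x):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)          # keep sigma positive
    return -np.sum(stats.lognorm.logpdf(x, s=sigma, scale=np.exp(mu)))

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(data,),
                           method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(f"estimated mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
```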
computer aided systems theory | 2013
Desiré Romero; Nuria Rico; Maribel G-Arenas
In this paper, a new non-homogeneous diffusion process is introduced, which is a combination of a Gompertz-type and a lognormal diffusion process, so that the mean function is a mixture of Gompertz and exponential curves. The main innovation of the process is that the trend, after reaching a bound, can change to increase or decrease towards zero, a behaviour not provided by previous models. After building the model, a comprehensive study of its main characteristics is presented. Our goal is to use the process for predictive purposes, so this paper also presents how to obtain estimates of the parameters of the process and of its characteristic functions. Finally, the potential of the new process to model epidemic data is illustrated by means of an application to simulated data.
trans. computational collective intelligence | 2016
Juan J. Merelo; Federico Liberatore; Antonio Fernández Ares; Rubén Jesús García; Zeineb Chelly; Carlos Cotta; Nuria Rico; Antonio M. Mora; Pablo García-Sánchez; Alberto Paolo Tonda; Paloma de las Cuevas; Pedro A. Castillo
In many optimization processes, the fitness or the considered measure of goodness for the candidate solutions presents uncertainty; that is, it yields different values when repeatedly measured, due to the nature of the evaluation process or of the solution itself. This happens quite often in the context of computational intelligence in games, when either bots behave stochastically or the target game possesses intrinsic random elements, but it shows up in other problems as well, as long as there is some random component. Thus, it is important to examine the statistical behaviour of repeated measurements of performance and, more specifically, the statistical distribution that best fits them. This work analyzes four different problems related to computational intelligence in videogames, where Evolutionary Computation methods have been applied and the evaluation of each individual is performed by playing the game, and compares them to another problem, neural network optimization, where performance is also a statistical variable. In order to find possible patterns in the statistical behaviour of these variables, we track the main features of their distributions: skewness and kurtosis. Contrary to the usual assumption in this kind of problem, we show that the values of these two features imply that, in general, fitness values do not follow a normal distribution; they do present a certain common behaviour that changes as evolution proceeds, in some cases getting closer to the normal distribution and in others drifting away from it. No clear behaviour can be concluded in this case, other than the fact that the statistical distribution that fitness variables follow is affected by selection in different directions, that its parameters vary within a single generation across problems, and that, in general, this kind of behaviour will have to be taken into account to adequately address fitness uncertainty in evolutionary algorithms.
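A minimal sketch of the kind of tracking described above: computing skewness and kurtosis of repeated fitness measurements at several points of a run. The samples below are invented; in the paper they come from repeatedly playing the game or evaluating the network.

```python
# Hypothetical sketch: track skewness and excess kurtosis of repeated fitness
# measurements of the best individual at a few generations of a run.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
generations = {
    0:   rng.standard_gamma(2.0, size=50),          # strongly skewed early on
    50:  rng.standard_gamma(8.0, size=50),          # less skewed later
    100: rng.normal(loc=8.0, scale=1.0, size=50),   # close to normal at the end
}

for gen, samples in generations.items():
    print(f"gen {gen:3d}: skewness={stats.skew(samples):+.2f}  "
          f"excess kurtosis={stats.kurtosis(samples):+.2f}")
```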