Álvaro Fialho
Microsoft
Publications
Featured research published by Álvaro Fialho.
genetic and evolutionary computation conference | 2008
Luis DaCosta; Álvaro Fialho; Marc Schoenauer; Michèle Sebag
An important step toward self-tuning Evolutionary Algorithms is to design efficient Adaptive Operator Selection procedures. Such a procedure is made of two main components: a credit assignment mechanism, which computes a reward for each operator at hand based on some characteristics of the past offspring; and an adaptation rule, which modifies the selection mechanism based on the rewards of the different operators. This paper is concerned with the latter, and proposes a new approach for it based on the well-known Multi-Armed Bandit paradigm. However, because basic Multi-Armed Bandit methods have been developed for static frameworks, a specific Dynamic Multi-Armed Bandit algorithm is proposed that hybridizes an optimal Multi-Armed Bandit algorithm with the statistical Page-Hinkley test, which enables the efficient detection of changes in time series. This original Operator Selection procedure is then compared to the state-of-the-art rules known as Probability Matching and Adaptive Pursuit on several artificial scenarios, after a careful sensitivity analysis of all methods. The Dynamic Multi-Armed Bandit method is found to outperform the other methods on a scenario from the literature, while on another scenario, the basic Multi-Armed Bandit performs best.
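The combination described above can be sketched as follows: a UCB-style bandit selects operators, and a Page-Hinkley test on the reward series triggers a restart when the environment appears to have changed. This is a minimal illustrative sketch, not the paper's implementation; the hyper-parameter names (c, ph_delta, ph_lambda) and the particular one-sided PH variant are assumptions.

```python
import math

class DynamicMAB:
    """Sketch of a Dynamic Multi-Armed Bandit operator selector:
    a UCB1-style rule restarted whenever a Page-Hinkley test detects
    a change in the reward series (one common PH variant; details are
    illustrative, not taken from the paper)."""

    def __init__(self, n_ops, c=2.0, ph_delta=0.15, ph_lambda=10.0):
        self.n_ops = n_ops
        self.c = c                    # exploration strength
        self.ph_delta = ph_delta      # PH tolerance term
        self.ph_lambda = ph_lambda    # PH detection threshold
        self.reset()

    def reset(self):
        self.counts = [0] * self.n_ops
        self.means = [0.0] * self.n_ops
        self.ph_sum = 0.0             # cumulative PH deviation
        self.ph_min = 0.0
        self.ph_mean = 0.0
        self.ph_n = 0

    def select(self):
        # play each operator once before applying the UCB formula
        for op in range(self.n_ops):
            if self.counts[op] == 0:
                return op
        total = sum(self.counts)
        return max(range(self.n_ops),
                   key=lambda op: self.means[op]
                   + self.c * math.sqrt(math.log(total) / self.counts[op]))

    def update(self, op, reward):
        self.counts[op] += 1
        self.means[op] += (reward - self.means[op]) / self.counts[op]
        # Page-Hinkley statistic on the global reward series
        self.ph_n += 1
        self.ph_mean += (reward - self.ph_mean) / self.ph_n
        self.ph_sum += reward - self.ph_mean + self.ph_delta
        self.ph_min = min(self.ph_min, self.ph_sum)
        if self.ph_sum - self.ph_min > self.ph_lambda:
            self.reset()              # change detected: restart the bandit
```

Restarting on change detection is what lets an otherwise static-optimal UCB rule track a best operator that shifts along evolution.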
Annals of Mathematics and Artificial Intelligence | 2010
Álvaro Fialho; Luis Da Costa; Marc Schoenauer; Michèle Sebag
Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) issue in Evolutionary Algorithms. Some recent proposals are based on the Multi-armed Bandit (MAB) paradigm: each operator is viewed as one arm of a MAB problem, and the rewards are mainly based on the fitness improvement brought by the corresponding operator to the individual it is applied to. However, the AOS problem is dynamic, whereas standard MAB algorithms are known to optimally solve the exploitation versus exploration trade-off in static settings. An original dynamic variant of the standard MAB Upper Confidence Bound algorithm is proposed here, using a sliding time window to compute both its exploitation and exploration terms. In order to perform sound comparisons between AOS algorithms, artificial scenarios have been proposed in the literature. They are extended here toward smoother transitions between different reward settings. The resulting original testbed also includes a real evolutionary algorithm that is applied to the well-known Royal Road problem. It is used here to perform a thorough analysis of the behavior of AOS algorithms, to assess their sensitivity with respect to their own hyper-parameters, and to propose a sound comparison of their performances.
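The sliding-window idea above can be sketched directly: both the empirical mean and the exploration term of UCB are computed only over the last W operator applications, so stale rewards are forgotten. Window size and scaling factor names are illustrative assumptions, not the paper's settings.

```python
from collections import deque
import math

class SlidingWindowUCB:
    """Sketch of a dynamic UCB variant where exploitation and
    exploration terms are both computed over a sliding window of
    the last `window` operator applications."""

    def __init__(self, n_ops, window=50, c=1.0):
        self.n_ops = n_ops
        self.c = c
        self.history = deque(maxlen=window)  # (operator, reward) pairs

    def select(self):
        counts = [0] * self.n_ops
        sums = [0.0] * self.n_ops
        for op, r in self.history:
            counts[op] += 1
            sums[op] += r
        # any operator absent from the window is tried first
        for op in range(self.n_ops):
            if counts[op] == 0:
                return op
        total = len(self.history)
        return max(range(self.n_ops),
                   key=lambda op: sums[op] / counts[op]
                   + self.c * math.sqrt(math.log(total) / counts[op]))

    def update(self, op, reward):
        self.history.append((op, reward))
```

Because old entries fall out of the deque, an operator that stops paying off loses its advantage within W steps instead of keeping an inflated lifetime average.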
parallel problem solving from nature | 2008
Álvaro Fialho; Luis Da Costa; Marc Schoenauer; Michèle Sebag
Credit Assignment is an important ingredient of several proposals that have been made for Adaptive Operator Selection. Instead of the average fitness improvement of newborn offspring, this paper proposes to use some empirical order statistics of those improvements, arguing that rare but highly beneficial jumps matter as much or more than frequent but small improvements. An extreme value based Credit Assignment is thus proposed, rewarding each operator with the best fitness improvement observed in a sliding window for this operator. This mechanism, combined with existing Adaptive Operator Selection rules, is investigated in an EC-like setting. First results show that the proposed method allows both the Adaptive Pursuit and the Dynamic Multi-Armed Bandit selection rules to actually track the best operators along evolution.
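The extreme-value mechanism reduces to a very small amount of bookkeeping: keep a per-operator window of recent fitness improvements and reward each operator with the maximum over its window. A minimal sketch, with an illustrative window size:

```python
from collections import deque

class ExtremeCreditAssignment:
    """Sketch of extreme value based credit assignment: an operator's
    reward is the best (maximum) fitness improvement it produced over
    a sliding window of its last `window` applications."""

    def __init__(self, n_ops, window=10):
        self.windows = [deque(maxlen=window) for _ in range(n_ops)]

    def record(self, op, fitness_improvement):
        # negative improvements contribute nothing to the credit
        self.windows[op].append(max(0.0, fitness_improvement))

    def reward(self, op):
        # empty window means no evidence yet, hence zero credit
        return max(self.windows[op], default=0.0)
```

Using the window maximum rather than the mean is exactly what lets a rare large jump dominate many small improvements, until it eventually expires from the window.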
congress on evolutionary computation | 2009
Jorge Maturana; Álvaro Fialho; Frédéric Saubion; Marc Schoenauer; Michèle Sebag
The goal of Adaptive Operator Selection is the on-line control of the choice of variation operators within Evolutionary Algorithms. The control process is based on two main components: the credit assignment, which defines the reward that will be used to evaluate the quality of an operator after it has been applied; and the operator selection mechanism, which selects one operator based on the operators' estimated qualities. Two previously developed Adaptive Operator Selection methods are combined here: Compass evaluates the performance of operators by considering not only the fitness improvements from parent to offspring, but also the way they modify the diversity of the population, and their execution time; Dynamic Multi-Armed Bandit proposes a selection strategy based on the well-known UCB algorithm, achieving a compromise between exploitation and exploration while nevertheless quickly adapting to changes. Tests with the proposed method, called ExCoDyMAB, are carried out using several hard instances of the Satisfiability problem (SAT). Results show the good synergy obtained by combining both approaches.
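The Compass-style credit assignment can be illustrated by a tiny aggregation function: the (fitness improvement, diversity variation) pair is projected onto a preferred search direction and scaled by execution time, so fast operators balancing quality and diversity earn the most credit. This exact aggregation, the angle parameter, and the function name are illustrative assumptions, not the published Compass definition.

```python
import math

def compass_reward(d_fitness, d_diversity, exec_time=1.0, theta=math.pi / 4):
    """Illustrative Compass-like credit: project the (fitness, diversity)
    impact onto a direction at angle theta, then divide by execution
    time so cheaper operators are favored. All details are assumptions."""
    projection = d_fitness * math.cos(theta) + d_diversity * math.sin(theta)
    return projection / max(exec_time, 1e-9)  # guard against zero time
```

A reward of this shape can then be fed directly to a bandit-based selector such as Dynamic Multi-Armed Bandit, which is the combination the abstract describes.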
genetic and evolutionary computation conference | 2010
Wenyin Gong; Álvaro Fialho; Zhihua Cai
Differential evolution (DE) is a simple yet powerful evolutionary algorithm for global numerical optimization. Different strategies have been proposed for offspring generation, but the choice of which of them should be applied is critical for DE performance, besides being problem-dependent. In this paper, the probability matching technique is employed in DE to autonomously select the most suitable strategy while solving the problem. Four credit assignment methods, which update the known performance of each strategy based on the relative fitness improvement achieved by its recent applications, are analyzed. To evaluate the performance of our approach, thirteen widely used benchmark functions are employed. Experimental results confirm that our approach is able to adaptively choose the strategy suited to each problem. Compared to classical DE algorithms and to a recently proposed adaptive scheme (SaDE), it obtains better results on most of the functions, in terms of both the quality of the final results and convergence speed.
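Probability matching itself is a small rule: each strategy is applied with probability proportional to its estimated quality, with a floor p_min so no strategy is ever completely abandoned. A minimal sketch, assuming illustrative values for p_min and the adaptation rate alpha:

```python
import random

class ProbabilityMatching:
    """Sketch of the Probability Matching selection rule: strategies
    are drawn with probability proportional to an exponentially
    recency-weighted quality estimate, floored at p_min."""

    def __init__(self, n_strats, p_min=0.05, alpha=0.3):
        self.n = n_strats
        self.p_min = p_min
        self.alpha = alpha                       # adaptation rate
        self.quality = [1.0] * n_strats          # optimistic initial quality
        self.probs = [1.0 / n_strats] * n_strats

    def select(self, rng=random):
        return rng.choices(range(self.n), weights=self.probs)[0]

    def update(self, strat, reward):
        # exponentially recency-weighted quality estimate
        self.quality[strat] += self.alpha * (reward - self.quality[strat])
        total = sum(self.quality)
        self.probs = [self.p_min + (1 - self.n * self.p_min) * q / total
                      for q in self.quality]
```

The reward passed to update would be one of the four credit assignments analyzed in the paper, e.g. a relative fitness improvement; here it is just a nonnegative number.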
genetic and evolutionary computation conference | 2010
Álvaro Fialho; Marc Schoenauer; Michèle Sebag
Adaptive Operator Selection (AOS) turns the impact of applying variation operators into Operator Selection through a Credit Assignment mechanism. However, most Credit Assignment schemes make direct use of the fitness gain between parent and offspring. A first issue is that an Operator Selection technique using this kind of Credit Assignment is likely to be highly dependent on the a priori unknown bounds of the fitness function. Additionally, these bounds are likely to change along evolution, as fitness gains tend to get smaller as convergence occurs. Furthermore, and maybe more importantly, a fitness-based credit assignment forbids any invariance under monotonic transformations of the fitness, which is a usual source of robustness for comparison-based Evolutionary Algorithms. In this context, this paper proposes two new Credit Assignment mechanisms, one inspired by the Area Under the Curve paradigm, and the other close to the Sum of Ranks. Using fitness improvement as raw reward, and directly coupled to a Multi-Armed Bandit Operator Selection rule, the resulting AOS obtains very good performance on both the OneMax problem and some artificial scenarios, while demonstrating robustness with respect to hyper-parameter settings and fitness transformations. Furthermore, using fitness ranks as raw reward results in a fully comparison-based AOS with reasonable performance.
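The rank-based idea can be sketched in a few lines: rank the last W rewards across all operators and credit each operator with a decayed sum over the ranks of its own entries, so only the ordering of rewards matters, never their scale. This is a simplified sum-of-ranks sketch, not the paper's exact formula; the window size and decay factor are illustrative.

```python
from collections import deque

class SumOfRanks:
    """Sketch of a rank-based (hence comparison-based) credit
    assignment: the last `window` rewards are ranked globally, and
    each operator is credited with a decayed sum over the ranks of
    its own entries (decay and weighting are illustrative)."""

    def __init__(self, n_ops, window=20, decay=0.9):
        self.n_ops = n_ops
        self.decay = decay
        self.history = deque(maxlen=window)  # (operator, reward) pairs

    def record(self, op, reward):
        self.history.append((op, reward))

    def credits(self):
        # rank entries by reward, best first (rank 0 = best)
        ranked = sorted(self.history, key=lambda e: e[1], reverse=True)
        credit = [0.0] * self.n_ops
        for rank, (op, _) in enumerate(ranked):
            # better ranks earn larger, exponentially decayed weights
            credit[op] += self.decay ** rank * (len(ranked) - rank)
        return credit
```

Since credits depend only on the ordering of rewards, any monotonic transformation of the fitness leaves them unchanged, which is precisely the invariance the abstract argues for.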
genetic and evolutionary computation conference | 2009
Álvaro Fialho; Marc Schoenauer; Michèle Sebag
One of the choices that most affect the performance of Evolutionary Algorithms is the selection of the variation operators that are efficient for solving the problem at hand. This work presents an empirical analysis of different Adaptive Operator Selection (AOS) methods, i.e., techniques that automatically select the operator to be applied among the available ones while searching for the solution. Four previously published operator selection rules are combined with four different credit assignment mechanisms. These 16 AOS combinations are analyzed and compared on two well-known benchmark problems in Evolutionary Computation, the Royal Road and the Long K-Path.
genetic and evolutionary computation conference | 2010
Álvaro Fialho; Marc Schoenauer; Michèle Sebag
The choice of which of the available strategies should be used within the Differential Evolution algorithm for a given problem is not trivial: it is problem-dependent and strongly affects the algorithm's performance. This decision can be made in an autonomous way through the Adaptive Strategy Selection paradigm, which continuously selects the strategy to be used for the next offspring generation, based on the performance achieved by each of the available ones during the current optimization process, i.e., while solving the problem. In this paper, we use the BBOB-2010 noiseless benchmarking suite to better empirically validate a comparison-based technique recently proposed to this end, the Fitness-based Area-Under-Curve Bandit [4], referred to as F-AUC-Bandit. It is compared with another recently proposed approach that uses the Probability Matching technique based on relative fitness improvements, referred to as PM-AdapSS-DE [7].
Autonomous Search | 2011
Jorge Maturana; Álvaro Fialho; Frédéric Saubion; Marc Schoenauer; Frédéric Lardeux; Michèle Sebag
One of the settings that most affect the performance of Evolutionary Algorithms is the selection of the variation operators that are efficient for solving the problem at hand. The control of these operators can be handled in an autonomous way, while solving the problem, at two different levels: at the structural level, when deciding which operators should be part of the algorithmic framework, referred to as Adaptive Operator Management (AOM); and at the behavioral level, when selecting which of the available operators should be applied at a given time instant, referred to as Adaptive Operator Selection (AOS). Both controllers guide their choices based on common knowledge about the recent performance of each operator. In this chapter, we present methods for these two complementary aspects of operator control, the ExCoDyMAB AOS and the Blacksmith AOM, providing case studies that analyze them in order to highlight the major issues to be considered in the design of more autonomous Evolutionary Algorithms.
parallel problem solving from nature | 2010
Álvaro Fialho; Raymond Ros; Marc Schoenauer; Michèle Sebag
Differential Evolution is a popular and powerful optimization algorithm for continuous problems. Part of its efficiency comes from the availability of several mutation strategies that can (and must) be chosen in a problem-dependent way. However, such flexibility also makes DE difficult to apply automatically in a new context. F-AUC-Bandit is a comparison-based Adaptive Operator Selection method that was proposed in the GA framework. It is used here for the on-line control of the DE mutation strategy, thus preserving DE's invariance w.r.t. monotonic transformations of the objective function. The approach is comparatively assessed on the BBOB test suite, demonstrating significant improvement over the baseline and other Adaptive Strategy Selection approaches, while presenting very low sensitivity to hyper-parameter settings.