Publication


Featured research published by Marc Schoenauer.


Genetic and Evolutionary Computation Conference | 2008

Adaptive operator selection with dynamic multi-armed bandits

Luis Da Costa; Álvaro Fialho; Marc Schoenauer; Michèle Sebag

An important step toward self-tuning Evolutionary Algorithms is to design efficient Adaptive Operator Selection procedures. Such a procedure is made of two main components: a credit assignment mechanism, which computes a reward for each operator at hand based on some characteristics of the past offspring; and an adaptation rule, which modifies the selection mechanism based on the rewards of the different operators. This paper is concerned with the latter, and proposes a new approach for it based on the well-known Multi-Armed Bandit paradigm. However, because the basic Multi-Armed Bandit methods have been developed for static frameworks, a specific Dynamic Multi-Armed Bandit algorithm is proposed, hybridizing an optimal Multi-Armed Bandit algorithm with the statistical Page-Hinkley test, which enables the efficient detection of changes in time series. This original Operator Selection procedure is then compared to the state-of-the-art rules known as Probability Matching and Adaptive Pursuit on several artificial scenarios, after a careful sensitivity analysis of all methods. The Dynamic Multi-Armed Bandit method is found to outperform the other methods on a scenario from the literature, while on another scenario the basic Multi-Armed Bandit performs best.
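
To make the mechanism concrete, here is a minimal Python sketch of a dynamic bandit of this kind: UCB1 selection over operators, restarted whenever a Page-Hinkley test flags a change in the reward stream. The class names, the reward scale, and the delta/lambda thresholds are illustrative assumptions, not values taken from the paper.

```python
import math

class PageHinkley:
    """Page-Hinkley change-detection test on a stream of rewards."""
    def __init__(self, delta=0.005, lam=15.0):
        self.delta, self.lam = delta, lam
        self.reset()

    def reset(self):
        self.n, self.mean, self.m, self.m_min = 0, 0.0, 0.0, 0.0

    def update(self, x):
        # incremental mean, cumulative deviation, and its running minimum;
        # the alarm fires on a sustained upward drift of the mean reward
        # (the symmetric test would catch drops)
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.m += x - self.mean - self.delta
        self.m_min = min(self.m_min, self.m)
        return self.m - self.m_min > self.lam

class DynamicMAB:
    """UCB1 operator selection, restarted whenever Page-Hinkley fires."""
    def __init__(self, n_ops, c=2.0, **ph_kwargs):
        self.n_ops, self.c = n_ops, c
        self.ph = PageHinkley(**ph_kwargs)
        self.reset()

    def reset(self):
        self.counts = [0] * self.n_ops
        self.values = [0.0] * self.n_ops
        self.ph.reset()

    def select(self):
        # try each operator once, then pick the best UCB score
        for op in range(self.n_ops):
            if self.counts[op] == 0:
                return op
        total = sum(self.counts)
        return max(range(self.n_ops),
                   key=lambda op: self.values[op]
                   + self.c * math.sqrt(math.log(total) / self.counts[op]))

    def update(self, op, reward):
        self.counts[op] += 1
        self.values[op] += (reward - self.values[op]) / self.counts[op]
        if self.ph.update(reward):      # reward distribution changed: restart
            self.reset()
```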


Annals of Mathematics and Artificial Intelligence | 2010

Analyzing bandit-based adaptive operator selection mechanisms

Álvaro Fialho; Luis Da Costa; Marc Schoenauer; Michèle Sebag

Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) issue in Evolutionary Algorithms. Some recent proposals are based on the Multi-armed Bandit (MAB) paradigm: each operator is viewed as one arm of a MAB problem, and the rewards are mainly based on the fitness improvement brought by the corresponding operator to the individual it is applied to. However, the AOS problem is dynamic, whereas standard MAB algorithms are known to optimally solve the exploitation versus exploration trade-off in static settings. An original dynamic variant of the standard MAB Upper Confidence Bound algorithm is proposed here, using a sliding time window to compute both its exploitation and exploration terms. In order to perform sound comparisons between AOS algorithms, artificial scenarios have been proposed in the literature. They are extended here toward smoother transitions between different reward settings. The resulting original testbed also includes a real evolutionary algorithm that is applied to the well-known Royal Road problem. It is used here to perform a thorough analysis of the behavior of AOS algorithms, to assess their sensitivity with respect to their own hyper-parameters, and to propose a sound comparison of their performances.
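
A minimal sketch of the sliding-window idea described above, assuming scalar rewards and a fixed window covering the last W operator applications; the class name and the scaling constant C are illustrative, not the paper's exact formulation.

```python
import math
from collections import deque

class SlidingWindowUCB:
    """UCB-style operator selection where both the empirical quality and the
    exploration term are computed over the last W operator applications only."""
    def __init__(self, n_ops, window=50, c=2.0):
        self.n_ops, self.c = n_ops, c
        self.window = deque(maxlen=window)   # stores (operator, reward) pairs

    def select(self):
        counts = [0] * self.n_ops
        sums = [0.0] * self.n_ops
        for op, r in self.window:
            counts[op] += 1
            sums[op] += r
        # any operator absent from the window is tried immediately
        for op in range(self.n_ops):
            if counts[op] == 0:
                return op
        total = len(self.window)
        return max(range(self.n_ops),
                   key=lambda op: sums[op] / counts[op]
                   + self.c * math.sqrt(math.log(total) / counts[op]))

    def update(self, op, reward):
        self.window.append((op, reward))
```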


Parallel Problem Solving from Nature | 2008

Extreme Value Based Adaptive Operator Selection

Álvaro Fialho; Luis Da Costa; Marc Schoenauer; Michèle Sebag

Credit Assignment is an important ingredient of several proposals that have been made for Adaptive Operator Selection. Instead of the average fitness improvement of newborn offspring, this paper proposes to use some empirical order statistics of those improvements, arguing that rare but highly beneficial jumps matter as much or more than frequent but small improvements. An extreme value based Credit Assignment is thus proposed, rewarding each operator with the best fitness improvement observed in a sliding window for this operator. This mechanism, combined with existing Adaptive Operator Selection rules, is investigated in an EC-like setting. First results show that the proposed method allows both the Adaptive Pursuit and the Dynamic Multi-Armed Bandit selection rules to actually track the best operators along evolution.
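
A small sketch of an extreme-value credit assignment of this kind, assuming a maximization problem and a fixed-size window per operator; the class and parameter names are illustrative.

```python
from collections import defaultdict, deque

class ExtremeCreditAssignment:
    """Credit = best (maximum) fitness improvement seen in the last W
    applications of each operator, rather than the average improvement."""
    def __init__(self, window=50):
        self.improvements = defaultdict(lambda: deque(maxlen=window))

    def record(self, op, parent_fitness, offspring_fitness):
        # only improvements count; deteriorations contribute zero
        self.improvements[op].append(max(0.0, offspring_fitness - parent_fitness))

    def reward(self, op):
        hist = self.improvements[op]
        return max(hist) if hist else 0.0
```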


Congress on Evolutionary Computation | 2009

Extreme compass and Dynamic Multi-Armed Bandits for Adaptive Operator Selection

Jorge Maturana; Álvaro Fialho; Frédéric Saubion; Marc Schoenauer; Michèle Sebag

The goal of Adaptive Operator Selection is the on-line control of the choice of variation operators within Evolutionary Algorithms. The control process is based on two main components: the credit assignment, which defines the reward used to evaluate the quality of an operator after it has been applied, and the operator selection mechanism, which selects one operator based on the operators' qualities. Two previously developed Adaptive Operator Selection methods are combined here: Compass evaluates the performance of operators by considering not only the fitness improvements from parent to offspring, but also the way they modify the diversity of the population, and their execution time; Dynamic Multi-Armed Bandit proposes a selection strategy based on the well-known UCB algorithm, achieving a compromise between exploitation and exploration while nevertheless quickly adapting to changes. Tests with the proposed method, called ExCoDyMAB, are carried out on several hard instances of the Satisfiability problem (SAT). Results show the good synergistic effect of combining both approaches.
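
The Compass side of the combination can be pictured with the hypothetical sketch below: an operator's mean diversity variation and mean fitness variation are projected onto a search direction given by an angle theta, then discounted by execution time. The exact aggregation used by Compass may differ; this is only an illustration of the idea.

```python
import math

def compass_credit(delta_diversity, delta_fitness, exec_time, theta=math.pi / 4):
    """Compass-like credit: project (diversity change, fitness change) onto the
    direction defined by theta, then divide by execution time.  All names and
    the default angle are illustrative assumptions."""
    projection = (delta_diversity * math.cos(theta)
                  + delta_fitness * math.sin(theta))
    return projection / max(exec_time, 1e-12)
```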


Genetic and Evolutionary Computation Conference | 2010

Toward comparison-based adaptive operator selection

Álvaro Fialho; Marc Schoenauer; Michèle Sebag

Adaptive Operator Selection (AOS) turns the observed impact of variation operator applications into Operator Selection decisions through a Credit Assignment mechanism. However, most Credit Assignment schemes make direct use of the fitness gain between parent and offspring. A first issue is that an Operator Selection technique using such a Credit Assignment is likely to be highly dependent on the a priori unknown bounds of the fitness function. Additionally, these bounds are likely to change along evolution, as fitness gains tend to get smaller as convergence occurs. Furthermore, and maybe more importantly, a fitness-based Credit Assignment forbids any invariance under monotonic transformations of the fitness, which is a usual source of robustness for comparison-based Evolutionary Algorithms. In this context, this paper proposes two new Credit Assignment mechanisms, one inspired by the Area Under the Curve paradigm, and the other close to the Sum of Ranks. Using fitness improvement as raw reward, and directly coupled to a Multi-Armed Bandit Operator Selection rule, the resulting AOS obtains very good performance on both the OneMax problem and some artificial scenarios, while demonstrating robustness with respect to hyper-parameters and fitness transformations. Furthermore, using fitness ranks as raw reward results in a fully comparison-based AOS with reasonable performance.
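
As an illustration of a comparison-based scheme, here is a minimal Sum-of-Ranks style sketch: the recent fitness improvements of all operators are ranked jointly, and each operator is credited with a decayed sum over the ranks of its own entries, which depends only on comparisons. The decay and weighting are illustrative assumptions, not the paper's exact definitions.

```python
from collections import deque

class RankBasedCredit:
    """Comparison-based credit: rank the last W fitness improvements over all
    operators (best first) and give each operator a decayed sum of the ranks of
    its own entries -- invariant under monotonic transformations of fitness."""
    def __init__(self, window=50, decay=0.9):
        self.window = deque(maxlen=window)   # (operator, improvement) pairs
        self.decay = decay

    def record(self, op, improvement):
        self.window.append((op, improvement))

    def reward(self, op):
        # sort window entries from largest to smallest improvement
        ranked = sorted(self.window, key=lambda e: e[1], reverse=True)
        total = 0.0
        for rank, (o, _) in enumerate(ranked):
            if o == op:
                total += self.decay ** rank * (len(ranked) - rank)
        return total
```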


Learning and Intelligent Optimization | 2009

Dynamic Multi-Armed Bandits and Extreme Value-Based Rewards for Adaptive Operator Selection in Evolutionary Algorithms

Álvaro Fialho; Luis Da Costa; Marc Schoenauer; Michèle Sebag

The performance of many efficient algorithms critically depends on the tuning of their parameters, which on turn depends on the problem at hand. For example, the performance of Evolutionary Algorithms critically depends on the judicious setting of the operator rates. The Adaptive Operator Selection (AOS) heuristic that is proposed here rewards each operator based on the extreme value of the fitness improvement lately incurred by this operator, and uses a Multi-Armed Bandit (MAB) selection process based on those rewards to choose which operator to apply next. This Extreme-based Multi-Armed Bandit approach is experimentally validated against the Average-based MAB method, and is shown to outperform previously published methods, whether using a classical Average-based rewarding technique or the same Extreme-based mechanism. The validation test suite includes the easy One-Max problem and a family of hard problems known as Long k-paths.
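
For illustration, the pieces sketched earlier can be wired into a toy (1+1)-style loop on OneMax, reusing the DynamicMAB and ExtremeCreditAssignment classes from the sketches above; the bit-flip operators and all parameter values are hypothetical, not taken from the paper's experimental setup.

```python
# Assumes the DynamicMAB and ExtremeCreditAssignment sketches above are in scope.
import random

def onemax(bits):
    return sum(bits)

def flip_k(k):
    # mutation operator flipping exactly k random bits (illustrative)
    def op(bits):
        child = bits[:]
        for i in random.sample(range(len(child)), k):
            child[i] ^= 1
        return child
    return op

operators = [flip_k(1), flip_k(3), flip_k(5)]
bandit = DynamicMAB(n_ops=len(operators))
credit = ExtremeCreditAssignment(window=50)

parent = [random.randint(0, 1) for _ in range(100)]
for _ in range(10_000):
    op = bandit.select()
    child = operators[op](parent)
    credit.record(op, onemax(parent), onemax(child))
    bandit.update(op, credit.reward(op))
    if onemax(child) >= onemax(parent):
        parent = child
```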


Genetic and Evolutionary Computation Conference | 2009

Analysis of adaptive operator selection techniques on the royal road and long k-path problems

Álvaro Fialho; Marc Schoenauer; Michèle Sebag

One of the choices that most affect the performance of Evolutionary Algorithms is the selection of the variation operators that are efficient for solving the problem at hand. This work presents an empirical analysis of different Adaptive Operator Selection (AOS) methods, i.e., techniques that automatically select the operator to be applied among the available ones while searching for the solution. Four previously published operator selection rules are combined with four different credit assignment mechanisms. These 16 AOS combinations are analyzed and compared on two well-known benchmark problems in Evolutionary Computation, the Royal Road and the Long K-Path.


Genetic and Evolutionary Computation Conference | 2011

Optimizing architectural and structural aspects of buildings towards higher energy efficiency

Álvaro Fialho; Youssef Hamadi; Marc Schoenauer

In this ongoing work, we aim to contribute to reducing energy consumption by proposing tools that automatically define some aspects of the architectural and structural design of buildings. Our framework starts with a building design and automatically optimizes it, providing the architect with many variations that minimize, in different ways, both energy consumption and construction costs. The optimization stage is done by combining an energy consumption simulation program, EnergyPlus, with a state-of-the-art multi-objective evolutionary algorithm, HypE. The latter explores the design search space, automatically generating new feasible design solutions, which are then evaluated by the energy simulation software. Preliminary results are presented, in which the proposed framework is used to optimize the orientation angle of a given commercial building and the materials used for the thermal insulation of its walls.


Genetic and Evolutionary Computation Conference | 2010

Fitness-AUC bandit adaptive strategy selection vs. the probability matching one within differential evolution: an empirical comparison on the BBOB-2010 noiseless testbed

Álvaro Fialho; Marc Schoenauer; Michèle Sebag

The choice of which of the available strategies should be used within the Differential Evolution algorithm for a given problem is not trivial: it is problem-dependent and strongly affects the algorithm's performance. This decision can be made in an autonomous way through the Adaptive Strategy Selection paradigm, which continuously selects the strategy to be used for the next offspring generation, based on the performance achieved by each of the available strategies during the current optimization process, i.e., while solving the problem. In this paper, we use the BBOB-2010 noiseless benchmarking suite to better empirically validate a comparison-based technique recently proposed for this purpose, the Fitness-based Area-Under-Curve Bandit [4], referred to as F-AUC-Bandit. It is compared with another recently proposed approach that uses the Probability Matching technique based on relative fitness improvements, referred to as PM-AdapSS-DE [7].
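
For reference, a generic Probability Matching selector looks roughly like the sketch below: each strategy keeps an exponentially averaged quality estimate, and selection probabilities are proportional to quality with a guaranteed minimum probability p_min per strategy. This is a textbook-style sketch, not the exact PM-AdapSS-DE implementation; all parameter values are illustrative.

```python
import random

class ProbabilityMatching:
    """Probability Matching strategy selection with a minimum selection
    probability p_min per strategy and exponentially averaged quality."""
    def __init__(self, n_strategies, p_min=0.05, alpha=0.3):
        self.n, self.p_min, self.alpha = n_strategies, p_min, alpha
        self.quality = [1.0] * n_strategies

    def probabilities(self):
        total = sum(self.quality)
        return [self.p_min + (1 - self.n * self.p_min) * q / total
                for q in self.quality]

    def select(self):
        return random.choices(range(self.n), weights=self.probabilities())[0]

    def update(self, strategy, reward):
        # exponential recency-weighted average of the observed rewards
        self.quality[strategy] += self.alpha * (reward - self.quality[strategy])
```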


Autonomous Search | 2011

Adaptive Operator Selection and Management in Evolutionary Algorithms

Jorge Maturana; Álvaro Fialho; Frédéric Saubion; Marc Schoenauer; Frédéric Lardeux; Michèle Sebag

One of the settings that most affect the performance of Evolutionary Algorithms is the selection of the variation operators that are efficient for solving the problem at hand. The control of these operators can be handled in an autonomous way, while solving the problem, at two different levels: at the structural level, when deciding which operators should be part of the algorithmic framework, referred to as Adaptive Operator Management (AOM); and at the behavioral level, when selecting which of the available operators should be applied at a given time instant, referred to as Adaptive Operator Selection (AOS). Both controllers guide their choices based on common knowledge about the recent performance of each operator. In this chapter, we present methods for these two complementary aspects of operator control, the ExCoDyMAB AOS and the Blacksmith AOM, providing case studies that analyze them and highlight the major issues to be considered for the design of more autonomous Evolutionary Algorithms.

Collaboration


Dive into Marc Schoenauer's collaborations.

Top Co-Authors

Jorge Maturana

Austral University of Chile
