Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Michèle Sebag is active.

Publication


Featured research published by Michèle Sebag.


Communications of the ACM | 2012

The grand challenge of computer Go: Monte Carlo tree search and extensions

Sylvain Gelly; Levente Kocsis; Marc Schoenauer; Michèle Sebag; David Silver; Csaba Szepesvári; Olivier Teytaud

The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.
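
As a concrete illustration of the paradigm, below is a minimal UCT-style Monte-Carlo tree search sketch. The `game` interface (legal_moves, play, is_terminal, result) and the omission of player-to-move bookkeeping are simplifying assumptions; this is not the Go programs discussed in the article.

```python
# Illustrative UCT-style Monte-Carlo tree search sketch (not the Go programs from the paper).
# `game` is a hypothetical interface: legal_moves(state), play(state, move),
# is_terminal(state), result(state).
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []          # expanded child nodes
        self.untried = None         # moves not yet expanded
        self.visits, self.value = 0, 0.0

def uct_child(node, c=1.4):
    # Pick the child maximizing the UCB1 score: exploitation + exploration.
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(game, root_state, iterations=1000):
    root = Node(root_state)
    root.untried = list(game.legal_moves(root_state))
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = uct_child(node)
        # 2. Expansion: turn one untried move into a new child.
        if node.untried:
            move = node.untried.pop()
            child = Node(game.play(node.state, move), parent=node, move=move)
            child.untried = list(game.legal_moves(child.state))
            node.children.append(child)
            node = child
        # 3. Simulation: random playout to a terminal state.
        state = node.state
        while not game.is_terminal(state):
            state = game.play(state, random.choice(game.legal_moves(state)))
        reward = game.result(state)   # e.g. 1 for a win, 0 for a loss
        # 4. Backpropagation: update statistics along the visited path.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```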


Inductive Logic Programming | 1997

Distance Induction in First Order Logic

Michèle Sebag

A distance on the problem domain allows one to tackle some typical goals of machine learning, e.g. classification or conceptual clustering, via robust data analysis algorithms (e.g. k-nearest neighbors or k-means).
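
As a toy illustration of the point, the sketch below runs k-nearest-neighbour classification on top of an arbitrary user-supplied distance. The Hamming-style distance and the data are placeholders, not the first-order distance constructed in the paper.

```python
# k-nearest-neighbour classification on top of an arbitrary distance function.
# The distance below is a toy stand-in, not the first-order-logic distance of the paper.
from collections import Counter

def knn_classify(query, examples, distance, k=3):
    """examples: list of (item, label); distance: callable on two items."""
    neighbours = sorted(examples, key=lambda ex: distance(query, ex[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Toy usage with a Hamming-style distance on fixed-length tuples.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

data = [((1, 0, 1), "pos"), ((1, 1, 1), "pos"), ((0, 0, 0), "neg"), ((0, 1, 0), "neg")]
print(knn_classify((1, 0, 0), data, hamming, k=3))
```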


International Conference on Data Mining | 2004

Dryade: a new approach for discovering closed frequent trees in heterogeneous tree databases

Alexandre Termier; Marie-Christine Rousset; Michèle Sebag

In this paper we present a novel algorithm for discovering tree patterns in a tree database. This algorithm uses a relaxed tree-inclusion definition, which makes the problem more complex (checking tree inclusion is NP-complete) but allows mining highly heterogeneous databases. To obtain good performance, our DRYADE algorithm discovers only closed frequent tree patterns.
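
One ingredient, restricting the output to closed patterns, can be sketched as below. The pattern representation and the `is_subpattern` test are placeholders for DRYADE's actual tree machinery and relaxed inclusion test.

```python
# Keep only closed patterns: a pattern is closed if no strictly larger pattern
# has the same support. `is_subpattern(p, q)` (does q contain p?) is a placeholder
# for the relaxed, NP-complete tree-inclusion test used by DRYADE.
def closed_patterns(patterns, support, is_subpattern):
    """patterns: list of frequent patterns; support: dict pattern -> count."""
    closed = []
    for p in patterns:
        dominated = any(
            q is not p and support[q] == support[p] and is_subpattern(p, q)
            for q in patterns
        )
        if not dominated:
            closed.append(p)
    return closed
```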


IEEE International Symposium on Workload Characterization | 2012

BenchNN: On the broad potential application scope of hardware neural network accelerators

Tianshi Chen; Yunji Chen; Marc Duranton; Qi Guo; Atif Hashmi; Mikko H. Lipasti; Andrew Nere; Shi Qiu; Michèle Sebag; Olivier Temam

Recent technology trends have indicated that, although device sizes will continue to scale as they have in the past, supply voltage scaling has ended. As a result, future chips can no longer rely on simply increasing the operational core count to improve performance without surpassing a reasonable power budget. Alternatively, allocating die area towards accelerators targeting an application, or an application domain, appears quite promising, and this paper makes an argument for a neural network hardware accelerator. After being hyped in the 1990s and then fading away for almost two decades, hardware neural networks are seeing a surge of interest because of their energy and fault-tolerance properties. At the same time, the emergence of high-performance applications like Recognition, Mining, and Synthesis (RMS) suggests that the potential application scope of a hardware neural network accelerator would be broad. In this paper, we highlight that a hardware neural network accelerator is indeed compatible with many of the emerging high-performance workloads currently accepted as benchmarks for high-performance micro-architectures. For that purpose, we develop and evaluate software neural network implementations of 5 (out of 12) RMS applications from the PARSEC Benchmark Suite. Our results show that neural network implementations can achieve competitive results, with respect to application-specific quality metrics, on these 5 RMS applications.
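
For illustration only, a software neural network kernel of the kind evaluated here can be as small as the multi-layer perceptron forward pass below; the layer sizes and random weights are arbitrary and not taken from the paper.

```python
# Minimal multi-layer perceptron forward pass in NumPy: the kind of software
# neural network kernel that could stand in for an application-specific routine.
# Layer sizes and random weights are arbitrary illustrations, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
sizes = [64, 32, 8]                       # input, hidden, output dimensions
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)    # hidden layers with ReLU
    return x @ weights[-1] + biases[-1]   # linear output layer

print(forward(rng.standard_normal(64)).shape)   # -> (8,)
```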


European Conference on Machine Learning | 2008

Data Streaming with Affinity Propagation

Xiangliang Zhang; Cyril Furtlehner; Michèle Sebag

This paper proposes StrAP (Streaming AP), extending Affinity Propagation (AP) to data streaming. AP, a new clustering algorithm, extracts the data items, or exemplars, that best represent the dataset using a message-passing method. StrAP is built in several steps. The first (Weighted AP) extends AP to weighted items with no loss of generality. The second (Hierarchical WAP) reduces the quadratic complexity of AP by applying AP on data subsets and then applying Weighted AP on the exemplars extracted from all subsets. Finally, StrAP extends Hierarchical WAP to deal with changes in the data distribution. Experiments on artificial datasets, on the Intrusion Detection benchmark (KDD99), and on a real-world problem, clustering the stream of jobs submitted to the EGEE grid system, provide a comparative validation of the approach.
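
A rough sketch of the hierarchical step, with scikit-learn's AffinityPropagation standing in for the Weighted AP variant of the paper, might look as follows; the subset size is an arbitrary assumption.

```python
# Hierarchical clustering sketch: run Affinity Propagation on data subsets, then
# cluster the collected exemplars. scikit-learn's AffinityPropagation stands in
# for the Weighted AP of the paper; the subset size is an arbitrary assumption.
import numpy as np
from sklearn.cluster import AffinityPropagation

def hierarchical_ap(X, subset_size=500, random_state=0):
    rng = np.random.default_rng(random_state)
    X = X[rng.permutation(len(X))]
    exemplars = []
    for start in range(0, len(X), subset_size):
        chunk = X[start:start + subset_size]
        ap = AffinityPropagation(random_state=random_state).fit(chunk)
        exemplars.append(chunk[ap.cluster_centers_indices_])
    exemplars = np.vstack(exemplars)
    # Second pass: cluster the exemplars extracted from all subsets.
    top = AffinityPropagation(random_state=random_state).fit(exemplars)
    return exemplars[top.cluster_centers_indices_]
```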


Machine Learning | 2004

Fast Theta-Subsumption with Constraint Satisfaction Algorithms

Jérôme Maloberti; Michèle Sebag

Relational learning and Inductive Logic Programming (ILP) commonly use as their covering test the θ-subsumption test defined by Plotkin. Based on a reformulation of θ-subsumption as a binary constraint satisfaction problem, this paper describes a novel θ-subsumption algorithm named Django, which combines well-known CSP procedures and θ-subsumption-specific data structures. Django is validated using the stochastic complexity framework developed for CSPs and imported into ILP by Giordana and Saitta. Principled and extensive experiments within this framework show that Django improves on earlier θ-subsumption algorithms by several orders of magnitude, and that different procedures are better in different regions of the stochastic complexity landscape. These experiments allow for building a control layer over Django, termed Meta-Django, which determines the best procedures to use depending on the order parameters of the θ-subsumption problem instance. The performance gains and good scalability of Django and Meta-Django are finally demonstrated on a real-world ILP task (emulating the search for frequent clauses in the mutagenesis domain), though the smaller size of the problems results in smaller gain factors (ranging from 2.5 to 30).
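
To make the covering test concrete, here is a naive backtracking θ-subsumption check: clause C θ-subsumes clause D if some substitution maps every literal of C onto a literal of D. This brute-force version only illustrates the problem that Django attacks with CSP techniques; the literal encoding is an assumption.

```python
# Naive backtracking theta-subsumption test: clause C subsumes clause D iff some
# substitution theta maps every literal of C onto a literal of D. Literals are
# tuples ('pred', arg, ...); arguments starting with '?' are variables. This
# brute-force check illustrates the problem, not Django's CSP-based algorithm.
def subsumes(C, D, theta=None):
    theta = dict(theta or {})
    if not C:
        return True
    lit, rest = C[0], C[1:]
    for d_lit in D:
        if d_lit[0] != lit[0] or len(d_lit) != len(lit):
            continue
        trial = dict(theta)
        ok = True
        for a, b in zip(lit[1:], d_lit[1:]):
            if a.startswith('?'):                 # variable: bind or check binding
                if trial.setdefault(a, b) != b:
                    ok = False
                    break
            elif a != b:                          # constant must match exactly
                ok = False
                break
        if ok and subsumes(rest, D, trial):
            return True
    return False

# Example: p(?X, ?Y), q(?Y) subsumes p(a, b), q(b), r(c).
C = [('p', '?X', '?Y'), ('q', '?Y')]
D = [('p', 'a', 'b'), ('q', 'b'), ('r', 'c')]
print(subsumes(C, D))   # True
```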


Parallel Problem Solving from Nature | 1994

Controlling Crossover through Inductive Learning

Michèle Sebag; Marc Schoenauer

Crossover can achieve the fast combination of high-performing building blocks; as a counterpart, it can also break a newly discovered building block. We propose to use inductive learning to control such disruptive effects of crossover. The idea is to periodically gather examples of crossovers, labelled as "good" or "bad" according to their effects on the current population. From these examples, inductive learning builds rules characterizing crossover quality. This ruleset then enables control of further evolution: crossovers classified as "bad" by the ruleset are rejected. Experiments on the Royal Road problem are discussed.
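
A simplified sketch of the control loop is given below, with a scikit-learn decision tree standing in for the rule learner of the paper; the feature encoding of a crossover and the fitness-based labelling rule are illustrative assumptions.

```python
# Crossover control sketch: label past crossovers as good/bad from their effect
# on fitness, learn a classifier, and veto crossovers predicted to be bad.
# The decision tree stands in for the rule learner of the paper; the feature
# encoding (concatenated parent bit-strings) is an illustrative assumption.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def features(parent1, parent2):
    return np.concatenate([parent1, parent2])

def train_crossover_filter(history):
    """history: list of (parent1, parent2, child_fitness, best_parent_fitness)."""
    X = np.array([features(p1, p2) for p1, p2, _, _ in history])
    y = np.array([int(cf >= pf) for _, _, cf, pf in history])   # 1 = "good"
    return DecisionTreeClassifier(max_depth=5).fit(X, y)

def controlled_crossover(parent1, parent2, crossover, clf):
    # Refuse crossovers the learned model classifies as "bad".
    if clf.predict(features(parent1, parent2).reshape(1, -1))[0] == 0:
        return None                     # caller falls back, e.g. to mutation only
    return crossover(parent1, parent2)
```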


Genetic and Evolutionary Computation Conference | 2010

A mono surrogate for multiobjective optimization

Ilya Loshchilov; Marc Schoenauer; Michèle Sebag

Most surrogate approaches to multi-objective optimization build one surrogate model per objective. These surrogates can be used inside a classical Evolutionary Multiobjective Optimization Algorithm (EMOA) in lieu of the actual objectives, without modifying the underlying EMOA, or to filter out points that the models predict to be uninteresting. In contrast, the proposed approach aims at building a single global surrogate model, defined on the decision space, that tightly characterizes the current Pareto set and the dominated region, in order to speed up the progress of evolution toward the true Pareto set. This surrogate model is specified by combining a One-class Support Vector Machine (SVM) to characterize the dominated points with a Regression SVM to clamp the Pareto front onto a single value. The resulting surrogate model is then used within state-of-the-art EMOAs to pre-screen the individuals generated by standard variation operators. Empirical validation on classical MOO benchmark problems shows a significant reduction in the number of evaluations of the actual objective functions.
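
A much-simplified pre-screening sketch follows: a single surrogate scores candidate offspring and only the most promising fraction is evaluated exactly. The plain Support Vector Regressor and the 0/1 dominance labels stand in for the paper's combination of a One-class SVM and a Regression SVM; the labelling scheme and the screening ratio are assumptions.

```python
# Surrogate pre-screening sketch: train a single model on previously evaluated
# points (label 0 for non-dominated, 1 for dominated), then keep only the
# offspring the surrogate predicts to be closest to the Pareto set.
# The SVR stands in for the paper's One-class SVM + Regression SVM combination;
# the labelling scheme and screening ratio are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

def fit_surrogate(X_evaluated, dominated_flags):
    """X_evaluated: (n, d) decision vectors; dominated_flags: 0/1 labels."""
    return SVR(kernel="rbf").fit(X_evaluated, dominated_flags)

def prescreen(surrogate, offspring, keep_ratio=0.3):
    """offspring: (m, d) array of candidate decision vectors."""
    scores = surrogate.predict(offspring)           # lower = closer to Pareto set
    n_keep = max(1, int(keep_ratio * len(offspring)))
    return offspring[np.argsort(scores)[:n_keep]]   # evaluate only these exactly
```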


Genetic and Evolutionary Computation Conference | 2013

Bi-population CMA-ES algorithms with surrogate models and line searches

Ilya Loshchilov; Marc Schoenauer; Michèle Sebag

In this paper, three extensions of the BI-population Covariance Matrix Adaptation Evolution Strategy with weighted active covariance matrix update (BIPOP-aCMA-ES) are investigated. First, to address expensive optimization, we benchmark a recently proposed extension of the self-adaptive surrogate-assisted CMA-ES which benefits from more intensive surrogate model exploitation (BIPOP-saACM-k). Second, to address separable optimization, we propose a hybrid of BIPOP-aCMA-ES and the STEP algorithm with coordinate-wise line search (BIPOP-aCMA-STEP). Third, we propose HCMA, a hybrid of BIPOP-saACM-k, STEP and NEWUOA that benefits from both surrogate models and line searches. All algorithms were tested on the noiseless BBOB testbed using restarts until a total budget of 10^6·n function evaluations was reached, where n is the dimension of the search space. The comparison shows that BIPOP-saACM-k outperforms its predecessor BIPOP-saACM by up to a factor of 2 on ill-conditioned problems, while BIPOP-aCMA-STEP outperforms the original BIPOP-based algorithms on separable functions. The hybrid HCMA algorithm demonstrates the best overall performance compared to the best algorithms of BBOB-2009, BBOB-2010 and BBOB-2012 when running for more than 100·n function evaluations.
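
The restart-under-budget discipline shared by these variants can be sketched as below. scipy's Nelder-Mead and Powell methods are generic placeholders for BIPOP-saACM-k and STEP, and the budget split and restart policy are illustrative assumptions rather than the paper's setup.

```python
# Restart sketch in the spirit of the hybrid: alternate two local optimizers from
# fresh random starting points until the evaluation budget 10^6 * n is spent.
# scipy's Nelder-Mead and Powell are generic placeholders, not BIPOP-saACM-k or
# STEP; the per-run evaluation cap and restart policy are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def hybrid_restarts(f, n, budget_per_dim=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    budget, spent = budget_per_dim * n, 0
    best_x, best_f = None, np.inf
    methods = ["Nelder-Mead", "Powell"]          # placeholders for CMA-ES / line search
    i = 0
    while spent < budget:
        x0 = rng.uniform(-5, 5, size=n)          # fresh random restart point
        res = minimize(f, x0, method=methods[i % 2],
                       options={"maxfev": min(10_000, budget - spent)})
        spent += res.nfev
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
        i += 1
    return best_x, best_f

# Toy usage on the sphere function in dimension 5 (tiny budget for illustration).
print(hybrid_restarts(lambda x: float(np.sum(x**2)), n=5, budget_per_dim=2_000)[1])
```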


Knowledge Discovery and Data Mining | 2009

Toward autonomic grids: analyzing the job flow with affinity streaming

Xiangliang Zhang; Cyril Furtlehner; Julien Perez; Cécile Germain-Renaud; Michèle Sebag

The Affinity Propagation (AP) clustering algorithm proposed by Frey and Dueck (2007) provides an understandable, nearly optimal summary of a dataset, albeit with quadratic computational complexity. This paper, motivated by Autonomic Computing, extends AP to the data streaming framework. First, a hierarchical strategy is used to reduce the complexity to O(N^(1+ε)); the incurred distortion loss is analyzed in relation to the dimension of the data items. Second, a coupling with a change detection test is used to cope with non-stationary data distributions and rebuild the model as needed. The presented approach, StrAP, is applied to the stream of jobs submitted to the EGEE Grid, providing an understandable description of the job flow and enabling the system administrator to spot some sources of failures online.
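
The change-detection coupling can be illustrated with a Page-Hinkley style test, sketched below as a generic stand-in; the paper's exact test and threshold settings are not reproduced here.

```python
# Page-Hinkley style change detection: raise an alarm when the cumulative
# deviation of the monitored quantity (e.g. clustering distortion of new items)
# drifts too far above its historical minimum. Used here as a generic stand-in
# for the change test coupled with StrAP; delta and lambda_ are assumptions.
class PageHinkley:
    def __init__(self, delta=0.01, lambda_=50.0):
        self.delta, self.lambda_ = delta, lambda_
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x):
        """Feed one observation; return True if a distribution change is flagged."""
        self.n += 1
        self.mean += (x - self.mean) / self.n        # running mean of observations
        self.cum += x - self.mean - self.delta       # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.lambda_

# Toy usage: the detector fires shortly after the stream's level jumps.
detector = PageHinkley(lambda_=5.0)
stream = [1.0] * 50 + [3.0] * 20
print([i for i, x in enumerate(stream) if detector.update(x)][:1])   # first alarm index
```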

Collaboration


Dive into Michèle Sebag's collaborations.

Top Co-Authors

Olivier Teytaud

National University of Tainan

Xiangliang Zhang

Centre national de la recherche scientifique

Cédric Hartland

Centre national de la recherche scientifique
