Emilie Morvant
Aix-Marseille University
Publications
Featured research published by Emilie Morvant.
Pattern Recognition Letters | 2015
Emilie Morvant
A framework for learning a PAC-Bayes majority vote for domain adaptation is proposed. We generalize the C-bound (for the target vote's error) to domain adaptation. We propose an original self-labeling procedure based on the perturbed variation. We design a hyperparameter validation process suitable for our approach. Experiments are promising and show the usefulness of our self-labeling procedure. In machine learning, the domain adaptation problem arises when the test (target) and training (source) data are generated from different distributions. A key applied issue is thus the design of algorithms able to generalize on a new distribution for which we have no label information. We focus on learning classification models defined as a weighted majority vote over a set of real-valued functions. In this context, Germain et al. [1] have shown that a measure of disagreement between these functions is crucial to control. The core of this measure is a theoretical bound, the C-bound [2], which involves the disagreement and leads to a well-performing majority vote learning algorithm in the usual non-adaptive supervised setting: MinCq. In this work, we propose a framework to extend MinCq to a domain adaptation scenario. This procedure takes advantage of the recent perturbed variation divergence between distributions proposed by Harel and Mannor [3]. Justified by a theoretical bound on the target risk of the vote, we provide MinCq with a target sample labeled thanks to a perturbed variation-based self-labeling focused on the regions where the source and target marginals appear similar. We also study the influence of our self-labeling, from which we deduce an original process for tuning the hyperparameters. Finally, our framework, called PV-MinCq, shows very promising results on a rotation-and-translation synthetic problem.
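The weighted majority vote over real-valued functions that this line of work optimizes can be sketched in a few lines. This is a minimal illustration of the vote itself, not the PV-MinCq algorithm; the threshold voters, their weights, and the test points below are placeholder assumptions:

```python
def majority_vote(voters, weights, x):
    """Weighted majority vote: sign of the weighted sum of real-valued voter outputs."""
    s = sum(w * h(x) for w, h in zip(weights, voters))
    return 1 if s >= 0 else -1

# Toy 1-d problem with three placeholder threshold voters in [-1, 1].
sign = lambda t: 1 if t >= 0 else -1
voters = [
    lambda x: sign(x - 0.2),
    lambda x: sign(x + 0.1),
    lambda x: -sign(x - 0.8),
]
weights = [0.4, 0.35, 0.25]  # a posterior distribution over voters (sums to 1)

print([majority_vote(voters, weights, x) for x in (-0.5, 0.5, 0.9)])
# -> [-1, 1, 1]
```

Algorithms such as MinCq choose the weights by minimizing a bound (the C-bound) that involves both the voters' individual errors and their disagreement; the sketch above only shows the resulting classifier once weights are fixed.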
international conference on data mining | 2011
Emilie Morvant; Amaury Habrard; Stéphane Ayache
We address the problem of domain adaptation for binary classification, which arises when the distributions generating the source learning data and the target test data are somewhat different. We consider the challenging case where no labeled target data is available. From a theoretical standpoint, a classifier has better generalization guarantees when the two domain marginal distributions are close. We study a new direction based on a recent framework of Balcan et al. that allows learning linear classifiers in an explicit projection space based on similarity functions that may be neither symmetric nor positive semi-definite. We propose a general method for learning a good classifier on target data with generalization guarantees, and we improve its efficiency with an iterative procedure that reweights the similarity function (remaining compatible with Balcan et al.'s framework) to bring the two distributions closer in a new projection space. Hyperparameters and the reweighting quality are controlled by a reverse validation procedure. Our approach is based on a linear programming formulation and shows good adaptation performance with very sparse models. We evaluate it on a synthetic problem and on a real image annotation task.
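The explicit projection space in this framework maps each example to its vector of similarities to a set of landmark points, and a linear classifier is then learned in that space. A minimal sketch of the mapping follows; the similarity function, landmarks, and reweighting values are toy assumptions, not the paper's learned quantities:

```python
import math

def similarity(x, y, gamma=1.0):
    # A toy similarity: deliberately not symmetric (and not PSD), which the
    # Balcan et al. framework permits. Purely an assumption for illustration.
    return math.exp(-gamma * abs(x - y)) * (1.0 if x >= y else 0.9)

def project(x, landmarks, weights=None):
    """Map x to phi(x) = (w_j * K(x, l_j))_j over landmark points l_j.

    Reweighting the similarity (the w_j) is the lever the iterative procedure
    uses to bring the source and target distributions closer in this space.
    """
    weights = weights or [1.0] * len(landmarks)
    return [w * similarity(x, l) for w, l in zip(weights, landmarks)]

landmarks = [0.0, 1.0, 2.0]
phi = project(1.5, landmarks)
print([round(v, 3) for v in phi])
# -> [0.223, 0.607, 0.546]
```

A sparse linear classifier over such phi(x) vectors, obtained here via a linear program, is what the method ultimately learns; the sketch only shows the projection step.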
Machine Learning | 2014
Aurélien Bellet; Amaury Habrard; Emilie Morvant; Marc Sebban
Weighted majority votes allow one to combine the output of several classifiers or voters. MinCq is a recent algorithm for optimizing the weight of each voter based on the minimization of a theoretical bound over the risk of the vote, with elegant PAC-Bayesian generalization guarantees. However, while it has demonstrated good performance when combining weak classifiers, MinCq cannot make use of the useful a priori knowledge that one may have when using a mixture of weak and strong voters. In this paper, we propose P-MinCq, an extension of MinCq that can incorporate such knowledge in the form of a constraint over the distribution of the weights, along with general proofs of convergence that stand in the sample compression setting for data-dependent voters. The approach is applied to a vote of k-NN classifiers with a specific modeling of the voters' performance; P-MinCq significantly outperforms the classic k-NN.
european conference on machine learning | 2017
Anil Goyal; Emilie Morvant; Pascal Germain; Massih-Reza Amini
Neurocomputing | 2017
François Laviolette; Emilie Morvant; Liva Ralaivola; Jean-Francis Roy
SIMBAD'11 Proceedings of the First international conference on Similarity-based pattern recognition | 2011
Emilie Morvant; Amaury Habrard; Stéphane Ayache
international conference on machine learning | 2013
Pascal Germain; Amaury Habrard; François Laviolette; Emilie Morvant
international conference on machine learning | 2012
Emilie Morvant; Sokol Koço; Liva Ralaivola
arXiv: Machine Learning | 2014
Emilie Morvant; Amaury Habrard; Stéphane Ayache
Knowledge and Information Systems | 2012
Emilie Morvant; Amaury Habrard; Stéphane Ayache