Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Liva Ralaivola is active.

Publication


Featured research published by Liva Ralaivola.


International Conference on Artificial Neural Networks | 2009

Learning SVMs from Sloppily Labeled Data

Guillaume Stempfel; Liva Ralaivola

This paper proposes a modelling of Support Vector Machine (SVM) learning to address the problem of learning with sloppy labels. In binary classification, learning with sloppy labels is the situation where a learner is provided with labelled data in which the observed label of each example is a possibly noisy (flipped) version of its true label, and where the probability of flipping a label y to −y only depends on y. The noise probability is therefore constant and uniform within each class; learning from positive and unlabeled data is, for instance, a motivating example for this model. In order to learn with sloppy labels, we propose SloppySvm, an SVM algorithm that minimizes a tailored nonconvex functional that is shown to be a uniform estimate of the noise-free SVM functional. Several experiments validate the soundness of our approach.
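As an illustration of the kind of correction such a noise model calls for, here is a minimal sketch of an unbiased noise-corrected hinge loss under class-conditional label flips. This is in the style of later noise-corrected-loss work, not the paper's exact SloppySvm functional, and it assumes the flip probabilities `rho_pos` and `rho_neg` are known:

```python
# Sketch (NOT the paper's SloppySvm functional): an unbiased estimate
# of the clean hinge loss when the observed label may have been flipped
# with a class-dependent probability. rho_pos = P(flip | true label +1),
# rho_neg = P(flip | true label -1); both are assumed known here.

def hinge(t, y):
    """Plain hinge loss for margin score t and label y in {-1, +1}."""
    return max(0.0, 1.0 - y * t)

def noise_corrected_hinge(t, y, rho_pos, rho_neg):
    """Loss on the OBSERVED label y whose expectation over random
    flips equals the hinge loss on the TRUE label."""
    rho_y = rho_pos if y == +1 else rho_neg      # flip prob. of class y
    rho_not_y = rho_neg if y == +1 else rho_pos  # flip prob. of class -y
    num = (1.0 - rho_not_y) * hinge(t, y) - rho_y * hinge(t, -y)
    return num / (1.0 - rho_pos - rho_neg)
```

With no noise the corrected loss reduces to the plain hinge loss, and its expectation over random label flips recovers the clean loss, which is the property that makes learning from sloppy labels possible.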


International Conference on Machine Learning | 2009

Grammatical inference as a principal component analysis problem

Raphaël Bailly; François Denis; Liva Ralaivola

One of the main problems in probabilistic grammatical inference consists in inferring a stochastic language, i.e. a probability distribution, in some class of probabilistic models, from a sample of strings independently drawn according to a fixed unknown target distribution p. Here, we consider the class of rational stochastic languages, composed of the stochastic languages that can be computed by multiplicity automata, which can be viewed as a generalization of probabilistic automata. Rational stochastic languages p have a useful algebraic characterization: all the mappings v → p(uv), for u ∈ Σ*, lie in a finite-dimensional vector subspace V_p of the vector space ℝ⟨⟨Σ⟩⟩ composed of all real-valued functions defined over Σ*. Hence, a first step in the grammatical inference process can consist in identifying the subspace V_p. In this paper, we study the possibility of using Principal Component Analysis to achieve this task. We provide an inference algorithm which computes an estimate of this subspace and then builds a multiplicity automaton which computes an estimate of the target distribution. We prove some theoretical properties of this algorithm and provide results from numerical simulations that confirm the relevance of our approach.
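A toy sketch of the subspace-identification step: build an empirical Hankel-like matrix H[u, v] = p̂(uv) from the sample, then use the singular spectrum (the computation underlying PCA) to estimate the dimension of the subspace spanned by the mappings v → p(uv). The helper names, the tolerance, and the tiny prefix/suffix sets are illustrative choices, not the paper's:

```python
# Illustrative sketch: estimate dim(V_p) from an empirical Hankel matrix.
import numpy as np
from collections import Counter

def hankel_matrix(sample, prefixes, suffixes):
    """H[i, j] = empirical probability of the string prefixes[i] + suffixes[j]."""
    counts = Counter(sample)
    n = len(sample)
    H = np.zeros((len(prefixes), len(suffixes)))
    for i, u in enumerate(prefixes):
        for j, v in enumerate(suffixes):
            H[i, j] = counts[u + v] / n
    return H

def estimated_rank(H, tol=1e-2):
    """Number of significant singular values: an estimate of dim(V_p)."""
    s = np.linalg.svd(H, compute_uv=False)
    return int((s > tol * s[0]).sum())
```

In the full algorithm, the principal singular subspace of H (not just its rank) is then used to build a multiplicity automaton approximating the target distribution.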


International Conference on Machine Learning | 2006

CN = CPCN

Liva Ralaivola; François Denis; Christophe Magnan

We address the issue of the learnability of concept classes under three classification noise models in the probably approximately correct framework. After introducing the Class-Conditional Classification Noise (CCCN) model, we investigate the problem of the learnability of concept classes under this particular setting and we show that concept classes that are learnable under the well-known uniform classification noise (CN) setting are also CCCN-learnable, which gives CN = CCCN. We then use this result to prove the equality between the set of concept classes that are CN-learnable and the set of concept classes that are learnable in the Constant Partition Classification Noise (CPCN) setting, or, in other words, we show that CN = CPCN.


PLOS ONE | 2014

Graph-Based Inter-Subject Pattern Analysis of fMRI Data

Sylvain Takerkart; Guillaume Auzias; Bertrand Thirion; Liva Ralaivola

In brain imaging, solving learning problems in multi-subject settings is difficult because of the differences that exist across individuals. Here we introduce a novel classification framework based on group-invariant graphical representations that overcomes the inter-subject variability present in functional magnetic resonance imaging (fMRI) data and allows multivariate pattern analysis across subjects. Our contribution is twofold: first, we propose an unsupervised representation learning scheme that encodes all relevant characteristics of distributed fMRI patterns into attributed graphs; second, we introduce a custom-designed graph kernel that exploits all these characteristics and makes it possible to perform supervised learning (here, classification) directly in graph space. The well-foundedness of our technique and the robustness of its performance to the parameter setting are demonstrated through inter-subject classification experiments conducted on both artificial data and a real fMRI experiment aimed at characterizing local cortical representations. Our results show that our framework produces accurate inter-subject predictions and that it outperforms a wide range of state-of-the-art vector- and parcel-based classification methods. Moreover, the genericity of our method makes it easily adaptable to a wide range of potential applications. The dataset used in this study and an implementation of our framework are available at http://dx.doi.org/10.6084/m9.figshare.1086317.
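As a minimal illustration of learning with kernels defined directly on attributed graphs, here is a simple sum-over-node-pairs RBF kernel. This is not the paper's custom-designed kernel (which also encodes geometric and structural information); it only shows the basic construction, where summing a positive-definite node kernel over all node pairs yields a valid graph kernel:

```python
import math

def node_rbf(a, b, sigma):
    """RBF kernel between two node-attribute vectors (tuples of floats)."""
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def graph_kernel(G1, G2, sigma=1.0):
    """Kernel between two attributed graphs, each given as a list of
    node-attribute vectors: the sum of node kernels over all node pairs.
    This equals an inner product of summed node feature maps, so it is
    positive definite and usable inside an SVM."""
    return sum(node_rbf(a, b, sigma) for a in G1 for b in G2)
```

Once such a kernel matrix is computed between all pairs of graphs, any kernel machine (e.g. an SVM) can classify directly in graph space.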


Third International Workshop Machine Learning in Medical Imaging - MLMI 2012 (Held in Conjunction with MICCAI 2012) | 2012

Graph-based inter-subject classification of local fMRI patterns

Sylvain Takerkart; Guillaume Auzias; Bertrand Thirion; Daniele Schön; Liva Ralaivola

Classification of medical images in multi-subject settings is a difficult challenge due to the variability that exists between individuals. Here we introduce a new graph-based framework designed to deal with the inter-subject functional variability present in fMRI data. A graphical model is constructed to encode the functional, geometric and structural properties of local activation patterns. We then design a specific graph kernel that allows SVM classification to be conducted in graph space. In an inter-subject classification task on patterns recorded in the auditory cortex, our approach is the only one, among a wide range of tested methods, to perform above chance level.


IEEE Transactions on Signal Processing | 2015

Dynamic Screening: Accelerating First-Order Algorithms for the Lasso and Group-Lasso

Antoine Bonnefoy; Valentin Emiya; Liva Ralaivola; Rémi Gribonval

Recent computational strategies based on screening tests have been proposed to accelerate algorithms addressing penalized sparse regression problems such as the Lasso. Such approaches build upon the idea that it is worth dedicating some small computational effort to locate inactive atoms and remove them from the dictionary in a preprocessing stage, so that the regression algorithm, working with a smaller dictionary, will then converge faster to the solution of the initial problem. We believe that there is an even more efficient way to screen the dictionary and obtain a greater acceleration: inside each iteration of the regression algorithm, one may take advantage of the algorithm's computations to obtain a new screening test for free, with increasing screening effects along the iterations. The dictionary is thus dynamically screened instead of being screened statically, once and for all, before the first iteration. We formalize this dynamic screening principle in a general algorithmic scheme and apply it by embedding, inside a number of first-order algorithms, adapted existing screening tests for the Lasso and new screening tests for the Group-Lasso. Computational gains are assessed in a large set of experiments on synthetic data as well as real-world sounds and images; they show both the efficiency of the screening and the gains in running time.
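The principle can be sketched as ISTA for the Lasso with a safe sphere test run inside every iteration. Note the hedges: this sketch uses a duality-gap-based sphere test (a refinement in the same spirit, not the paper's exact tests), and for clarity it computes full-dictionary correlations at each iteration, a cost a real implementation would avoid:

```python
# Sketch of dynamic screening inside ISTA for the Lasso
#   min_b 0.5*||y - X b||^2 + lam*||b||_1
# Atoms proved inactive are removed from the dictionary as iterations go.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_dynamic_screening(X, y, lam, n_iter=5000):
    n, d = X.shape
    beta = np.zeros(d)
    active = np.arange(d)               # atoms not yet screened out
    norms = np.linalg.norm(X, axis=0)
    L = np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the gradient
    for _ in range(n_iter):
        resid = y - X[:, active] @ beta[active]
        # Dual-feasible point obtained by rescaling the residual
        # (full-dictionary correlations, kept for clarity only).
        corr = X.T @ resid
        theta = resid / max(lam, np.abs(corr).max())
        # Duality gap gives the radius of a sphere holding the dual optimum.
        gap = (0.5 * resid @ resid + lam * np.abs(beta).sum()
               - 0.5 * y @ y + 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2))
        radius = np.sqrt(max(2.0 * gap, 0.0)) / lam
        # Safe test: atom j is provably inactive if |x_j' theta| + r*||x_j|| < 1.
        keep = np.abs(X[:, active].T @ theta) + radius * norms[active] >= 1.0
        beta[active[~keep]] = 0.0       # screened atoms are exactly zero
        active = active[keep]
        # Proximal gradient step on the surviving atoms only.
        g = X[:, active].T @ (y - X[:, active] @ beta[active])
        beta[active] = soft_threshold(beta[active] + g / L, lam / L)
    return beta, active, gap
```

As the iterate approaches the solution the gap shrinks, the sphere tightens, and more atoms get screened, which is exactly the "increasing screening effects along the iterations" described above.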


International Workshop on Pattern Recognition in Neuroimaging | 2014

Multiple subject learning for inter-subject prediction

Sylvain Takerkart; Liva Ralaivola

Multi-voxel pattern analysis has become an important tool for neuroimaging data analysis by making it possible to predict a behavioral variable from imaging patterns. However, standard models do not take into account the differences that can exist between subjects, so they perform poorly on the inter-subject prediction task. We here introduce a model called Multiple Subject Learning (MSL) that is designed to effectively combine the information provided by fMRI data from several subjects: in a first stage, a weighting of single-subject kernels is learnt using multiple kernel learning to produce a classifier; then, a data-shuffling procedure is used to build ensembles of such classifiers, which are combined by a majority vote. We show that MSL outperforms other models on the inter-subject prediction task and we discuss the empirical behavior of this new model.
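The two combination stages can be illustrated schematically. The helper names are hypothetical, and in MSL the kernel weights come out of the multiple-kernel-learning stage rather than being given, as they are here:

```python
from collections import Counter

def combine_kernels(kernels, weights):
    """Weighted combination of per-subject kernel matrices (lists of
    lists), as an MKL stage would produce once the weights are learnt."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

def majority_vote(predictions):
    """Combine the label predictions of an ensemble of classifiers
    (one list of labels per classifier) by majority vote."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]
```

Each shuffled training set yields one kernel-combination classifier; `majority_vote` then aggregates the ensemble's predictions on the test subjects.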


Intelligent Data Analysis | 2015

On Binary Reduction of Large-scale Multiclass Classification Problems

Bikash Joshi; Massih-Reza Amini; Ioannis Partalas; Liva Ralaivola; Nicolas Usunier; Eric Gaussier

In the context of large-scale problems, traditional multiclass classification approaches have to deal with class imbalance and complexity issues that make them inoperative in some extreme cases. In this paper we study a transformation that reduces the initial multiclass classification of examples into a binary classification of pairs of examples and classes. We present generalization error bounds that exhibit the interdependency between the pairs of examples and that recover known results on binary classification with i.i.d. data. We show the efficiency of the resulting algorithm compared to state-of-the-art multiclass classification strategies on two large-scale document collections, especially in the interesting case where the number of classes becomes very large.
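The reduction itself is straightforward to sketch (illustrative code, not the paper's implementation): each multiclass example is expanded into one binary example per (example, class) pair, labelled positive exactly when the class is the true one:

```python
def binary_reduction(dataset, classes):
    """Map each multiclass example (x, y) to binary examples over
    (example, class) pairs: label +1 when the paired class is the
    true one, -1 otherwise. Note the pairs built from one example
    are not independent, which is why the paper's bounds must
    account for their interdependency."""
    pairs = []
    for x, y in dataset:
        for k in classes:
            pairs.append(((x, k), +1 if k == y else -1))
    return pairs
```

A single binary classifier trained on such pairs scores every (example, class) combination; prediction then amounts to picking the highest-scoring class, so the model size no longer grows with the number of classes.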


IEEE Automatic Speech Recognition and Understanding Workshop | 2011

Applying Multiclass Bandit algorithms to call-type classification

Liva Ralaivola; Benoit Favre; Pierre Gotab; Frédéric Béchet; Géraldine Damnati

We analyze the problem of call-type classification using data that is weakly labelled. The training data is not systematically annotated, but we assume access to a weak or lazy oracle able to answer the question "Is sample x of class q?" with a simple 'yes' or 'no'. This learning situation may be encountered in many real-world problems where the cost of labelling data is very high. We prove that it is possible to learn linear classifiers in this setting, by estimating adequate expectations inspired by the multiclass bandit paradigm. We propose a learning strategy that builds on Kessler's construction to learn multiclass perceptrons. We test our learning procedure on two real-world datasets from spoken language understanding and provide compelling results.


International Symposium on Neural Networks | 2015

Reward-based online learning in non-stationary environments: Adapting a P300-speller with a “backspace” key

Emmanuel Daucé; Timothée Proix; Liva Ralaivola

We adapt a policy gradient approach to the problem of reward-based online learning of a non-invasive EEG-based "P300" speller. We first clarify the nature of the P300-speller classification problem and present a general regularized gradient ascent formula. We then show that when the reward is immediate and binary (namely "bad response" or "good response"), each update is expected to improve the classifier accuracy, whether the actual response is correct or not. We also estimate the robustness of the method to occasional mistaken rewards, i.e. we show that the learning efficacy may decrease only linearly with the rate of invalid rewards. The effectiveness of our approach is tested in a series of simulations reproducing the conditions of real experiments. We show in a first experiment that a systematic improvement of the spelling rate is obtained for all subjects in the absence of initial calibration. In a second experiment, we consider the case of online recovery after electrode failure. Combined with a specific failure-detection algorithm, the spelling-error information (typically contained in a "backspace" hit) is shown to be useful for the policy gradient to adapt the P300 classifier to the new situation, provided the feedback is reliable enough (namely, with a reliability greater than 70%).

Collaboration


Dive into Liva Ralaivola's collaborations.

Top Co-Authors

Emilie Morvant (Aix-Marseille University)
Valentin Emiya (Aix-Marseille University)
Thomas Peel (Aix-Marseille University)
Julien Audiffren (École normale supérieure de Cachan)