Publications


Featured research published by Yves Grandvalet.


International Conference on Machine Learning | 2007

More efficiency in multiple kernel learning

Alain Rakotomamonjy; Francis R. Bach; Stéphane Canu; Yves Grandvalet

An efficient and general multiple kernel learning (MKL) algorithm has been recently proposed by Sonnenburg et al. (2006). This approach has opened new perspectives since it makes the MKL approach tractable for large-scale problems, by iteratively using existing support vector machine code. However, it turns out that this iterative algorithm needs several iterations before converging towards a reasonable solution. In this paper, we address the MKL problem through an adaptive 2-norm regularization formulation. Weights on each kernel matrix are included in the standard SVM empirical risk minimization problem with an l1 constraint to encourage sparsity. We propose an algorithm for solving this problem and provide a new insight on MKL algorithms based on block 1-norm regularization by showing that the two approaches are equivalent. Experimental results show that the resulting algorithm converges rapidly and that its efficiency compares favorably to other MKL algorithms.
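For readers who want the formulation in symbols, the adaptive 2-norm problem described above can be written as follows. This is a sketch in notation of my own (kernels K_m with feature functions f_m, weights d_m), consistent with the abstract rather than copied from the paper:

```latex
% Adaptive 2-norm MKL (illustrative notation): per-kernel weights d_m
% enter the SVM objective, and the simplex (l1) constraint on d drives
% some weights to zero, hence sparsity in the kernel combination.
\begin{align*}
\min_{\substack{d_m \ge 0,\ \sum_m d_m = 1}}\;
\min_{\{f_m\},\, b,\, \xi \ge 0}\quad
  & \frac{1}{2} \sum_m \frac{\| f_m \|_{\mathcal{H}_m}^2}{d_m}
    + C \sum_i \xi_i \\
\text{s.t.}\quad
  & y_i \Big( \sum_m f_m(x_i) + b \Big) \ge 1 - \xi_i
    \quad \text{for all } i .
\end{align*}
```

For fixed d, the inner problem is a standard SVM with kernel \(\sum_m d_m K_m\), which is what makes an SVM-in-the-loop algorithm possible; eliminating d analytically recovers a block 1-norm penalty on the f_m, which is the equivalence the abstract mentions.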


Machine Learning | 2010

Composite kernel learning

Marie Szafranski; Yves Grandvalet; Alain Rakotomamonjy

The Support Vector Machine is an acknowledged powerful tool for building classifiers, but it lacks flexibility, in the sense that the kernel is chosen prior to learning. Multiple Kernel Learning makes it possible to learn the kernel from an ensemble of basis kernels, whose combination is optimized in the learning process. Here, we propose Composite Kernel Learning to address the situation where distinct components give rise to a group structure among kernels. Our formulation of the learning problem encompasses several setups, putting more or less emphasis on the group structure. We characterize the convexity of the learning problem, and provide a general wrapper algorithm for computing solutions. Finally, we illustrate the behavior of our method on multi-channel data where groups correspond to channels.
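As a rough sketch of the group structure (my notation, not the paper's): partition the kernels into groups G and penalize the RKHS norms with a mixed norm, l1-like across groups and l2-like within them. The exponents below are illustrative; the paper studies a family of such mixed norms:

```latex
% Composite (group) kernel penalty: sparsity across groups combined
% with a smoother norm within each group. Exponents are illustrative.
\Omega_{p,q}(f) \;=\;
  \Big[ \sum_{G} \Big( \sum_{m \in G} \| f_m \|_{\mathcal{H}_m}^{p} \Big)^{q/p} \Big]^{1/q} .
```

Varying (p, q) moves between sparsity across groups and sparsity within groups, which is how the formulation "puts more or less emphasis on the group structure".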


International Conference on Artificial Neural Networks | 1998

Least Absolute Shrinkage is Equivalent to Quadratic Penalization

Yves Grandvalet

Adaptive ridge is a special form of ridge regression, balancing the quadratic penalization on each parameter of the model. This paper shows the equivalence between adaptive ridge and lasso (least absolute shrinkage and selection operator). This equivalence states that both procedures produce the same estimate. Least absolute shrinkage can thus be viewed as a particular quadratic penalization.
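The equivalence can be seen in a few lines. In notation of mine (coefficients β_j, per-coefficient penalty weights λ_j), adaptive ridge penalizes each coefficient separately while constraining the penalty weights; optimizing the weights out turns the quadratic penalty into a squared l1 penalty:

```latex
% Adaptive ridge: choose the per-coefficient ridge weights lambda_j
% under a harmonic-mean-type constraint, then eliminate them.
\min_{\lambda_j > 0} \;\sum_j \lambda_j \beta_j^2
\quad \text{s.t.} \quad \sum_j \frac{1}{\lambda_j} = \frac{p}{\lambda}
\;\;\Longrightarrow\;\;
\lambda_j^\star = \frac{\sqrt{\mu}}{|\beta_j|}, \qquad
\sqrt{\mu} = \frac{\lambda}{p} \sum_j |\beta_j| ,
% so the optimized quadratic penalty is a squared l1 penalty:
\sum_j \lambda_j^\star \beta_j^2 \;=\; \frac{\lambda}{p} \Big( \sum_j |\beta_j| \Big)^{\!2} .
```

A squared l1 penalty traces out the same family of solutions as the lasso's plain l1 penalty, which is the sense in which least absolute shrinkage "is" a quadratic penalization.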


Machine Learning | 2004

Bagging Equalizes Influence

Yves Grandvalet

Bagging constructs an estimator by averaging predictors trained on bootstrap samples. Bagged estimates almost consistently improve on the original predictor. It is thus important to understand the reasons for this success, and also for the occasional failures. It is widely believed that bagging is effective thanks to the variance reduction stemming from averaging predictors. However, seven years after its introduction, bagging is still not fully understood. This paper provides experimental evidence supporting the hypothesis that bagging stabilizes prediction by equalizing the influence of training examples. This effect is detailed in two different frameworks: estimation on the real line and regression. Bagging's improvements and deteriorations are explained by the goodness or badness of highly influential examples, in situations where the usual variance reduction argument is at best questionable. Finally, reasons for the equalization effect are advanced. They suggest that other resampling strategies, such as half-sampling, should provide qualitatively identical effects while being computationally less demanding than bootstrap sampling.
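To make the half-sampling remark concrete, here is a minimal, illustrative Python sketch (all names are mine; this is not code from the paper) comparing bootstrap bagging with half-sampling on a real-line estimation problem containing one highly influential point:

```python
import numpy as np

def resample_and_average(x, statistic, n_rounds=200, frac=1.0,
                         replace=True, seed=None):
    """Average a statistic over resampled versions of the data.

    frac=1.0, replace=True  -> classical bagging (bootstrap samples).
    frac=0.5, replace=False -> half-sampling, the cheaper alternative
                               suggested by the equalization argument.
    """
    rng = np.random.default_rng(seed)
    m = int(frac * len(x))
    estimates = [statistic(rng.choice(x, size=m, replace=replace))
                 for _ in range(n_rounds)]
    return float(np.mean(estimates))

# Estimation on the real line with one highly influential point.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 50), [25.0]])  # one outlier

plain = float(np.median(x))
bagged = resample_and_average(x, np.median, seed=1)
halved = resample_and_average(x, np.median, frac=0.5, replace=False, seed=1)
print(plain, bagged, halved)  # the two resampling schemes behave alike
```

The point of the sketch is only qualitative: both resampling schemes damp the leverage of the outlying example, consistent with the paper's claim that the benefit comes from equalized influence rather than from the bootstrap specifically.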


Neural Computation | 1997

Noise injection: theoretical prospects

Yves Grandvalet; Stéphane Canu; Stéphane Boucheron

Noise injection consists of adding noise to the inputs during neural network training. Experimental results suggest that it might improve the generalization ability of the resulting neural network. A justification of this improvement remains elusive: describing analytically the average perturbed cost function is difficult, and controlling the fluctuations of the random perturbed cost function is hard. Hence, recent papers suggest replacing the random perturbed cost by a (deterministic) Taylor approximation of the average perturbed cost function. This article takes a different stance: when the injected noise is Gaussian, noise injection is naturally connected to the action of the heat kernel. This provides indications on the domain of relevance of traditional Taylor expansions and shows how the quality of Taylor approximations depends on global smoothness properties of the neural networks under consideration. The connection between noise injection and the heat kernel also makes it possible to control the fluctuations of the random perturbed cost function. Under the global smoothness assumption, tools from Gaussian analysis provide bounds on the tail behavior of the perturbed cost. This finally suggests that mixing input perturbation with smoothness-based penalization might be profitable.
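In symbols (my notation), the connection the abstract describes is that averaging the cost over Gaussian input noise is a convolution with the Gaussian (heat) kernel, and the usual Taylor treatment is its second-order approximation:

```latex
% Gaussian noise injection = smoothing the cost by the heat kernel;
% the classical Taylor argument keeps only the second-order term.
\mathbb{E}_{\varepsilon \sim \mathcal{N}(0, \sigma^2 I)}
  \big[ C(x + \varepsilon) \big]
  \;=\; (C * k_\sigma)(x),
\qquad
k_\sigma(u) = (2\pi\sigma^2)^{-d/2}\, e^{-\|u\|^2 / (2\sigma^2)},
% second-order Taylor approximation (Delta is the Laplacian in x):
\mathbb{E}_{\varepsilon}\big[ C(x + \varepsilon) \big]
  \;\approx\; C(x) + \frac{\sigma^2}{2}\, \Delta_x C(x) .
```

How good the approximation is then hinges on how smooth C is at scale σ, which is exactly the global-smoothness dependence the abstract points out.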


Statistics and Computing | 2011

Inferring multiple graphical structures

Julien Chiquet; Yves Grandvalet; Christophe Ambroise

Gaussian Graphical Models provide a convenient framework for representing dependencies between variables. Recently, this tool has received considerable interest for the discovery of biological networks. The literature focuses on the case where a single network is inferred from a set of measurements. However, as wet-lab data are typically scarce, several assays, in which the experimental conditions affect interactions, are usually merged to infer a single network. In this paper, we propose two approaches for estimating multiple related graphs, by encoding the closeness assumption either as an empirical prior or as group penalties. We provide quantitative results demonstrating the benefits of the proposed approaches. The methods presented in this paper are embedded in the R package simone from version 1.0-0 onward.
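For intuition, here is one way to write a group-penalty variant consistent with the abstract, in my notation (sample covariance S^{(c)} and precision matrix Θ^{(c)} per experimental condition c); the exact penalties used in the paper may differ:

```latex
% Joint inference of several precision matrices: the group penalty
% couples the same edge (i,j) across conditions c, so related graphs
% tend to share their support. Penalty form is illustrative.
\max_{\{\Theta^{(c)} \succ 0\}}\;
  \sum_{c} \Big[ \log\det \Theta^{(c)}
               - \operatorname{tr}\!\big( S^{(c)} \Theta^{(c)} \big) \Big]
  \;-\; \lambda \sum_{i \neq j}
        \Big( \sum_{c} \big( \Theta^{(c)}_{ij} \big)^2 \Big)^{\!1/2} .
```

A large enough λ removes an edge from all conditions at once; the empirical-prior alternative mentioned in the abstract instead biases each Θ^{(c)} toward a consensus estimate.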


PLOS ONE | 2011

Integrated Proteomic and Transcriptomic Investigation of the Acetaminophen Toxicity in Liver Microfluidic Biochip

Jean Matthieu Prot; Anne-Sophie Briffaut; Franck Letourneur; Philippe Chafey; Franck Merlier; Yves Grandvalet; Cécile Legallais; Eric Leclerc

Microfluidic bioartificial organs allow the reproduction of in vivo-like properties such as cell culture in a 3D dynamic microenvironment. In this work, we established a method and a protocol for performing a toxicogenomic analysis of HepG2/C3A cells cultivated in a microfluidic biochip. Transcriptomic and proteomic analyses showed the induction of the NRF2 pathway and the related drug metabolism pathways when the HepG2/C3A cells were cultivated in the biochip. The induction of those pathways in the biochip enhanced the metabolism of the N-acetyl-p-aminophenol drug (acetaminophen, APAP) when compared to Petri dish cultures. Thus, we observed 50% inhibition of cell proliferation at 1 mM in the biochip, which appeared consistent with human plasmatic toxic concentrations reported at 2 mM. The metabolic signature of APAP toxicity in the biochip showed biomarkers similar to those reported in vivo, such as calcium homeostasis, lipid metabolism, and reorganization of the cytoskeleton, at both the transcriptome and proteome levels (which was not the case in Petri dishes). These results demonstrate a specific molecular signature for acetaminophen at the transcriptomic and proteomic levels close to situations found in vivo. Interestingly, a common component of the signature of the APAP molecule was identified in Petri and biochip cultures via the perturbations of DNA replication and the cell cycle. These findings provide important insight into the use of microfluidic biochips as new tools in biomarker research for pharmaceutical drug studies and predictive toxicity investigations.


IEEE Transactions on Systems, Man, and Cybernetics | 1995

Comments on "Noise injection into inputs in back-propagation learning"

Yves Grandvalet; Stéphane Canu

The generalization capacity of neural networks learning from examples is important. Several authors have shown experimentally that training a neural network with noise-injected inputs can improve its generalization abilities. In the original paper (ibid., vol. 22, no. 3, pp. 436-440, 1992), Matsuoka explained this fact in a formal way, claiming that using noise-injected inputs is equivalent to reducing the sensitivity of the network. However, an error in Matsuoka's calculations led him to inadequate conclusions. This paper corrects these calculations and conclusions.


Information Fusion | 2003

Resample and combine: an approach to improving uncertainty representation in evidential pattern classification

J. François; Yves Grandvalet; Thierry Denœux; J.-M. Roger

Uncertainty representation is a major issue in pattern recognition. In many applications, the outputs of a classifier do not lead directly to a final decision, but are used in combination with other systems, or as input to an interactive decision process. In such contexts, it may be advantageous to resort to rich and flexible formalisms for representing and manipulating uncertain information. This paper addresses the issue of uncertainty representation in pattern classification, in the framework of the Dempster–Shafer theory of evidence. It is shown that the quality and reliability of the outputs of a classifier may be improved using a variant of bagging, a resample-and-combine approach introduced by Breiman in a conventional statistical context. This technique is explained and studied experimentally on simulated data and on a character recognition application. In particular, results show that bagging improves classification accuracy and limits the influence of outliers and ambiguous training patterns.
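As a minimal sketch of the resample-and-combine idea (not the paper's code: `fit` and `mass` are hypothetical user-supplied callables, and averaging mass functions is just one simple combination rule):

```python
import numpy as np

def bagged_mass(fit, mass, X, y, x_new, n_rounds=100, seed=None):
    """Resample-and-combine for an evidential classifier (sketch).

    fit(X, y)    -> a trained classifier (hypothetical helper).
    mass(clf, x) -> a basic belief assignment as a vector over
                    {class_1, ..., class_K, Omega}, the last entry
                    being the mass assigned to total ignorance.
    Masses are averaged over bootstrap replicates, which smooths the
    uncertainty representation and damps the effect of outliers and
    ambiguous training patterns.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    masses = []
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=n)   # bootstrap resample
        clf = fit(X[idx], y[idx])
        masses.append(mass(clf, x_new))
    return np.mean(np.asarray(masses), axis=0)  # averaged mass function
```

Since each replicate's masses sum to one, the average is again a valid mass function, so the combined output stays within the Dempster–Shafer formalism.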


Journal of Artificial Intelligence Research | 2016

Combining two and three-way embedding models for link prediction in knowledge bases

Alberto García-Durán; Antoine Bordes; Nicolas Usunier; Yves Grandvalet

This paper tackles the problem of endogenous link prediction for knowledge base completion. Knowledge bases can be represented as directed graphs whose nodes correspond to entities and whose edges correspond to relationships. Previous attempts either consist of powerful systems with high capacity to model complex connectivity patterns, which unfortunately usually end up overfitting on rare relationships, or of approaches that trade capacity for simplicity in order to fairly model all relationships, frequent or not. In this paper, we propose TATEC, a happy medium obtained by complementing a high-capacity model with a simpler one, both pre-trained separately and then combined. We present several variants of this model with different kinds of regularization and combination strategies, and show that this approach outperforms existing methods on different types of relationships by achieving state-of-the-art results on four benchmarks from the literature.
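Schematically (my notation; the exact parameterization is given in the paper), the combined score adds a two-way (bigram) component and a three-way (trigram) component, each with its own entity embeddings:

```latex
% TATEC-style combination: bigram interactions (entity-relation and
% entity-entity) plus a full trigram interaction through a relation-
% specific matrix W_r; each component uses its own embedding space.
s(h, r, t) \;=\;
  \underbrace{\langle \mathbf{h}_1, \mathbf{r}^{(1)} \rangle
            + \langle \mathbf{t}_1, \mathbf{r}^{(2)} \rangle
            + \langle \mathbf{h}_1, D\, \mathbf{t}_1 \rangle}_{\text{two-way}}
  \;+\;
  \underbrace{\mathbf{h}_2^{\top} W_r\, \mathbf{t}_2}_{\text{three-way}} .
```

The low-capacity two-way part covers rare relationships without overfitting, while the relation-specific matrix W_r captures the richer patterns of frequent ones; pre-training the two parts separately before combining is what the abstract calls a "happy medium".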

Collaboration


Dive into Yves Grandvalet's collaborations.

Top Co-Authors

Stéphane Canu, Institut national des sciences appliquées de Rouen
Christophe Ambroise, Centre national de la recherche scientifique
Aurore Lomet, Centre national de la recherche scientifique
Gérard Govaert, Centre national de la recherche scientifique
Julien Chiquet, Centre national de la recherche scientifique
Marie Szafranski, Centre national de la recherche scientifique