
Publication


Featured research published by Daniel Hernández-Lobato.


Neurocomputing | 2011

Empirical analysis and evaluation of approximate techniques for pruning regression bagging ensembles

Daniel Hernández-Lobato; Gonzalo Martínez-Muñoz; Alberto Suárez

Identifying the optimal subset of regressors in a regression bagging ensemble is a difficult task that has exponential cost in the size of the ensemble. In this article we analyze two approximate techniques especially devised to address this problem. The first strategy constructs a relaxed version of the problem that can be solved using semidefinite programming. The second one is based on modifying the order of aggregation of the regressors. Ordered aggregation is a simple forward selection algorithm that incorporates at each step the regressor that reduces the training error of the current subensemble the most. Both techniques can be used to identify subensembles that are close to the optimal ones, which can be obtained by exhaustive search at a larger computational cost. Experiments in a wide variety of synthetic and real-world regression problems show that pruned ensembles composed of only 20% of the initial regressors often have better generalization performance than the original bagging ensembles. These improvements are due to a reduction in the bias and the covariance components of the generalization error. Subensembles obtained using either SDP or ordered aggregation generally outperform subensembles obtained by other ensemble pruning methods and ensembles generated by the Adaboost.R2 algorithm, negative correlation learning or regularized linear stacked generalization. Ordered aggregation has a slightly better overall performance than SDP in the problems investigated. However, the difference is not statistically significant. Ordered aggregation has the further advantage that it produces a nested sequence of near-optimal subensembles of increasing size with no additional computational cost.
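
The ordered-aggregation step described above is simple enough to sketch in code. Below is a minimal, illustrative Python implementation of the greedy forward selection, assuming each member's training-set predictions have been precomputed; the function and variable names are ours, not the authors'.

import numpy as np

def ordered_aggregation(predictions, y):
    """Greedy forward selection over ensemble members: at each step, add
    the regressor that most reduces the training MSE of the current
    subensemble (a sketch, not the authors' implementation).

    predictions: array of shape (n_regressors, n_samples), each row one
                 member's predictions on the training set.
    y:           array of shape (n_samples,) with the training targets.

    Returns the member indices in order of incorporation; the first k
    indices form the size-k subensemble.
    """
    n_regressors, n_samples = predictions.shape
    remaining = set(range(n_regressors))
    order = []
    selected_sum = np.zeros(n_samples)  # running sum of selected rows

    while remaining:
        k = len(order) + 1
        best_idx, best_err = None, np.inf
        for i in remaining:
            # Training MSE of the subensemble if member i were added.
            err = np.mean(((selected_sum + predictions[i]) / k - y) ** 2)
            if err < best_err:
                best_idx, best_err = i, err
        order.append(best_idx)
        remaining.remove(best_idx)
        selected_sum += predictions[best_idx]

    return order

Truncating the returned order gives the nested sequence of near-optimal subensembles mentioned above at no extra cost; keeping roughly the first 20% of the members corresponds to the pruning level that the experiments report as often beating the full bagging ensemble.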


Neurocomputing | 2008

Class-switching neural network ensembles

Gonzalo Martínez-Muñoz; Aitor Sánchez-Martínez; Daniel Hernández-Lobato; Alberto Suárez

This article investigates the properties of class-switching ensembles composed of neural networks and compares them to class-switching ensembles of decision trees and to standard ensemble learning methods, such as bagging and boosting. In a class-switching ensemble, each learner is constructed using a modified version of the training data. This modification consists in switching the class labels of a fraction of training examples that are selected at random from the original training set. Experiments on 20 benchmark classification problems, including real-world and synthetic data, show that class-switching ensembles composed of neural networks can obtain significant improvements in the generalization accuracy over single neural networks and bagging and boosting ensembles. Furthermore, it is possible to build medium-sized ensembles (~200 networks) whose classification performance is comparable to larger class-switching ensembles (~1000 learners) of unpruned decision trees.
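
The data modification at the heart of class switching is easy to illustrate. The following Python sketch generates one switched copy of the training labels, under the assumption that the new label of each selected example is drawn uniformly from the other classes; the names are illustrative, and the exact switching distribution is the one specified in the paper.

import numpy as np

def switch_class_labels(y, fraction, seed=None):
    """Return a copy of the labels in which a randomly selected fraction
    of the examples have had their class switched to a different one
    (a sketch of the class-switching modification; assumes at least two
    distinct classes are present).
    """
    rng = np.random.default_rng(seed)
    y_switched = np.asarray(y).copy()
    classes = np.unique(y_switched)
    n_switch = int(round(fraction * len(y_switched)))
    idx = rng.choice(len(y_switched), size=n_switch, replace=False)
    for i in idx:
        # Draw a new label uniformly from the classes other than y[i].
        others = classes[classes != y_switched[i]]
        y_switched[i] = rng.choice(others)
    return y_switched

Each ensemble member is then trained on the original inputs paired with a fresh switched copy of the labels, and the members' predictions are combined at test time, typically by majority vote.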


European Conference on Machine Learning | 2010

Expectation propagation for Bayesian multi-task feature selection

Daniel Hernández-Lobato; José Miguel Hernández-Lobato; Thibault Helleputte; Pierre Dupont

In this paper we propose a Bayesian model for multi-task feature selection. This model is based on a generalized spike-and-slab sparse prior distribution that enforces the selection of a common subset of features across several tasks. Since exact Bayesian inference in this model is intractable, approximate inference is performed through expectation propagation (EP). EP approximates the posterior distribution of the model using a parametric probability distribution. This posterior approximation is particularly useful to identify relevant features for prediction. We focus on problems for which the number of features d is significantly larger than the number of instances for each task. We propose an efficient parametrization of the EP algorithm that offers a computational complexity linear in d. Experiments on several multi-task datasets show that the proposed model outperforms baseline approaches for single-task learning or data pooling across all tasks, as well as two state-of-the-art multi-task learning approaches. Additional experiments confirm the stability of the proposed feature selection with respect to various sub-samplings of the training data.
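
For reference, a common way to write a spike-and-slab prior that ties feature selection together across tasks is the following; this is a generic formulation, and the generalized prior in the paper may be parametrized differently. With $K$ tasks, $d$ features and weights $w_{jk}$:

\[
z_j \sim \mathrm{Bernoulli}(p_0), \qquad
w_{jk} \mid z_j \;\sim\; z_j \, \mathcal{N}(w_{jk} \mid 0, v) + (1 - z_j) \, \delta(w_{jk}),
\qquad j = 1, \dots, d, \quad k = 1, \dots, K.
\]

Here $\delta$ is a point mass at zero (the spike) and the Gaussian is the slab. Because the binary indicator $z_j$ is shared by all $K$ tasks, feature $j$ is either active in every task or excluded from all of them, which is what enforces a common subset of selected features.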


Machine Learning | 2015

Expectation propagation in linear regression models with spike-and-slab priors

José Miguel Hernández-Lobato; Daniel Hernández-Lobato; Alberto Suárez

An expectation propagation (EP) algorithm is proposed for approximate inference in linear regression models with spike-and-slab priors. This EP method is applied to regression tasks in which the number of training instances is small and the number of dimensions of the feature space is large. The problems analyzed include the reconstruction of genetic networks, the recovery of sparse signals, the prediction of user sentiment from customer-written reviews and the analysis of biscuit dough constituents from NIR spectra. In most of these tasks, the proposed EP method outperforms another EP method that ignores correlations in the posterior, as well as a variational Bayes technique for approximate inference. Additionally, the solutions generated by EP are very close to those given by Gibbs sampling, which can be taken as the gold standard but can be much more computationally expensive. In the tasks analyzed, spike-and-slab priors generally outperform other sparsifying priors, such as Laplace, Student’s t and horseshoe priors. The key to the improved predictions with respect to Laplace and Student’s t […]
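
Concretely, the model described above is linear regression with a spike-and-slab prior on each coefficient, which in a generic parametrization (the paper's hyperparameter choices may differ) reads:

\[
y_i = \mathbf{x}_i^\top \mathbf{w} + \epsilon_i, \qquad
\epsilon_i \sim \mathcal{N}(0, \sigma^2), \qquad
w_j \;\sim\; p_0 \, \mathcal{N}(w_j \mid 0, v) + (1 - p_0) \, \delta(w_j).
\]

EP approximates the resulting intractable posterior over $\mathbf{w}$ with a tractable parametric distribution. Unlike the Laplace or Student's t densities, the point mass $\delta$ assigns nonzero probability to a coefficient being exactly zero, which is the sense in which the prior is sparsifying.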


Pattern Recognition Letters | 2010

Expectation Propagation for microarray data classification

Daniel Hernández-Lobato; José Miguel Hernández-Lobato; Alberto Suárez


International Joint Conference on Neural Networks | 2006

Pruning in Ordered Regression Bagging Ensembles

Daniel Hernández-Lobato; Gonzalo Martínez-Muñoz; Alberto Suárez


Pattern Recognition | 2011

Network-based sparse Bayesian classification

José Miguel Hernández-Lobato; Daniel Hernández-Lobato; Alberto Suárez


International Conference on Multiple Classifier Systems | 2010

A double pruning algorithm for classification ensembles

Victor Soto; Gonzalo Martínez-Muñoz; Daniel Hernández-Lobato; Alberto Suárez


International Conference on Artificial Neural Networks | 2007

Selection of decision stumps in bagging ensembles

Gonzalo Martínez-Muñoz; Daniel Hernández-Lobato; Alberto Suárez


IEEE Transactions on Systems, Man, and Cybernetics | 2014

A double pruning scheme for boosting ensembles

Victor Soto; Sergio García-Moratilla; Gonzalo Martínez-Muñoz; Daniel Hernández-Lobato; Alberto Suárez


Collaboration


Dive into Daniel Hernández-Lobato's collaborations.

Top Co-Authors

Alberto Suárez (Autonomous University of Madrid)
Gonzalo Martínez-Muñoz (Autonomous University of Madrid)
Pierre Dupont (Université catholique de Louvain)
Thang D. Bui (University of Cambridge)
Yingzhen Li (University of Cambridge)
Aitor Sánchez-Martínez (Autonomous University of Madrid)
Pablo Morales-Mombiela (Autonomous University of Madrid)