Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where José Miguel Hernández-Lobato is active.

Publication


Featured research published by José Miguel Hernández-Lobato.


ACS Central Science | 2018

Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules

Rafael Gómez-Bombarelli; Jennifer Wei; David K. Duvenaud; José Miguel Hernández-Lobato; Benjamin Sanchez-Lengeling; Dennis Sheberla; Jorge Aguilera-Iparraguirre; Timothy D. Hirzel; Ryan P. Adams; Alán Aspuru-Guzik

We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in a set of molecules with fewer than nine heavy atoms.
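A minimal sketch of the encoder/decoder/predictor pattern described above, written in PyTorch over a toy one-hot sequence space. It is an illustration rather than the paper's architecture: the layer sizes, the `VOCAB`/`SEQ_LEN`/`LATENT_DIM` constants, and the plain gradient ascent on the predicted property are all invented for the example (the actual model is a variational autoencoder trained on SMILES strings).

```python
# Illustrative sketch only: a toy encoder/decoder/predictor triple and a
# gradient-based property search in the latent space. Sizes are made up.
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, LATENT_DIM = 32, 60, 16  # assumed toy dimensions

encoder = nn.Sequential(nn.Flatten(), nn.Linear(VOCAB * SEQ_LEN, 128),
                        nn.ReLU(), nn.Linear(128, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                        nn.Linear(128, VOCAB * SEQ_LEN),
                        nn.Unflatten(1, (SEQ_LEN, VOCAB)))
predictor = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(),
                          nn.Linear(32, 1))  # predicts a chemical property

x = torch.zeros(1, SEQ_LEN, VOCAB)
x[0, :, 0] = 1.0                              # toy one-hot "molecule"
z = encoder(x).detach().requires_grad_(True)  # start from its latent code

# Perturb the latent code by gradient ascent on the predicted property,
# then decode back to a discrete sequence.
opt = torch.optim.Adam([z], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    (-predictor(z).sum()).backward()          # maximize the property
    opt.step()
tokens = decoder(z).argmax(dim=-1)            # optimized discrete sequence
```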


International Symposium on Computer Architecture | 2016

Minerva: Enabling low-power, highly-accurate deep neural network accelerators

Brandon Reagen; Paul N. Whatmough; Robert Adolf; Saketh Rama; Hyunkwang Lee; Sae Kyu Lee; José Miguel Hernández-Lobato; Gu-Yeon Wei; David M. Brooks

The continued success of Deep Neural Networks (DNNs) in classification tasks has sparked a trend of accelerating their execution with specialized hardware. While published designs easily give an order of magnitude improvement over general-purpose hardware, few look beyond an initial implementation. This paper presents Minerva, a highly automated co-design approach across the algorithm, architecture, and circuit levels to optimize DNN hardware accelerators. Compared to an established fixed-point accelerator baseline, we show that fine-grained, heterogeneous datatype optimization reduces power by 1.5×; aggressive, inline predication and pruning of small activity values further reduces power by 2.0×; and active hardware fault detection coupled with domain-aware error mitigation cuts power by an additional 2.7× by lowering SRAM voltages. Across five datasets, these optimizations provide a collective average of 8.1× power reduction over an accelerator baseline without compromising DNN model accuracy. Minerva enables highly accurate, ultra-low power DNN accelerators (in the range of tens of milliwatts), making it feasible to deploy DNNs in power-constrained IoT and mobile devices.
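The datatype and predication optimizations are applied in hardware, but a rough software analogue can make them concrete. The NumPy sketch below is only that, a sketch: the bit widths, pruning threshold, and layer shapes are invented values, not Minerva's tuned per-layer settings.

```python
# Toy software analogue of two Minerva-style optimizations: fixed-point
# quantization of activations and skipping (pruning) of small values.
# All constants here are invented for illustration.
import numpy as np

def quantize(x, int_bits=2, frac_bits=6):
    """Round to a signed fixed-point grid with the given bit budget."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (int_bits - 1))
    hi = 2 ** (int_bits - 1) - 2 ** -frac_bits
    return np.clip(np.round(x * scale) / scale, lo, hi)

def layer(x, w, prune_thresh=0.05):
    a = np.maximum(x @ w, 0.0)           # ReLU activations
    a[a < prune_thresh] = 0.0            # predicate away tiny activations
    return quantize(a)

rng = np.random.default_rng(0)
out = layer(rng.normal(size=(1, 64)), rng.normal(scale=0.1, size=(64, 32)))
print(out)
```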


Neural Information Processing Systems | 2015

Stochastic expectation propagation

Yingzhen Li; José Miguel Hernández-Lobato; Richard E. Turner

Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large models in large-scale dataset settings. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of data-points, N, which often entails a prohibitively large memory overhead. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of N. SEP is therefore ideally suited to performing approximate Bayesian learning in the large model, large dataset setting.
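To make the "global approximation, local updates" idea concrete, here is a minimal sketch of the SEP bookkeeping on a deliberately trivial model: inferring the mean of a 1-D Gaussian with known noise variance, where moment matching is exact. The model, constants, and iteration count are assumptions for illustration; the point is that only one shared site factor is stored, so memory does not grow with N.

```python
# Minimal SEP sketch on a toy model: infer the mean of a 1-D Gaussian
# with known noise variance. The global approximation is
#   q(theta) ∝ p0(theta) * f(theta)^N
# with ONE shared site factor f, stored in natural parameters
# (precision tau, precision-times-mean nu), so memory is O(1) in N.
# Model and constants are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, noise_var = 500, 1.0
x = rng.normal(loc=2.0, scale=np.sqrt(noise_var), size=N)

tau0, nu0 = 1.0, 0.0        # prior N(0, 1) in natural parameters
tau_f, nu_f = 0.0, 0.0      # shared site factor, initially uniform

for _ in range(2000):
    xn = x[rng.integers(N)]                      # visit one data point
    tau_q, nu_q = tau0 + N * tau_f, nu0 + N * nu_f
    tau_c, nu_c = tau_q - tau_f, nu_q - nu_f     # cavity: remove one site
    # Moment-match the tilted distribution (exact for a Gaussian likelihood).
    tau_t, nu_t = tau_c + 1.0 / noise_var, nu_c + xn / noise_var
    # Fold the implicit new site into f with a 1/N step: the SEP update.
    tau_f += (tau_t - tau_c - tau_f) / N
    nu_f += (nu_t - nu_c - nu_f) / N

tau_q, nu_q = tau0 + N * tau_f, nu0 + N * nu_f
print("approximate posterior mean:", nu_q / tau_q)  # close to the sample mean
```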


European Conference on Machine Learning | 2010

Expectation propagation for Bayesian multi-task feature selection

Daniel Hernández-Lobato; José Miguel Hernández-Lobato; Thibault Helleputte; Pierre Dupont

In this paper we propose a Bayesian model for multi-task feature selection. This model is based on a generalized spike-and-slab sparse prior distribution that enforces the selection of a common subset of features across several tasks. Since exact Bayesian inference in this model is intractable, approximate inference is performed through expectation propagation (EP). EP approximates the posterior distribution of the model using a parametric probability distribution. This posterior approximation is particularly useful to identify relevant features for prediction. We focus on problems for which the number of features d is significantly larger than the number of instances for each task. We propose an efficient parametrization of the EP algorithm that offers a computational complexity linear in d. Experiments on several multi-task datasets show that the proposed model outperforms baseline approaches for single-task learning or data pooling across all tasks, as well as two state-of-the-art multi-task learning approaches. Additional experiments confirm the stability of the proposed feature selection with respect to various sub-samplings of the training data.
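As a concrete picture of the prior, the following sketch samples from one plausible generative reading of a shared spike-and-slab prior: a single binary selection vector shared across tasks, with per-task slab weights for the selected features. The sizes and hyperparameters (`p0`, `slab_var`) are invented for the example, not the paper's generalized prior.

```python
# Generative sketch of a shared spike-and-slab prior across K tasks:
# one binary selection vector z chooses the common feature subset, and
# each task draws its own slab weights for the selected features.
# All sizes and hyperparameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
d, K = 1000, 3              # many features, a few related tasks
p0, slab_var = 0.01, 1.0    # inclusion probability and slab variance

z = rng.random(d) < p0                        # shared spike/slab indicators
W = np.where(z, rng.normal(scale=np.sqrt(slab_var), size=(K, d)), 0.0)
print("features selected for every task:", np.flatnonzero(z))
```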


Machine Learning | 2015

Expectation propagation in linear regression models with spike-and-slab priors

José Miguel Hernández-Lobato; Daniel Hernández-Lobato; Alberto Suárez

An expectation propagation (EP) algorithm is proposed for approximate inference in linear regression models with spike-and-slab priors. This EP method is applied to regression tasks in which the number of training instances is small and the number of dimensions of the feature space is large. The problems analyzed include the reconstruction of genetic networks, the recovery of sparse signals, the prediction of user sentiment from customer-written reviews and the analysis of biscuit dough constituents from NIR spectra. In most of these tasks, the proposed EP method outperforms another EP method that ignores correlations in the posterior, as well as a variational Bayes technique for approximate inference. Additionally, the solutions generated by EP are very close to those given by Gibbs sampling, which can be taken as the gold standard but can be much more computationally expensive. In the tasks analyzed, spike-and-slab priors generally outperform other sparsifying priors, such as Laplace, Student’s t and horseshoe priors. The key to the improved predictions with respect to Laplace and Student’s t priors is …
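The Gibbs-sampling gold standard mentioned in the abstract can be sketched in a few lines for a simplified version of the model (fixed noise and slab variances, no hyperparameter learning). Everything below, including the sizes, hyperparameters, and toy data, is an assumption for illustration; it shows why Gibbs is accurate but costly next to a deterministic approximation like EP.

```python
# Per-coordinate Gibbs sampler for a simplified spike-and-slab linear
# regression: w_j = 0 (spike) or w_j ~ N(0, slab_var) (slab). Constants
# and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                            # few instances, many features
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:5] = 3.0                          # sparse ground truth
y = X @ w_true + rng.normal(scale=0.5, size=n)

noise_var, slab_var, p0 = 0.25, 10.0, 0.05
w = np.zeros(d)

for _ in range(200):                      # Gibbs sweeps
    for j in range(d):
        r = y - X @ w + X[:, j] * w[j]    # residual excluding feature j
        s = 1.0 / (X[:, j] @ X[:, j] / noise_var + 1.0 / slab_var)
        m = s * (X[:, j] @ r) / noise_var
        # Posterior log odds of inclusion: prior odds plus Bayes factor.
        log_odds = (np.log(p0 / (1 - p0))
                    + 0.5 * np.log(s / slab_var) + 0.5 * m * m / s)
        if rng.random() < 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30))):
            w[j] = m + np.sqrt(s) * rng.normal()   # slab: sample the weight
        else:
            w[j] = 0.0                             # spike: exact zero

print("sampled support:", np.flatnonzero(w))
```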


Pattern Recognition Letters | 2010

Expectation Propagation for microarray data classification

Daniel Hernández-Lobato; José Miguel Hernández-Lobato; Alberto Suárez


Pattern Recognition | 2011

Network-based sparse Bayesian classification

José Miguel Hernández-Lobato; Daniel Hernández-Lobato; Alberto Suárez


International Conference on Machine Learning | 2017

Sequence tutor: Conservative fine-tuning of sequence generation models with KL-control

Natasha Jaques; Shixiang Gu; Dzmitry Bahdanau; José Miguel Hernández-Lobato; Richard E. Turner; Douglas Eck


Journal of Machine Learning Research | 2016

A general framework for constrained Bayesian optimization using information-based search

José Miguel Hernández-Lobato; Michael A. Gelbart; Ryan P. Adams; Matthew W. Hoffman; Zoubin Ghahramani


Computational Statistics & Data Analysis | 2011

Semiparametric bivariate Archimedean copulas

José Miguel Hernández-Lobato; Alberto Suárez

Collaboration


Dive into José Miguel Hernández-Lobato's collaborations.

Top Co-Authors

Daniel Hernández-Lobato (Autonomous University of Madrid)
Yingzhen Li (University of Cambridge)
Alberto Suárez (Autonomous University of Madrid)
Thang D. Bui (University of Cambridge)
Neil Houlsby (University of Cambridge)