Publication


Featured research published by Joaquin Quiñonero-Candela.


international conference on machine learning | 2005

Healing the relevance vector machine through augmentation

Carl Edward Rasmussen; Joaquin Quiñonero-Candela

The Relevance Vector Machine (RVM) is a sparse approximate Bayesian kernel method. It provides full predictive distributions for test cases. However, the predictive uncertainties have the unintuitive property that they get smaller the further you move away from the training cases. We give a thorough analysis of this behaviour. Inspired by the analogy to non-degenerate Gaussian Processes, we suggest augmentation to solve the problem. The purpose of the resulting model, RVM*, is primarily to corroborate the theoretical and experimental analysis. Although RVM* could be used in practical applications, it is no longer a truly sparse model. Experiments show that sparsity comes at the expense of worse predictive distributions.
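The pathology is easy to reproduce. Below is a minimal NumPy sketch (my illustration, not the authors' code) of a finite linear model with localized Gaussian basis functions, the structure underlying the RVM: the predictive variance collapses to the noise level far from the basis centres, exactly the behaviour the paper analyses and RVM* is designed to fix.

```python
import numpy as np

# Toy illustration (not the authors' code): predictive variance of a
# finite linear model with localized Gaussian basis functions. Far from
# the basis centres every phi(x) -> 0, so the variance collapses to the
# noise level instead of growing as it would for a non-degenerate GP.

rng = np.random.default_rng(0)
centres = rng.uniform(-2, 2, size=8)   # play the role of relevance vectors
noise_var = 0.01
alpha = 1.0                            # shared weight prior precision

def phi(x):
    # Gaussian basis functions of width 0.5 centred on the relevance vectors
    return np.exp(-0.5 * ((x[:, None] - centres[None, :]) / 0.5) ** 2)

X_train = rng.uniform(-2, 2, size=30)
Phi = phi(X_train)

# Posterior covariance of the weights: (alpha*I + Phi^T Phi / noise)^-1
A = alpha * np.eye(len(centres)) + Phi.T @ Phi / noise_var
Sigma = np.linalg.inv(A)

for x_star in [0.0, 3.0, 10.0]:
    p = phi(np.array([x_star]))
    var = noise_var + p @ Sigma @ p.T  # predictive variance at x*
    print(f"x*={x_star:5.1f}  predictive var = {var.item():.4f}")
# The variance shrinks toward noise_var as x* moves away from the data.
```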


international conference on acoustics, speech, and signal processing | 2002

Time series prediction based on the Relevance Vector Machine with adaptive kernels

Joaquin Quiñonero-Candela; Lars Kai Hansen

The Relevance Vector Machine (RVM) introduced by Tipping is a probabilistic model similar to the widespread Support Vector Machines (SVM), but where the training takes place in a Bayesian framework, and predictive distributions of the outputs are obtained instead of point estimates. In this paper we focus on the use of RVMs for regression. We modify this method for training generalized linear models by automatically adapting the width of the basis functions to the data at hand. Our Adaptive RVM is tried for prediction on the chaotic Mackey-Glass time series, where it obtains much better performance than the standard RVM and than other methods such as neural networks and local linear models.
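As a rough illustration of the idea (a sketch under my own assumptions, not the paper's algorithm, which adapts the width within the Bayesian training itself), one can select the basis width by maximizing the log marginal likelihood of the corresponding Bayesian linear model:

```python
import numpy as np

# Hedged sketch: pick the Gaussian basis width that maximises the log
# marginal likelihood of the induced Bayesian linear model. A grid
# search stands in for the adaptive scheme of the paper.

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(-3, 3, 40))
y = np.sin(X) + 0.1 * rng.standard_normal(40)
noise_var, alpha = 0.01, 1.0

def log_marginal(width):
    # Design matrix of Gaussian basis functions centred on the inputs
    Phi = np.exp(-0.5 * ((X[:, None] - X[None, :]) / width) ** 2)
    # Marginal: y ~ N(0, Phi Phi^T / alpha + noise_var * I)
    K = Phi @ Phi.T / alpha + noise_var * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (logdet + y @ np.linalg.solve(K, y)
                   + len(X) * np.log(2 * np.pi))

widths = np.logspace(-1, 1, 20)
best = max(widths, key=log_marginal)
print(f"selected basis width: {best:.3f}")
```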


international conference on machine learning | 2005

Evaluating predictive uncertainty challenge

Joaquin Quiñonero-Candela; Carl Edward Rasmussen; Fabian H. Sinz; Olivier Bousquet; Bernhard Schölkopf

This Chapter presents the PASCAL Evaluating Predictive Uncertainty Challenge, introduces the contributed Chapters by the participants who obtained outstanding results, and provides a discussion with some lessons to be learnt. The Challenge was set up to evaluate the ability of Machine Learning algorithms to provide good “probabilistic predictions”, rather than just the usual “point predictions” with no measure of uncertainty, in regression and classification problems. Participants had to compete on a number of regression and classification tasks, and were evaluated by both traditional losses that only take into account point predictions and losses we proposed that evaluate the quality of the probabilistic predictions.
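A toy example (mine, not the challenge code) of why such probabilistic losses matter: two Gaussian predictors with the same mean have identical squared error, yet the negative log predictive density (NLPD), a loss of the kind proposed for the challenge, sharply penalizes the overconfident one.

```python
import numpy as np

# Two Gaussian predictors with identical means, hence identical MSE,
# but different claimed variances, scored under NLPD.

rng = np.random.default_rng(2)
y = rng.standard_normal(1000)      # targets, truly N(0, 1)
mean = np.zeros_like(y)            # both models predict mean 0

def nlpd(y, mu, var):
    # Average negative log density of y under N(mu, var)
    return np.mean(0.5 * np.log(2 * np.pi * var)
                   + 0.5 * (y - mu) ** 2 / var)

for var in [1.0, 0.05]:            # honest vs. overconfident variance
    mse = np.mean((y - mean) ** 2)  # identical for both models
    print(f"var={var:4.2f}  MSE={mse:.3f}  NLPD={nlpd(y, mean, var):.3f}")
# The overconfident model is heavily penalised by NLPD but not by MSE.
```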


Switching and Learning in Feedback Systems | 2003

Analysis of some methods for reduced rank Gaussian process regression

Joaquin Quiñonero-Candela; Carl Edward Rasmussen

While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists of learning both the covariance function hyperparameters and the support set. We propose a method for learning hyperparameters for a given support set. We also review the Sparse Greedy GP (SGGP) approximation (Smola and Bartlett, 2001), which is a way of learning the support set for given hyperparameters based on approximating the posterior. We propose an alternative method to the SGGP that has better generalization capabilities. Finally we present experiments comparing the different ways of training an RRGP. We provide some Matlab code for learning RRGPs.
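For concreteness, here is a minimal sketch (assumptions and naming mine; the paper supplies its own Matlab code) of the finite-linear-model view of reduced-rank GP regression: with m support points the GP posterior mean reduces to weights on m basis functions, bringing training cost from O(n^3) down to O(n m^2).

```python
import numpy as np

# Reduced-rank GP regression as a finite linear model f(x) = k_m(x)^T w,
# built on m support points (a Subset-of-Regressors-style construction).

rng = np.random.default_rng(3)
n, m, noise_var = 200, 15, 0.01
X = np.sort(rng.uniform(-3, 3, n))
y = np.sin(2 * X) + np.sqrt(noise_var) * rng.standard_normal(n)
support = X[rng.choice(n, m, replace=False)]  # support set (here: random)

def k(a, b, ell=0.7):
    # squared-exponential covariance function
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

Knm = k(X, support)
Kmm = k(support, support) + 1e-8 * np.eye(m)  # jitter for stability

# Posterior mean weights: (Kmn Knm + sigma^2 Kmm)^-1 Kmn y
A = Knm.T @ Knm + noise_var * Kmm
w = np.linalg.solve(A, Knm.T @ y)

x_star = np.linspace(-3, 3, 5)
print("predictive mean:", k(x_star, support) @ w)
```

Note that this construction is exactly the degenerate case the paper warns about: its predictive variance vanishes away from the support set unless the model is augmented at test time as the authors propose.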


Journal of Machine Learning Research | 2005

A Unifying View of Sparse Approximate Gaussian Process Regression

Joaquin Quiñonero-Candela; Carl Edward Rasmussen


Journal of Machine Learning Research | 2010

Sparse Spectrum Gaussian Process Regression

Miguel Lázaro-Gredilla; Joaquin Quiñonero-Candela; Carl Edward Rasmussen; Aníbal R. Figueiras-Vidal


Archive | 2007

Approximation Methods for Gaussian Process Regression

Joaquin Quiñonero-Candela; Carl Edward Rasmussen; C. K. I. Williams


neural information processing systems | 2003

Multiple-step ahead prediction for non-linear dynamic systems: A Gaussian Process treatment with propagation of the uncertainty

Agathe Girard; Carl Edward Rasmussen; Joaquin Quiñonero-Candela; Roderick Murray-Smith


Archive | 2003

Prediction at an uncertain input for Gaussian processes and relevance vector machines - application to multiple-step ahead time-series forecasting

Joaquin Quiñonero-Candela; Agathe Girard; Carl Edward Rasmussen


neural information processing systems | 2002

Incremental Gaussian Processes

Joaquin Quiñonero-Candela; Ole Winther

Collaboration


Dive into Joaquin Quiñonero-Candela's collaboration.

Top Co-Authors

Florence d'Alché-Buc

Centre national de la recherche scientifique

Lars Kai Hansen

Technical University of Denmark

Ole Winther

Technical University of Denmark
