Publication


Featured research published by Yingzhen Li.


Neural Information Processing Systems | 2015

Stochastic expectation propagation

Yingzhen Li; José Miguel Hernández-Lobato; Richard E. Turner

Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as variational inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large models in large-scale dataset settings. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of datapoints, N, which often entails a prohibitively large memory overhead. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of N. SEP is therefore ideally suited to performing approximate Bayesian learning in the large-model, large-dataset setting.
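The SEP idea summarised above (keep one tied approximating factor and refine it one datapoint at a time) is compact enough to sketch. The following is a minimal, hypothetical Python illustration on a toy conjugate model of my choosing (Gaussian prior over a mean, Gaussian likelihood with known noise); it is not code or an experiment from the paper. In this conjugate case moment matching is exact, and the point is only that the state is a single site factor whose size does not grow with N.

import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for this sketch): p0(theta) = N(0, 1),
# x_n | theta ~ N(theta, sigma2) with sigma2 known.
sigma2 = 1.0
N = 200
x = rng.normal(2.0, np.sqrt(sigma2), size=N)

# Natural parameters (precision, precision * mean) of the prior.
prior_prec, prior_pm = 1.0, 0.0

# SEP state: a single *average* site f(theta), tied across all N datapoints,
# so q(theta) is proportional to p0(theta) * f(theta)^N and the memory is
# constant in N (full EP would store N separate sites).
f_prec, f_pm = 0.0, 0.0

def q_natural():
    return prior_prec + N * f_prec, prior_pm + N * f_pm

for sweep in range(20):
    for xn in x:
        q_prec, q_pm = q_natural()
        # 1. Cavity: divide out one copy of the average site.
        cav_prec, cav_pm = q_prec - f_prec, q_pm - f_pm
        # 2. Tilted distribution: cavity times the exact likelihood of x_n.
        #    Gaussian-Gaussian, so moment matching is exact and closed form.
        tilt_prec, tilt_pm = cav_prec + 1.0 / sigma2, cav_pm + xn / sigma2
        # 3. Implicit new site = tilted / cavity, followed by a damped update
        #    of the shared site with step size 1/N.
        site_prec, site_pm = tilt_prec - cav_prec, tilt_pm - cav_pm
        f_prec += (site_prec - f_prec) / N
        f_pm += (site_pm - f_pm) / N

q_prec, q_pm = q_natural()
print(f"SEP posterior:   mean={q_pm / q_prec:.3f}, var={1.0 / q_prec:.4f}")
print(f"Exact posterior: mean={x.sum() / (sigma2 + N):.3f}, var={sigma2 / (sigma2 + N):.4f}")

In this toy setting the sketch recovers the exact posterior; the memory saving relative to storing N separate sites is the behaviour the abstract describes.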


arXiv: Machine Learning | 2018

Training Deep Gaussian Processes using Stochastic Expectation Propagation and Probabilistic Backpropagation

Thang D. Bui; José Miguel Hernández-Lobato; Yingzhen Li; Daniel Hernández-Lobato; Richard E. Turner

Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers. DGPs are probabilistic and non-parametric and as such are arguably more flexible, have a greater capacity to generalise, and provide better-calibrated uncertainty estimates than alternative deep models. The focus of this paper is scalable approximate Bayesian learning of these networks. The paper develops a novel and efficient extension of probabilistic backpropagation, a state-of-the-art method for training Bayesian neural networks, that can be used to train DGPs. The new method leverages a recently proposed approach for scaling expectation propagation, called stochastic expectation propagation. The method is able to automatically discover useful input warping, expansion or compression, and is therefore a flexible form of Bayesian kernel design. We demonstrate the success of the new method for supervised learning on several real-world datasets, showing that it typically outperforms GP regression and is never much worse.
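For reference, the multi-layer construction mentioned at the start of the abstract can be written in one generic form (the paper's own notation may differ) as a composition of GP layers with noise between them:

f^{(l)} \sim \mathcal{GP}\big(0, k^{(l)}\big), \quad l = 1, \dots, L,
h^{(1)} = f^{(1)}(x) + \epsilon^{(1)},
h^{(l)} = f^{(l)}\big(h^{(l-1)}\big) + \epsilon^{(l)}, \quad l = 2, \dots, L-1,
y = f^{(L)}\big(h^{(L-1)}\big) + \epsilon^{(L)}.

Each layer warps (or expands/compresses) the representation produced by the previous one, which is the sense in which the abstract describes learned input warping as a form of Bayesian kernel design.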


International Conference on Machine Learning | 2016

Black-box α-divergence minimization

José Miguel Hernández-Lobato; Yingzhen Li; Mark Rowland; Daniel Hernández-Lobato; Thang D. Bui; Richard E. Turner

Black-box alpha (BB-α) is a new approximate inference method based on the minimization of α-divergences. BB-α scales to large datasets because it can be implemented using stochastic gradient descent. BB-α can be applied to complex probabilistic models with little effort since it only requires as input the likelihood function and its gradients. These gradients can be easily obtained using automatic differentiation. By changing the divergence parameter α, the method is able to interpolate between variational Bayes (VB) (α → 0) and an algorithm similar to expectation propagation (EP) (α = 1). Experiments on probit regression and neural network regression and classification problems show that BB-α with non-standard settings of α, such as α = 0.5, usually produces better predictions than with α → 0 (VB) or α = 1 (EP).
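For reference, the α-divergence family behind this interpolation can be written (in one common convention; the paper's scaling may differ by constants) as

D_{\alpha}[p \,\|\, q] = \frac{1}{\alpha(1-\alpha)} \left( 1 - \int p(\theta)^{\alpha} \, q(\theta)^{1-\alpha} \, d\theta \right),

which recovers KL[q \| p], the variational Bayes objective, in the limit α → 0, and KL[p \| q], the divergence locally minimised by EP, in the limit α → 1.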


arXiv: Machine Learning | 2018

Stochastic Expectation Propagation for Large Scale Gaussian Process Classification

Daniel Hernández-Lobato; José Miguel Hernández-Lobato; Yingzhen Li; Thang D. Bui; Richard E. Turner

A method for large-scale Gaussian process classification based on expectation propagation (EP) has recently been proposed. It allows Gaussian process classifiers to be trained on very large datasets that were out of reach for previous implementations of EP, and it has been shown to be competitive with related techniques based on stochastic variational inference. Nevertheless, the memory required scales linearly with the dataset size, unlike in variational methods, which is a severe limitation when the number of instances is very large. Here we show that this problem is avoided when stochastic EP is used to train the model.
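To make the memory comparison concrete, and following the SEP construction summarised earlier on this page, the two approximations can be written as

q_{\mathrm{EP}}(\theta) \propto p_0(\theta) \prod_{n=1}^{N} f_n(\theta) \quad (N sets of site parameters, memory growing with N),
q_{\mathrm{SEP}}(\theta) \propto p_0(\theta) \, f(\theta)^{N} \quad (one tied site, memory independent of N),

which is why switching from EP to stochastic EP removes the linear-in-N memory cost while retaining local, per-datapoint updates.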


Neural Information Processing Systems | 2016

Rényi Divergence Variational Inference

Yingzhen Li; Richard E. Turner


International Conference on Machine Learning | 2016

Deep Gaussian processes for regression using approximate expectation propagation

Thang D. Bui; José Miguel Hernández-Lobato; Daniel Hernández-Lobato; Yingzhen Li; Richard E. Turner


International Conference on Learning Representations | 2018

Variational Continual Learning

Cuong V. Nguyen; Yingzhen Li; Thang D. Bui; Richard E. Turner


arXiv: Machine Learning | 2017

Approximate Inference with Amortised MCMC

Yingzhen Li; Richard E. Turner; Qiang Liu


arXiv: Machine Learning | 2016

Variational Inference with Rényi Divergence

Yingzhen Li; Richard E. Turner

Collaboration


Dive into Yingzhen Li's collaborations.

Top Co-Authors

Thang D. Bui
University of Cambridge

Daniel Hernández-Lobato
Autonomous University of Madrid

Mark Rowland
University of Cambridge