Chunyuan Li
Duke University
Publications
Featured research published by Chunyuan Li.
Meeting of the Association for Computational Linguistics | 2017
Zhe Gan; Chunyuan Li; Changyou Chen; Yunchen Pu; Qinliang Su; Lawrence Carin
Recurrent neural networks (RNNs) have shown promising performance for language modeling. However, traditional training of RNNs using back-propagation through time often suffers from overfitting. One reason for this is that stochastic optimization (used for large training sets) does not provide good estimates of model uncertainty. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (also appropriate for large training sets) to learn weight uncertainty in RNNs. It yields a principled Bayesian learning algorithm that adds gradient noise during training (enhancing exploration of the model-parameter space) and averages over models at test time. Extensive experiments on various RNN models and across a broad range of applications demonstrate the superiority of the proposed approach relative to stochastic optimization.
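The core mechanism the abstract describes, injecting Gaussian noise into the gradient update so that the iterates sample from the posterior rather than converge to a point estimate, can be sketched with stochastic gradient Langevin dynamics (SGLD), one SG-MCMC variant. This is a minimal illustration on a toy 1-D posterior, not the paper's exact sampler; the function names are hypothetical.

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One SGLD update: a half-step along the gradient of the log
    posterior, plus Gaussian noise whose variance equals the step size."""
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise

# Toy target: a standard normal posterior, so grad log p(theta) = -theta.
rng = np.random.default_rng(0)
theta = np.array([5.0])          # start far from the mode
samples = []
for t in range(5000):
    theta = sgld_step(theta, lambda th: -th, step_size=0.1, rng=rng)
    if t >= 1000:                # discard burn-in iterations
        samples.append(theta[0])
```

After burn-in, the collected `samples` approximate draws from the posterior; averaging predictions over such samples is the test-time model averaging the abstract refers to.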
Computer Vision and Pattern Recognition | 2016
Chunyuan Li; Andrew Stevens; Changyou Chen; Yunchen Pu; Zhe Gan; Lawrence Carin
Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Because shapes vary widely, accurate recognition relies on good estimates of model uncertainty, which traditional DNN training via stochastic optimization ignores. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch-Normalization.
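Once SG-MCMC training has produced a set of posterior weight samples, prediction averages the class probabilities over those samples instead of using a single point estimate. A minimal sketch, assuming a linear softmax classifier and synthetic weight samples (both hypothetical stand-ins for the paper's DNNs):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict_bayesian(x, weight_samples):
    """Average the predictive distribution over posterior weight
    samples collected during SG-MCMC training."""
    probs = [softmax(x @ W) for W in weight_samples]
    return np.mean(probs, axis=0)

# Toy demo: 20 hypothetical posterior samples of a 4-feature,
# 3-class linear classifier, applied to two test inputs.
rng = np.random.default_rng(1)
W_samples = [rng.normal(size=(4, 3)) for _ in range(20)]
x = rng.normal(size=(2, 4))
p = predict_bayesian(x, W_samples)
```

Each row of `p` is a valid probability distribution; the averaging is what lets the predictive distribution reflect weight uncertainty rather than a single network's confidence.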
Neural Information Processing Systems | 2016
Yunchen Pu; Zhe Gan; Ricardo Henao; Xin Yuan; Chunyuan Li; Andrew Stevens; Lawrence Carin
National Conference on Artificial Intelligence | 2016
Chunyuan Li; Changyou Chen; David E. Carlson; Lawrence Carin
Neural Information Processing Systems | 2015
Zhe Gan; Chunyuan Li; Ricardo Henao; David E. Carlson; Lawrence Carin
Neural Information Processing Systems | 2017
Chunyuan Li; Hao Liu; Changyou Chen; Yunchen Pu; Liqun Chen; Ricardo Henao; Lawrence Carin
International Conference on Artificial Intelligence and Statistics | 2016
Changyou Chen; David E. Carlson; Zhe Gan; Chunyuan Li; Lawrence Carin
Neural Information Processing Systems | 2017
Yunchen Pu; Weiyao Wang; Ricardo Henao; Liqun Chen; Zhe Gan; Chunyuan Li; Lawrence Carin
Neural Information Processing Systems | 2017
Zhe Gan; Liqun Chen; Weiyao Wang; Yunchen Pu; Yizhe Zhang; Hao Liu; Chunyuan Li; Lawrence Carin
Neural Information Processing Systems | 2017
Yunchen Pu; Zhe Gan; Ricardo Henao; Chunyuan Li; Shaobo Han; Lawrence Carin