Publications


Featured research published by Roger B. Grosse.


international conference on machine learning | 2009

Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations

Honglak Lee; Roger B. Grosse; Rajesh Ranganath; Andrew Y. Ng

There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.
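
The core trick here is probabilistic max-pooling: within each small block of a detection layer, at most one unit may fire, and the block's pooling unit is on exactly when some detection unit in it fires. Below is a minimal NumPy sketch of that block-wise sampling rule as the abstract describes it; the function name, array shapes, and the softmax-with-off-state formulation are illustrative assumptions, not the authors' code.

    import numpy as np

    def probabilistic_max_pool(bottom_up, block=2, rng=None):
        # bottom_up: (H, W) bottom-up inputs I(h_ij) to one detection group,
        # with H and W assumed divisible by `block` (hypothetical sketch).
        rng = np.random.default_rng() if rng is None else rng
        H, W = bottom_up.shape
        h = np.zeros((H, W), dtype=int)                    # detection units
        p = np.zeros((H // block, W // block), dtype=int)  # pooling units
        for i in range(0, H, block):
            for j in range(0, W, block):
                e = bottom_up[i:i + block, j:j + block].ravel()
                # Softmax over the block's units plus one "all off" state,
                # so at most one detection unit in the block turns on.
                logits = np.append(e, 0.0)
                probs = np.exp(logits - logits.max())
                probs /= probs.sum()
                k = rng.choice(probs.size, p=probs)
                if k < e.size:                           # a unit fired
                    h[i + k // block, j + k % block] = 1
                    p[i // block, j // block] = 1        # pool unit is on
        return h, p

The "all off" outcome leaves both the block's detection units and its pooling unit at zero, which is what makes the shrinkage probabilistically sound rather than a deterministic max.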


Communications of the ACM | 2011

Unsupervised learning of hierarchical representations with convolutional deep belief networks

Honglak Lee; Roger B. Grosse; Rajesh Ranganath; Andrew Y. Ng

There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.


international conference on computer vision | 2009

Ground truth dataset and baseline evaluations for intrinsic image algorithms

Roger B. Grosse; Micah K. Johnson; Edward H. Adelson; William T. Freeman

The intrinsic image decomposition aims to retrieve “intrinsic” properties of an image, such as shading and reflectance. To make it possible to quantitatively compare different approaches to this problem in realistic settings, we present a ground-truth dataset of intrinsic image decompositions for a variety of real-world objects. For each object, we separate an image of it into three components: Lambertian shading, reflectance, and specularities. We use our dataset to quantitatively compare several existing algorithms; we hope that this dataset will serve as a means for evaluating future work on intrinsic images.
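
For concreteness, the three components relate to the input roughly as image = Lambertian shading x reflectance + specularities, applied pixelwise. A minimal sketch of that recomposition, assuming aligned float images in [0, 1]; the function name and the clipping are illustrative assumptions, not the dataset's exact convention.

    import numpy as np

    def recombine(shading, reflectance, specular):
        # Pixelwise: diffuse term (shading * reflectance) plus specular term.
        return np.clip(shading * reflectance + specular, 0.0, 1.0)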


neural information processing systems | 2013

Annealing between distributions by averaging moments

Roger B. Grosse; Chris J. Maddison; Ruslan Salakhutdinov

Many powerful Monte Carlo techniques for estimating partition functions, such as annealed importance sampling (AIS), are based on sampling from a sequence of intermediate distributions which interpolate between a tractable initial distribution and the intractable target distribution. The near-universal practice is to use geometric averages of the initial and target distributions, but alternative paths can perform substantially better. We present a novel sequence of intermediate distributions for exponential families defined by averaging the moments of the initial and target distributions. We analyze the asymptotic performance of both the geometric and moment averages paths and derive an asymptotically optimal piecewise linear schedule. AIS with moment averaging performs well empirically at estimating partition functions of restricted Boltzmann machines (RBMs), which form the building blocks of many deep learning models.
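
As a reference point, here is a minimal 1-D sketch of AIS along the standard geometric path between two Gaussians, chosen so the true log partition-function ratio is known; the paper's contribution is to replace this geometric interpolation with intermediates defined by averaged moments. The densities, schedule, and Metropolis step size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Unnormalized initial and target densities (1-D Gaussians), with
    # Z0 = sqrt(2*pi) and Z1 = sqrt(2*pi)*sigma, so log(Z1/Z0) = log(sigma).
    mu, sigma = 3.0, 0.5
    log_f0 = lambda x: -0.5 * x ** 2
    log_f1 = lambda x: -0.5 * ((x - mu) / sigma) ** 2

    def log_fb(x, b):
        # Geometric path: f_b = f0^(1-b) * f1^b.
        return (1.0 - b) * log_f0(x) + b * log_f1(x)

    def ais(n_chains=1000, betas=np.linspace(0.0, 1.0, 101), n_mh=5):
        x = rng.standard_normal(n_chains)    # exact samples from p0
        logw = np.zeros(n_chains)
        for b_prev, b in zip(betas[:-1], betas[1:]):
            logw += log_fb(x, b) - log_fb(x, b_prev)  # weight updates
            for _ in range(n_mh):            # Metropolis moves targeting p_b
                prop = x + 0.5 * rng.standard_normal(n_chains)
                accept = np.log(rng.random(n_chains)) < log_fb(prop, b) - log_fb(x, b)
                x = np.where(accept, prop, x)
        # log-mean-exp of the weights estimates log(Z1/Z0)
        return np.logaddexp.reduce(logw) - np.log(n_chains)

    print(ais(), np.log(sigma))  # estimate vs. exact log(Z1/Z0)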


International Statistical Review | 2016

Statistical Inference, Learning and Models in Big Data

Beate Franke; Jean-François Plante; Ribana Roscher; En-Shiun Annie Lee; Cathal Smyth; Armin Hatefi; Fuqi Chen; Einat Gil; Alexander G. Schwing; Alessandro Selvitella; Michael M. Hoffman; Roger B. Grosse; Dieter Hendricks; Nancy Reid

The need for new methods to deal with big data is a common theme in most scientific fields, although its definition tends to vary with the context. Statistical ideas are an essential part of this, and as a partial response, a thematic program on statistical inference, learning and models in big data was held in 2015 in Canada, under the general direction of the Canadian Statistical Sciences Institute, with major funding from, and most activities located at, the Fields Institute for Research in Mathematical Sciences. This paper gives an overview of the topics covered, describing challenges and strategies that seem common to many different areas of application and including some examples of applications to make these challenges and strategies more concrete.


international conference on learning representations | 2016

Importance Weighted Autoencoders

Yuri Burda; Roger B. Grosse; Ruslan Salakhutdinov


uncertainty in artificial intelligence | 2007

Shift-invariant sparse coding for audio classification

Roger B. Grosse; Rajat Raina; Helen Kwong; Andrew Y. Ng


international conference on machine learning | 2013

Structure Discovery in Nonparametric Regression through Compositional Kernel Search

David K. Duvenaud; James Robert Lloyd; Roger B. Grosse; Joshua B. Tenenbaum; Zoubin Ghahramani


international conference on machine learning | 2015

Optimizing Neural Networks with Kronecker-factored Approximate Curvature

James Martens; Roger B. Grosse


international conference on learning representations | 2017

On the Quantitative Analysis of Decoder-Based Generative Models

Yuhuai Wu; Yuri Burda; Ruslan Salakhutdinov; Roger B. Grosse

Collaboration


Dive into Roger B. Grosse's collaborations.

Top Co-Authors

Jimmy Ba (University of Toronto)
Yuhuai Wu (University of Toronto)