
Publication


Featured research published by Matthew D. Zeiler.


European Conference on Computer Vision | 2014

Visualizing and Understanding Convolutional Networks

Matthew D. Zeiler; Rob Fergus

Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al. [18]). However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution of different model layers. We show that our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.
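
The visualization works by attaching a deconvolutional network to a layer and projecting a chosen activation back to input pixel space: unpool using the recorded max-pooling "switch" locations, rectify, then filter with the transposed (flipped) kernels. Below is a minimal single-layer NumPy sketch of that reverse pass; the input size, the single 3x3 filter, and all helper names are our illustrative assumptions, not the authors' code.

import numpy as np

def conv2d(x, w):
    # 'valid' 2-D cross-correlation of input x with kernel w
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def maxpool_with_switches(a, k=2):
    # Max pooling that records which position won in each k x k region
    H, W = a.shape
    pooled = np.zeros((H // k, W // k))
    switches = np.zeros((H // k, W // k), dtype=int)
    for i in range(H // k):
        for j in range(W // k):
            patch = a[i * k:(i + 1) * k, j * k:(j + 1) * k]
            switches[i, j] = np.argmax(patch)
            pooled[i, j] = patch.flat[switches[i, j]]
    return pooled, switches

def unpool(p, switches, k=2):
    # Place each pooled value back at its recorded max location
    H, W = p.shape
    out = np.zeros((H * k, W * k))
    for i in range(H):
        for j in range(W):
            r, c = divmod(switches[i, j], k)
            out[i * k + r, j * k + c] = p[i, j]
    return out

def deconv_visualize(x, w, k=2):
    # Forward pass: convolve, rectify, pool (recording switches)
    a = np.maximum(conv2d(x, w), 0)
    p, switches = maxpool_with_switches(a, k)
    # Keep only the strongest activation, zero the rest
    keep = np.zeros_like(p)
    keep.flat[np.argmax(p)] = p.flat[np.argmax(p)]
    # Reverse pass: unpool with switches, rectify, transposed filtering
    r = np.maximum(unpool(keep, switches, k), 0)
    pad = w.shape[0] - 1
    return conv2d(np.pad(r, pad), w[::-1, ::-1])

x = np.random.randn(8, 8)
w = np.random.randn(3, 3)
print(deconv_visualize(x, w).shape)  # (8, 8): back in input pixel space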


International Conference on Computer Vision | 2011

Adaptive deconvolutional networks for mid and high level feature learning

Matthew D. Zeiler; Graham W. Taylor; Rob Fergus

We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation, and we show models with 4 layers, trained on images from the Caltech-101 and Caltech-256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.
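
As a rough guide to the "convolutional sparse coding" in this abstract, a single layer seeks sparse feature maps z_k and filters f_k whose summed convolutions reconstruct the layer's input y. A simplified per-layer objective, in our own notation (the paper's actual cost differs in that, as the abstract notes, each layer reconstructs the original input through the layers below it, not just the layer directly beneath), is

C(y) = \frac{\lambda}{2} \Big\| \sum_k z_k \ast f_k - y \Big\|_2^2 + \sum_k \| z_k \|_1

where \ast denotes 2-D convolution and the \ell_1 term enforces sparsity; inference alternates between minimizing over the z_k and max pooling their responses.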


International Conference on Acoustics, Speech, and Signal Processing | 2013

On rectified linear units for speech processing

Matthew D. Zeiler; Marc'Aurelio Ranzato; Rajat Monga; Mark Mao; Ke Yang; Quoc V. Le; Patrick Nguyen; Andrew W. Senior; Vincent Vanhoucke; Jeffrey Dean; Geoffrey E. Hinton

Deep neural networks have recently become the gold standard for acoustic modeling in speech recognition systems. The key computational unit of a deep network is a linear projection followed by a point-wise non-linearity, which is typically a logistic function. In this work, we show that we can improve generalization and make training of deep networks faster and simpler by substituting the logistic units with rectified linear units. These units are linear when their input is positive and zero otherwise. In a supervised setting, we can successfully train very deep nets from random initialization on a large vocabulary speech recognition task achieving lower word error rates than using a logistic network with the same topology. Similarly in an unsupervised setting, we show how we can learn sparse features that can be useful for discriminative tasks. All our experiments are executed in a distributed environment using several hundred machines and several hundred hours of speech data.
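
Concretely, the substitution replaces the logistic unit \sigma(x) = 1 / (1 + e^{-x}) with the rectified linear unit

f(x) = \max(0, x), \qquad f'(x) = 1 \text{ if } x > 0, \text{ else } 0

so, as the abstract says, a unit is linear for positive input and exactly zero otherwise; one commonly cited consequence is that the gradient never saturates on the active side, whereas \sigma saturates for large |x|.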


arXiv: Learning | 2012

ADADELTA: An Adaptive Learning Rate Method

Matthew D. Zeiler
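
No abstract is shown for this entry, but the update rule is compact: ADADELTA maintains exponentially decaying averages of squared gradients and squared parameter updates, and scales each step by the ratio of their RMS values, removing the need for a hand-tuned global learning rate. A minimal NumPy sketch (rho and eps follow the paper's ρ and ε; the toy quadratic objective is our illustration):

import numpy as np

def adadelta_step(grad, state, rho=0.95, eps=1e-6):
    # Decaying average of squared gradients
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    # Step scaled by RMS of past updates over RMS of gradients
    update = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    # Decaying average of squared updates
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * update ** 2
    return update

target = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
state = {"Eg2": np.zeros_like(w), "Edx2": np.zeros_like(w)}
for _ in range(1000):
    grad = 2 * (w - target)        # gradient of ||w - target||^2
    w += adadelta_step(grad, state)
print(w)  # moves toward [1, -2, 0.5] with no learning rate set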


International Conference on Machine Learning | 2013

Regularization of Neural Networks using DropConnect

Li Wan; Matthew D. Zeiler; Sixin Zhang; Yann LeCun; Rob Fergus
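
No abstract is shown here; in brief, DropConnect generalizes Dropout by randomly zeroing individual weights during training rather than whole output activations. A minimal sketch of a DropConnect forward pass (the layer sizes and keep probability p are illustrative assumptions):

import numpy as np

def dropconnect_forward(x, W, b, p=0.5, rng=None):
    rng = rng or np.random.default_rng()
    # Bernoulli(p) mask over individual weights; Dropout would
    # instead mask whole output activations.
    M = rng.random(W.shape) < p
    return np.maximum((M * W) @ x + b, 0.0)  # rectified layer output

x = np.random.randn(4)      # layer input
W = np.random.randn(3, 4)   # weight matrix
b = np.zeros(3)
print(dropconnect_forward(x, W, b))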


Computer Vision and Pattern Recognition | 2010

Deconvolutional networks

Matthew D. Zeiler; Dilip Krishnan; Graham W. Taylor; Rob Fergus


International Conference on Learning Representations | 2013

Stochastic Pooling for Regularization of Deep Convolutional Neural Networks

Matthew D. Zeiler; Rob Fergus
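
No abstract appears for this entry; in outline, stochastic pooling replaces deterministic max pooling during training by sampling one activation from each pooling region with probability proportional to its value (activations are assumed non-negative, e.g. after a ReLU). A sketch for a single region:

import numpy as np

def stochastic_pool(region, rng=None):
    rng = rng or np.random.default_rng()
    a = region.ravel()
    s = a.sum()
    # Probabilities proportional to the non-negative activations;
    # fall back to uniform if the region is all zeros.
    p = a / s if s > 0 else np.full(a.size, 1.0 / a.size)
    return rng.choice(a, p=p)

region = np.array([[1.0, 3.0], [0.0, 4.0]])
print(stochastic_pool(region))  # returns 4.0 with probability 4/8

At test time the paper instead weights each activation by its probability and averages, rather than sampling.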


arXiv | 2013

Visualizing and Understanding Convolutional Neural Networks

Matthew D. Zeiler; Rob Fergus


Neural Information Processing Systems | 2011

Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines

Matthew D. Zeiler; Graham W. Taylor; Leonid Sigal; Iain A. Matthews; Rob Fergus


European Symposium on Artificial Neural Networks | 2009

Modeling pigeon behavior using a Conditional Restricted Boltzmann Machine

Matthew D. Zeiler; Graham W. Taylor; Nikolaus F. Troje; Geoffrey E. Hinton

Collaboration


Dive into Matthew D. Zeiler's collaborations.

Top Co-Authors

Ke Yang (Carnegie Mellon University)
Li Wan (New York University)