Publication


Featured research published by David Eigen.


International Conference on Computer Vision (ICCV) | 2015

Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-scale Convolutional Architecture

David Eigen; Rob Fergus

In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.
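
As a concrete illustration of the scale-by-scale refinement described above, the following is a minimal sketch of a two-scale version, assuming PyTorch: a coarse network predicts a low-resolution map from the whole image, and a finer network refines it from the image concatenated with the upsampled coarse prediction. The layer sizes, channel counts, and class names (CoarseNet, FineNet) are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoarseNet(nn.Module):
        def __init__(self, out_channels=1):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3), nn.ReLU(),
                nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            )
            self.predict = nn.Conv2d(128, out_channels, kernel_size=3, padding=1)

        def forward(self, x):
            # whole-image view, predicted at reduced resolution
            return self.predict(self.features(x))

    class FineNet(nn.Module):
        def __init__(self, out_channels=1):
            super().__init__()
            # refines from the RGB image concatenated with the upsampled coarse map
            self.refine = nn.Sequential(
                nn.Conv2d(3 + out_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv2d(64, out_channels, kernel_size=5, padding=2),
            )

        def forward(self, x, coarse):
            coarse_up = F.interpolate(coarse, size=x.shape[-2:], mode="bilinear",
                                      align_corners=False)
            return self.refine(torch.cat([x, coarse_up], dim=1))

    image = torch.randn(1, 3, 240, 320)     # toy input
    coarse = CoarseNet()(image)             # low-resolution prediction
    output = FineNet()(image, coarse)       # refined full-resolution map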


International Conference on Computer Vision (ICCV) | 2013

Restoring an Image Taken through a Window Covered with Dirt or Rain

David Eigen; Dilip Krishnan; Rob Fergus

Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions.
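
A rough sketch of the patch-level training setup described above, assuming PyTorch: a small convolutional network is fit to map corrupted patches to their clean counterparts under a pixelwise loss. The architecture, patch size, and optimizer settings are assumptions for illustration, not the paper's.

    import torch
    import torch.nn as nn

    # corrupted RGB patch in, clean RGB patch out; sizes are illustrative
    patch_cnn = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.Tanh(),
        nn.Conv2d(64, 64, kernel_size=5, padding=2), nn.Tanh(),
        nn.Conv2d(64, 3, kernel_size=5, padding=2),
    )
    optimizer = torch.optim.SGD(patch_cnn.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.MSELoss()

    def train_step(corrupted, clean):
        """One update on a batch of aligned patch pairs, shape (N, 3, H, W)."""
        optimizer.zero_grad()
        loss = loss_fn(patch_cnn(corrupted), clean)
        loss.backward()
        optimizer.step()
        return loss.item()

    # toy usage: random stand-ins for a clean/corrupted patch batch
    print(train_step(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)))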


Computer Vision and Pattern Recognition (CVPR) | 2012

Nonparametric image parsing using adaptive neighbor sets

David Eigen; Rob Fergus

This paper proposes a non-parametric approach to scene parsing inspired by the work of Tighe and Lazebnik [22]. In their approach, a simple kNN scheme with multiple descriptor types is used to classify super-pixels. We add two novel mechanisms: (i) a principled and efficient method for learning per-descriptor weights that minimizes classification error, and (ii) a context-driven adaptation of the training set used for each query, which conditions on common classes (which are relatively easy to classify) to improve performance on rare ones. The first technique helps to remove extraneous descriptors that result from the imperfect distance metrics/representations of each super-pixel. The second contribution re-balances the class frequencies, away from the highly-skewed distribution found in real-world scenes. Both methods give a significant performance boost over [22] and the overall system achieves state-of-the-art performance on the SIFT-Flow dataset.
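
A toy sketch of the weighted-descriptor kNN step, using NumPy (an assumption; the paper does not prescribe an implementation): each descriptor type contributes a distance to every training superpixel, the distances are combined with per-descriptor weights, and the query takes the majority label among its nearest neighbors. Here the weights are set by hand, whereas the paper learns them by minimizing classification error and additionally adapts the neighbor set per query.

    import numpy as np
    from collections import Counter

    def knn_label(query_descs, train_descs, train_labels, weights, k=5):
        """query_descs/train_descs: dicts mapping descriptor name -> feature arrays."""
        dist = np.zeros(len(train_labels))
        for name, w in weights.items():
            # per-descriptor distance from the query superpixel to every training one
            dist += w * np.linalg.norm(train_descs[name] - query_descs[name], axis=1)
        nearest = np.argsort(dist)[:k]
        return Counter(train_labels[nearest]).most_common(1)[0][0]

    # toy usage with two descriptor types and hand-set weights
    rng = np.random.default_rng(0)
    train = {"color": rng.normal(size=(100, 8)), "texture": rng.normal(size=(100, 16))}
    labels = rng.integers(0, 3, size=100)
    query = {"color": rng.normal(size=8), "texture": rng.normal(size=16)}
    print(knn_label(query, train, labels, weights={"color": 0.7, "texture": 0.3}))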


Computer Vision and Pattern Recognition (CVPR) | 2015

End-to-end integration of a Convolutional Network, Deformable Parts Model and non-maximum suppression

Li Wan; David Eigen; Rob Fergus

Deformable Parts Models and Convolutional Networks each have achieved notable performance in object detection. Yet these two approaches find their strengths in complementary areas: DPMs are well-versed in object composition, modeling fine-grained spatial relationships between parts; likewise, ConvNets are adept at producing powerful image features, having been discriminatively trained directly on the pixels. In this paper, we propose a new model that combines these two approaches, obtaining the advantages of each. We train this model using a new structured loss function that considers all bounding boxes within an image, rather than isolated object instances. This enables the non-maximum suppression (NMS) operation, previously treated as a separate post-processing stage, to be integrated into the model. This allows for discriminative training of our combined ConvNet + DPM + NMS model in an end-to-end fashion. We evaluate our system on the PASCAL VOC 2007 and 2011 datasets, achieving competitive results on both benchmarks.
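
To make "NMS inside the model" concrete, here is a toy PyTorch sketch, not the paper's formulation: greedy suppression runs in the forward pass over differentiable detection scores, so a loss defined on the surviving boxes can backpropagate into whatever network produced those scores. The greedy rule and the 0.5 IoU threshold are illustrative assumptions.

    import torch
    from torchvision.ops import box_iou

    def nms_in_forward(boxes, scores, iou_thresh=0.5):
        """boxes: (N, 4) tensor in xyxy format; scores: (N,) differentiable tensor."""
        order = scores.argsort(descending=True)
        suppressed = torch.zeros(len(boxes), dtype=torch.bool)
        keep = []
        for i in order:
            if suppressed[i]:
                continue
            keep.append(i)
            # suppress candidates that overlap the kept box too strongly
            overlap = box_iou(boxes[i:i + 1], boxes).squeeze(0)
            suppressed |= overlap > iou_thresh
        keep = torch.stack(keep)
        return boxes[keep], scores[keep]   # kept scores stay in the autograd graph

    # toy usage: three candidate boxes, two of them heavily overlapping
    boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [20., 20., 30., 30.]])
    scores = torch.tensor([0.9, 0.8, 0.6], requires_grad=True)
    kept_boxes, kept_scores = nms_in_forward(boxes, scores)
    kept_scores.sum().backward()           # gradients flow back to the raw scores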


International Conference on Learning Representations (ICLR) | 2014

OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks

Pierre Sermanet; David Eigen; Xiang Zhang; Michael Mathieu; Rob Fergus; Yann LeCun


Neural Information Processing Systems (NIPS) | 2014

Depth Map Prediction from a Single Image using a Multi-Scale Deep Network

David Eigen; Christian Puhrsch; Rob Fergus


International Conference on Computer Vision (ICCV) | 2015

Unsupervised Learning of Spatiotemporally Coherent Metrics

Ross Goroshin; Joan Bruna; Jonathan Tompson; David Eigen; Yann LeCun


arXiv: Learning | 2013

Understanding deep architectures using a recursive convolutional network

David Eigen; Jason Tyler Rolfe; Rob Fergus; Yann LeCun


arXiv: Computer Vision and Pattern Recognition | 2015

Unsupervised Feature Learning from Temporal Data.

Ross Goroshin; Joan Bruna; Jonathan Tompson; David Eigen; Yann LeCun


arXiv: Learning | 2014

Learning Factored Representations in a Deep Mixture of Experts

David Eigen; Marc'Aurelio Ranzato; Ilya Sutskever

Collaboration


Dive into David Eigen's collaborations.

Top Co-Authors

Amir Erfan Eshratifar

University of Southern California

Massoud Pedram

University of Southern California
