Publications


Featured research published by Nathan Silberman.


European Conference on Computer Vision | 2012

Indoor Segmentation and Support Inference from RGBD Images

Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus

We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.
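
The 1449-image dataset introduced here is the NYU Depth V2 labeled subset (referenced again in the 2014 coverage-loss paper below). It is distributed as an HDF5-compatible .mat file; a minimal loading sketch, assuming the standard nyu_depth_v2_labeled.mat download and making no promises about the stored axis order:

```python
# Sketch: loading the NYU Depth V2 labeled subset. Assumes the file
# nyu_depth_v2_labeled.mat has been downloaded; field names follow the
# dataset's documentation, axis order is unverified here.
import h5py
import numpy as np

with h5py.File("nyu_depth_v2_labeled.mat", "r") as f:
    images = np.array(f["images"])  # RGB frames (may need transposing)
    depths = np.array(f["depths"])  # per-pixel depth in meters
    labels = np.array(f["labels"])  # per-pixel semantic class ids

print(images.shape, depths.shape, labels.shape)
```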


International Conference on Computer Vision | 2011

Indoor Scene Segmentation Using a Structured Light Sensor

Nathan Silberman; Rob Fergus

In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate depth maps and dense label coverage. Evaluating our model on this dataset reveals that the combination of depth and intensity images gives dramatic performance gains over intensity images alone. Our results clearly demonstrate the utility of structured light sensors for scene understanding.
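
As a rough sketch of what a CRF over pixel labels looks like, here is a generic grid Potts model with assumed potentials; the paper's exact energy terms and its 3D location prior are not reproduced here:

```python
# Generic grid-CRF energy sketch (assumed form, not the paper's model):
# per-pixel unary costs plus a Potts penalty on 4-connected neighbors
# that disagree.
import numpy as np

def crf_energy(labels, unary, pairwise_weight=1.0):
    """labels: (H, W) int class map; unary: (H, W, K) per-class costs,
    e.g. derived from intensity and depth features."""
    h, w = labels.shape
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    e += pairwise_weight * (labels[1:, :] != labels[:-1, :]).sum()   # vertical edges
    e += pairwise_weight * (labels[:, 1:] != labels[:, :-1]).sum()   # horizontal edges
    return float(e)
```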


Computer Vision and Pattern Recognition | 2017

Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

Konstantinos Bousmalis; Nathan Silberman; David Dohan; Dumitru Erhan; Dilip Krishnan

Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
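
A minimal sketch of how such an objective can be wired up, under assumptions: the module names (G, D, T), the non-saturating GAN loss, and the weighting alpha are illustrative choices, not the paper's specification. The key point is that source labels carry over to the adapted images because the adaptation happens in pixel space:

```python
# Schematic pixel-level adaptation losses (PyTorch; module definitions
# and loss weighting are assumptions, not the paper's exact recipe).
import torch
import torch.nn.functional as F

def adaptation_losses(G, D, T, x_src, y_src, x_tgt, alpha=1.0):
    x_fake = G(x_src)                         # source image restyled as target
    d_real, d_fake = D(x_tgt), D(x_fake)
    # Discriminator: separate real target images from adapted source images
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake.detach(),
                                                   torch.zeros_like(d_fake)))
    # Generator: fool the discriminator while keeping the task net accurate
    loss_gan = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_task = F.cross_entropy(T(x_fake), y_src)  # labels transfer from source
    return loss_d, loss_gan + alpha * loss_task
```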


International Conference on Computer Vision | 2015

Im2Calories: Towards an Automated Mobile Vision Food Diary

Austin Myers; Nick Johnston; Vivek Rathod; Anoop Korattikara; Alexander N. Gorban; Nathan Silberman; Sergio Guadarrama; George Papandreou; Jonathan Huang; Kevin P. Murphy

We present a system that can recognize the contents of your meal from a single image, and then predict its nutritional contents, such as calories. The simplest version assumes that the user is eating at a restaurant for which we know the menu. In this case, we can collect images offline to train a multi-label classifier. At run time, we apply the classifier (running on your phone) to predict which foods are present in your meal, and we look up the corresponding nutritional facts. We apply this method to a new dataset of images from 23 different restaurants, using a CNN-based classifier, significantly outperforming previous work. The more challenging setting works outside of restaurants. In this case, we need to estimate the size of the foods, as well as their labels. This requires solving segmentation and depth and volume estimation from a single image. We present CNN-based approaches to these problems, with promising preliminary results.
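
A toy sketch of the restaurant setting described above, reduced to its two steps (multi-label recognition, then a nutrition-table lookup); all menu items, calorie values, and the threshold are made up for illustration:

```python
# Toy sketch: threshold the classifier's per-item probabilities against a
# known menu, then total the calories. Values are illustrative only.
MENU_CALORIES = {"cheeseburger": 550, "fries": 365, "cola": 140}

def estimate_calories(food_probs, threshold=0.5):
    """food_probs: dict of menu item -> predicted probability of presence."""
    present = [item for item, p in food_probs.items() if p >= threshold]
    return present, sum(MENU_CALORIES[item] for item in present)

print(estimate_calories({"cheeseburger": 0.92, "fries": 0.71, "cola": 0.23}))
# -> (['cheeseburger', 'fries'], 915)
```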


European Conference on Computer Vision | 2014

Instance Segmentation of Indoor Scenes Using a Coverage Loss

Nathan Silberman; David Sontag; Rob Fergus

A major limitation of existing models for semantic segmentation is the inability to identify individual instances of the same class: when labeling pixels with only semantic classes, a set of pixels with the same label could represent a single object or ten. In this work, we introduce a model to perform both semantic and instance segmentation simultaneously. We introduce a new higher-order loss function that directly minimizes the coverage metric and evaluate a variety of region features, including those from a convolutional network. We apply our model to the NYU Depth V2 dataset, obtaining state-of-the-art results.
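
For reference, the coverage metric the loss targets can be computed in a common size-weighted form: each ground-truth region contributes its best overlap (IoU) with any predicted region, weighted by the region's size. This sketch evaluates the metric; it is not the paper's higher-order loss:

```python
# Weighted coverage between two instance maps (one common definition).
import numpy as np

def coverage(gt, pred):
    """gt, pred: (H, W) integer instance-id maps."""
    score = 0.0
    for g in np.unique(gt):
        g_mask = gt == g
        best_iou = max(
            np.logical_and(g_mask, pred == p).sum()
            / np.logical_or(g_mask, pred == p).sum()
            for p in np.unique(pred)
        )
        score += (g_mask.sum() / gt.size) * best_iou
    return score
```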


European Conference on Computer Vision | 2014

A Contour Completion Model for Augmenting Surface Reconstructions

Nathan Silberman; Lior Shapira; Ran Gal; Pushmeet Kohli

The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete, either because only part of the space was visible during data capture or because surfaces were occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application where objects interact with the completed reconstructions inferred by our method.
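
As a toy illustration of the completion idea only (the paper's Contour Completion Random Fields are a probabilistic model, not a line fit): extrapolate the visible portion of a straight, partially occluded boundary through the hidden span:

```python
# Toy contour extrapolation: fit the visible boundary samples and extend
# the fit through the occluded region. Purely illustrative.
import numpy as np

visible_x = np.array([0.0, 1.0, 2.0, 3.0])       # observed boundary samples
visible_y = np.array([0.10, 1.05, 1.95, 3.02])
slope, intercept = np.polyfit(visible_x, visible_y, 1)

occluded_x = np.array([4.0, 5.0, 6.0])           # hidden behind an object
completed_y = slope * occluded_x + intercept     # hypothesized continuation
print(completed_y.round(2))
```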


Neural Information Processing Systems | 2009

Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models

Ryan T. McDonald; Mehryar Mohri; Nathan Silberman; Dan Walker; Gideon S. Mann


Neural Information Processing Systems | 2016

Domain Separation Networks

Konstantinos Bousmalis; George Trigeorgis; Nathan Silberman; Dilip Krishnan; Dumitru Erhan


National Conference on Artificial Intelligence | 2010

Case for Automated Detection of Diabetic Retinopathy

Nathan Silberman; Kristy Ahrlich; Rob Fergus; Lakshminarayanan Subramanian


British Machine Vision Conference | 2017

The Devil is in the Decoder

Zbigniew Wojna; Jasper R. R. Uijlings; Sergio Guadarrama; Nathan Silberman; Liang-chieh Chen; Alireza Fathi; Vittorio Ferrari

Collaboration


Dive into Nathan Silberman's collaborations.

Top Co-Authors

Dan Walker

Brigham Young University
