Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Christopher Bongsoo Choy is active.

Publication


Featured research published by Christopher Bongsoo Choy.


european conference on computer vision | 2016

3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction

Christopher Bongsoo Choy; Danfei Xu; JunYoung Gwak; Kevin Chen; Silvio Savarese

Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).
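For intuition, here is a minimal PyTorch sketch of the recurrent fusion idea: each view is encoded to a feature vector, a recurrent unit accumulates evidence across views into a persistent hidden state reshaped as a coarse 3D grid, and a 3D decoder emits occupancy probabilities. The layer sizes and the GRU stand-in are illustrative assumptions, not the paper's exact architecture (3D-R2N2 uses a 3D convolutional LSTM).

```python
import torch
import torch.nn as nn

class RecurrentReconstructionSketch(nn.Module):
    """Illustrative sketch of the 3D-R2N2 idea; all sizes are hypothetical."""

    def __init__(self, feat_dim=256, hid_ch=8, grid=4):
        super().__init__()
        self.hid_ch, self.grid = hid_ch, grid
        self.encoder = nn.Sequential(              # image -> feature vector
            nn.Conv2d(3, 32, 7, 2, 3), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # A GRU cell stands in for the paper's 3D convolutional LSTM grid.
        self.rnn = nn.GRUCell(feat_dim, hid_ch * grid ** 3)
        self.decoder = nn.Sequential(              # 4^3 hidden grid -> 32^3 occupancy
            nn.ConvTranspose3d(hid_ch, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(16, 8, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, views):                      # views: (T, B, 3, H, W), any T >= 1
        T, B = views.shape[:2]
        h = views.new_zeros(B, self.hid_ch * self.grid ** 3)
        for t in range(T):                         # fuse views in arbitrary order
            h = self.rnn(self.encoder(views[t]), h)
        vox = h.view(B, self.hid_ch, self.grid, self.grid, self.grid)
        return self.decoder(vox)                   # (B, 1, 32, 32, 32) occupancy probs

net = RecurrentReconstructionSketch()
occ = net(torch.randn(3, 2, 3, 127, 127))          # three views of two objects
print(occ.shape)                                   # torch.Size([2, 1, 32, 32, 32])
```

Because the recurrent state persists across steps, the same network handles a single view or many, which is what lets the method sidestep SfM/SLAM-style multi-view matching.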


computer vision and pattern recognition | 2017

Scene Graph Generation by Iterative Message Passing

Danfei Xu; Yuke Zhu; Christopher Bongsoo Choy; Li Fei-Fei

Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such a structured scene representation from an input image. Our key insight is that the graph generation problem can be formulated as message passing between the primal node graph and its dual edge graph. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods on the Visual Genome dataset, as well as on support relation inference on the NYU Depth v2 dataset.
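The core computation can be pictured as alternating updates between node states and edge states. Below is a toy sketch of such primal-dual message passing, assuming plain mean pooling in place of the paper's learned message pooling; feature sizes and the step count are hypothetical.

```python
import torch
import torch.nn as nn

class PrimalDualMessagePassing(nn.Module):
    """Toy node/edge message-passing loop; the final node and edge states
    would feed object and predicate classifiers to emit a scene graph."""

    def __init__(self, dim=64, steps=3):
        super().__init__()
        self.steps = steps
        self.node_gru = nn.GRUCell(dim, dim)       # update nodes from edge messages
        self.edge_gru = nn.GRUCell(2 * dim, dim)   # update edges from their endpoints

    def forward(self, node_feats, edge_feats, edges):
        # node_feats: (N, dim); edge_feats: (E, dim); edges: (E, 2) subject/object ids
        h_n, h_e = node_feats, edge_feats
        src, dst = edges[:, 0], edges[:, 1]
        for _ in range(self.steps):
            # dual -> primal: mean of messages from incident edges into each node
            msg = torch.zeros_like(h_n).index_add_(0, src, h_e).index_add_(0, dst, h_e)
            deg = torch.zeros(h_n.size(0), 1).index_add_(
                0, torch.cat([src, dst]), torch.ones(2 * len(src), 1)).clamp(min=1)
            h_n = self.node_gru(msg / deg, h_n)
            # primal -> dual: each edge reads its subject and object node states
            h_e = self.edge_gru(torch.cat([h_n[src], h_n[dst]], dim=1), h_e)
        return h_n, h_e

nodes, rels = torch.randn(5, 64), torch.randn(8, 64)
h_n, h_e = PrimalDualMessagePassing()(nodes, rels, torch.randint(0, 5, (8, 2)))
```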


european conference on computer vision | 2016

ObjectNet3D: A Large Scale Database for 3D Object Recognition

Yu Xiang; Wonhui Kim; Wei Chen; Jingwei Ji; Christopher Bongsoo Choy; Hao Su; Roozbeh Mottaghi; Leonidas J. Guibas; Silvio Savarese

We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.
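As a rough illustration of what one aligned annotation carries per 2D object (a category, a box, a viewpoint, and a pointer to the closest 3D shape), here is a hypothetical record type; the field names are assumptions for exposition, not the database's actual annotation schema.

```python
from dataclasses import dataclass

@dataclass
class AlignedObject:
    """Hypothetical record for one annotated 2D object in an ObjectNet3D-style
    database; fields are illustrative, not the actual annotation format."""
    category: str        # one of the 100 categories
    bbox: tuple          # (x1, y1, x2, y2) in image pixels
    azimuth: float       # viewpoint of the aligned 3D shape, degrees
    elevation: float
    theta: float         # in-plane rotation, degrees
    shape_id: int        # index of the closest 3D shape for this object

obj = AlignedObject("chair", (34, 50, 210, 300),
                    azimuth=45.0, elevation=10.0, theta=0.0, shape_id=7)
```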


computer vision and pattern recognition | 2017

DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents

Namhoon Lee; Wongun Choi; Paul Vernaza; Christopher Bongsoo Choy; Philip H. S. Torr; Manmohan Chandraker

We introduce a Deep Stochastic IOC RNN Encoder-decoder framework, DESIRE, for the task of future prediction of multiple interacting agents in dynamic scenes. DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of future prediction (i.e., given the same context, the future may vary), 2) foreseeing potential future outcomes and making a strategic prediction based on them, and 3) reasoning not only from the past motion history, but also from the scene context and the interactions among the agents. DESIRE achieves these in a single end-to-end trainable neural network model, while being computationally efficient. The model first obtains a diverse set of hypothetical future prediction samples using a conditional variational auto-encoder, which are then ranked and refined by an RNN scoring-regression module. Samples are scored by accounting for accumulated future rewards, which enables better long-term strategic decisions, similar to IOC frameworks. An RNN scene-context fusion module jointly captures past motion histories, the semantic scene context, and the interactions among multiple agents. A feedback mechanism iterates over the ranking and refinement to further boost prediction accuracy. We evaluate our model on two publicly available datasets: KITTI and the Stanford Drone Dataset. Our experiments show that the proposed model significantly improves prediction accuracy compared to other baseline methods.
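A toy sketch of the sample, rank, and refine pipeline is below. The latent size, trajectory horizon, linear decoder and scorer, and single refinement pass (the paper iterates a feedback loop and fuses scene context) are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class SampleRankRefineSketch(nn.Module):
    """Toy DESIRE-style pipeline: a CVAE-like decoder proposes K futures
    conditioned on the past, a scorer ranks them, a regressor refines them."""

    def __init__(self, past_dim=32, z_dim=16, horizon=12, K=20):
        super().__init__()
        self.K, self.z_dim, self.horizon = K, z_dim, horizon
        self.past_enc = nn.GRU(2, past_dim, batch_first=True)
        self.decoder = nn.Linear(past_dim + z_dim, horizon * 2)
        self.scorer = nn.Linear(past_dim + horizon * 2, 1)   # stand-in for IOC reward
        self.refiner = nn.Linear(past_dim + horizon * 2, horizon * 2)

    def forward(self, past):                       # past: (B, T_past, 2) xy positions
        _, h = self.past_enc(past)
        h = h.squeeze(0)                           # (B, past_dim)
        B = h.size(0)
        z = torch.randn(B, self.K, self.z_dim)     # latent samples (prior at test time)
        ctx = h.unsqueeze(1).expand(B, self.K, h.size(1))
        samples = self.decoder(torch.cat([ctx, z], dim=-1))   # (B, K, horizon*2)
        pair = torch.cat([ctx, samples], dim=-1)
        scores = self.scorer(pair).squeeze(-1)     # rank the hypotheses
        refined = samples + self.refiner(pair)     # regress per-sample corrections
        best = scores.argmax(dim=1)
        return refined[torch.arange(B), best].view(B, self.horizon, 2)

pred = SampleRankRefineSketch()(torch.randn(4, 8, 2))   # (4, 12, 2) predicted xy
```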


computer vision and pattern recognition | 2015

Enriching object detection with 2D-3D registration and continuous viewpoint estimation

Christopher Bongsoo Choy; Michael Stark; Sam Corbett-Davies; Silvio Savarese

A large body of recent work on object detection has focused on exploiting 3D CAD model databases to improve detection performance. Many of these approaches work by aligning exact 3D models to images using templates generated from renderings of the 3D models at a set of discrete viewpoints. However, the training procedures for these approaches are computationally expensive and require gigabytes of memory and storage, while the viewpoint discretization hampers pose estimation performance. We propose an efficient method for synthesizing templates from 3D models that runs on the fly; that is, it quickly produces detectors for an arbitrary viewpoint of a 3D model without expensive dataset-dependent training or template storage. Given a 3D model and an arbitrary continuous detection viewpoint, our method synthesizes a discriminative template by extracting features from a rendered view of the object and decorrelating spatial dependencies among the features. Our decorrelation procedure relies on a gradient-based algorithm that is more numerically stable than standard decomposition-based procedures, and we efficiently search for candidate detections by computing FFT-based template convolutions. Due to the speed of our template synthesis procedure, we are able to perform joint optimization of scale, translation, continuous rotation, and focal length using the Metropolis-Hastings algorithm. We provide an efficient GPU implementation of our algorithm, and we validate its performance on the 3D Object Classes and PASCAL3D+ datasets.
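The FFT matching step on its own is easy to illustrate. The NumPy sketch below scores every placement of a template on a feature map via Fourier-domain cross-correlation; the template whitening (decorrelation) step that the paper performs first is omitted.

```python
import numpy as np

def fft_match(feature_map, template):
    """Cross-correlate `template` with `feature_map` in the Fourier domain.
    Cross-correlation equals convolution with the flipped template."""
    H, W = feature_map.shape
    h, w = template.shape
    F = np.fft.rfft2(feature_map)
    T = np.fft.rfft2(template[::-1, ::-1], s=(H, W))   # zero-pad flipped template
    scores = np.fft.irfft2(F * T, s=(H, W))
    return scores[h - 1:, w - 1:]                      # keep only valid placements

fmap = np.random.randn(64, 64)
tmpl = fmap[10:18, 20:28].copy()                       # plant the template at (10, 20)
scores = fft_match(fmap, tmpl)
print(np.unravel_index(scores.argmax(), scores.shape)) # (10, 20) w.h.p.
```

The payoff is that one forward FFT of the feature map can be reused across many synthesized templates, which is what makes the on-the-fly viewpoint search affordable.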


neural information processing systems | 2016

Universal Correspondence Network

Christopher Bongsoo Choy; JunYoung Gwak; Silvio Savarese; Manmohan Chandraker


international conference on 3d vision | 2017

Weakly Supervised 3D Reconstruction with Adversarial Constraint

JunYoung Gwak; Christopher Bongsoo Choy; Manmohan Chandraker; Animesh Garg; Silvio Savarese


international conference on 3d vision | 2017

SEGCloud: Semantic Segmentation of 3D Point Clouds

Lyne P. Tchapmi; Christopher Bongsoo Choy; Iro Armeni; JunYoung Gwak; Silvio Savarese


arXiv | 2017

Weakly Supervised Generative Adversarial Networks for 3D Reconstruction

JunYoung Gwak; Christopher Bongsoo Choy; Animesh Garg; Manmohan Chandraker; Silvio Savarese


workshop on applications of computer vision | 2018

DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image

Andrey Kurenkov; Jingwei Ji; Animesh Garg; Viraj Mehta; JunYoung Gwak; Christopher Bongsoo Choy; Silvio Savarese

Collaboration


Dive into Christopher Bongsoo Choy's collaboration.

Top Co-Authors

Animesh Garg

University of California


Hao Su

Stanford University
