Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Charles Ruizhongtai Qi is active.

Publication


Featured research published by Charles Ruizhongtai Qi.


International Conference on Computer Vision (ICCV) | 2015

Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views

Hao Su; Charles Ruizhongtai Qi; Yangyan Li; Leonidas J. Guibas

Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential to generate a large number of images of high variation, which can be well exploited by deep CNNs with high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on the PASCAL 3D+ benchmark.
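As a toy illustration of the kind of pipeline the abstract describes, the sketch below samples randomized rendering parameters (so synthesized views vary widely) and discretizes a continuous azimuth into viewpoint classes, turning viewpoint estimation into classification. Function names and parameter ranges are hypothetical, not taken from the paper.

```python
import random

def sample_render_params(num_views, seed=0):
    """Sample random camera viewpoints (azimuth, elevation, tilt) plus a
    light count, mimicking the randomized rendering parameters a synthesis
    pipeline varies to produce diverse training images from one 3D model.
    (The ranges here are illustrative, not the paper's.)"""
    rng = random.Random(seed)
    params = []
    for _ in range(num_views):
        params.append({
            "azimuth": rng.uniform(0.0, 360.0),
            "elevation": rng.uniform(-10.0, 40.0),
            "tilt": rng.uniform(-5.0, 5.0),
            "num_lights": rng.randint(1, 4),
        })
    return params

def azimuth_to_class(azimuth_deg, num_bins=360):
    """Discretize a continuous azimuth into one of `num_bins` viewpoint
    classes, so viewpoint estimation can be trained as classification."""
    return int(azimuth_deg % 360.0 // (360.0 / num_bins))
```

Because rendering is cheap, the pipeline can generate essentially unlimited labeled views per model, which is what makes the classification framing viable despite scarce human annotations.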


Computer Vision and Pattern Recognition (CVPR) | 2016

Volumetric and Multi-view CNNs for Object Classification on 3D Data

Charles Ruizhongtai Qi; Hao Su; Matthias Nießner; Angela Dai; Mengyuan Yan; Leonidas J. Guibas

3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem. Recently, we have witnessed two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multi-resolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data.
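A volumetric CNN consumes a shape as an occupancy grid rather than a set of rendered views. A minimal, illustrative voxelization step (the function name and the assumption that points lie in the unit cube are mine, not the paper's):

```python
def voxelize(points, resolution=30):
    """Convert (x, y, z) points in the unit cube [0, 1)^3 into a binary
    occupancy grid: the volumetric input representation a 3D CNN consumes.
    Points on the upper boundary are clamped into the last voxel."""
    grid = [[[0] * resolution for _ in range(resolution)]
            for _ in range(resolution)]
    for x, y, z in points:
        i = min(int(x * resolution), resolution - 1)
        j = min(int(y * resolution), resolution - 1)
        k = min(int(z * resolution), resolution - 1)
        grid[i][j][k] = 1  # mark the voxel containing this point as occupied
    return grid
```

The resolution of such a grid grows cubically, which is one reason volumetric CNNs historically lagged multi-view CNNs operating on high-resolution 2D renderings.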


Computer Vision and Pattern Recognition (CVPR) | 2017

Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis

Angela Dai; Charles Ruizhongtai Qi; Matthias Nießner

We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution – but complete – output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas at high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion methods, the main contribution of our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.
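Two steps in this pipeline lend themselves to a short sketch: packing a partial scan into a two-channel volume that marks which voxels were actually observed (so the network can distinguish known from unknown space), and retrieving the database shape with the largest volumetric overlap with the coarse completion. Both functions are illustrative stand-ins; the names, the flattened-grid representation, and IoU as the correlation measure are my simplifications, not the paper's exact formulation.

```python
def encode_partial_scan(dist_values, known_mask):
    """Pack a partial scan into a two-channel encoding: one channel of
    (truncated) distance values and one flagging which voxels were observed,
    so a completion network can tell known space apart from unknown space."""
    assert len(dist_values) == len(known_mask)
    return [(d, 1.0 if k else 0.0) for d, k in zip(dist_values, known_mask)]

def iou(a, b):
    """Volumetric intersection-over-union of two flattened occupancy grids."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 1.0

def retrieve_nearest(coarse_pred, database):
    """Rank database shapes by overlap with the coarse completion, mimicking
    the test-time retrieval that supplies geometry for patch-based synthesis.
    `database` is a list of (name, occupancy_grid) pairs."""
    return max(database, key=lambda item: iou(coarse_pred, item[1]))[0]
```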


IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE) | 2015

Teaching digital signal processing with Stanford's Lab-in-a-Box

Fernando A. Mujica; William J. Esposito; Alex Gonzalez; Charles Ruizhongtai Qi; Chris Vassos; Maisy Wieman; Reggie Wilcox; Gregory T. A. Kovacs; Ronald W. Schafer

This paper describes our efforts to include a hands-on component in the teaching of core concepts of digital signal processing. The basis of our approach was the low-cost and open-source “Stanford Lab in a Box.” This system, with its easy-to-use, Arduino-like programming interface, allowed students to see how fundamental DSP concepts such as digital filters, the FFT, and multi-rate processing can be implemented in real time on a fixed-point processor. The paper describes how the Lab in a Box was used to provide a new dimension to the teaching of DSP.
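To make the fixed-point aspect concrete, here is a small Q15 direct-form FIR filter of the kind such a lab exercise might target. This is a generic illustration of Q15 arithmetic, not code from the Lab-in-a-Box itself.

```python
Q15 = 1 << 15  # Q15 fixed-point: 16-bit values scaled by 2^15

def to_q15(x):
    """Quantize a float in [-1, 1) to a Q15 integer, saturating at the
    16-bit signed range."""
    return max(-Q15, min(Q15 - 1, int(round(x * Q15))))

def fir_q15(samples_q15, coeffs_q15):
    """Direct-form FIR filter in Q15 arithmetic, the kind of real-time
    kernel students implement on a fixed-point DSP: multiply-accumulate
    into a wide (32-bit) accumulator, then shift back down to Q15."""
    out = []
    hist = [0] * len(coeffs_q15)  # delay line of past input samples
    for s in samples_q15:
        hist = [s] + hist[:-1]    # shift the new sample into the delay line
        acc = sum(c * h for c, h in zip(coeffs_q15, hist))  # wide accumulator
        out.append(acc >> 15)     # rescale the Q30 products back to Q15
    return out
```

Feeding an impulse through a two-tap averager (both coefficients 0.5 in Q15) reproduces the coefficients at the output, up to one-LSB rounding from the final shift.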


Neural Information Processing Systems (NIPS) | 2017

PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

Charles Ruizhongtai Qi; Li Yi; Hao Su; Leonidas J. Guibas


International Conference on Computer Graphics and Interactive Techniques | 2015

Joint Embeddings of Shapes and Images via CNN Image Purification

Yangyan Li; Hao Su; Charles Ruizhongtai Qi; Noa Fish; Daniel Cohen-Or; Leonidas J. Guibas


Neural Information Processing Systems (NIPS) | 2016

FPNN: Field Probing Neural Networks for 3D Data

Yangyan Li; Sören Pirk; Hao Su; Charles Ruizhongtai Qi; Leonidas J. Guibas


Computer Vision and Pattern Recognition (CVPR) | 2018

Frustum PointNets for 3D Object Detection From RGB-D Data

Charles Ruizhongtai Qi; Wei Liu; Chenxia Wu; Hao Su; Leonidas J. Guibas


International Conference on Machine Learning (ICML) | 2018

Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks

Zhihao Jia; Sina Lin; Charles Ruizhongtai Qi; Alex Aiken


Collaboration


Dive into Charles Ruizhongtai Qi's collaborations.

Top Co-Authors

Hao Su

Stanford University
