Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haibin Huang is active.

Publication


Featured research published by Haibin Huang.


Symposium on Geometry Processing | 2015

Analysis and synthesis of 3D shape families via deep-learned generative models of surfaces

Haibin Huang; Evangelos Kalogerakis; Benjamin M. Marlin

We present a method for joint analysis and synthesis of geometrically diverse 3D shape families. Our method first learns part-based templates such that an optimal set of fuzzy point and part correspondences is computed between the shapes of an input collection based on a probabilistic deformation model. In contrast to previous template-based approaches, the geometry and deformation parameters of our part-based templates are learned from scratch. Based on the estimated shape correspondence, our method also learns a probabilistic generative model that hierarchically captures statistical relationships of corresponding surface point positions and parts as well as their existence in the input shapes. A deep learning procedure is used to capture these hierarchical relationships. The resulting generative model is used to produce control point arrangements that drive shape synthesis by combining and deforming parts from the input collection. The generative model also yields compact shape descriptors that are used to perform fine-grained classification. Finally, it can also be coupled with the probabilistic deformation model to further improve shape correspondence. We provide qualitative and quantitative evaluations of our method for shape correspondence, segmentation, fine-grained classification and synthesis. Our experiments demonstrate superior correspondence and segmentation results compared to previous state-of-the-art approaches.
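
The pipeline below is a minimal, hypothetical sketch of the synthesis stage only: a latent code is decoded into per-part existence probabilities and control-point positions that could drive part-based deformation. The architecture, sizes, and names are illustrative assumptions, not the paper's learned hierarchical probabilistic model.

```python
# Minimal sketch (not the paper's exact model): a latent code is decoded into
# per-part existence probabilities and control-point positions, which could then
# drive part-based synthesis. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class PartAwareShapeDecoder(nn.Module):
    def __init__(self, latent_dim=64, num_parts=8, points_per_part=32):
        super().__init__()
        self.num_parts = num_parts
        self.points_per_part = points_per_part
        hidden = 256
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One head predicts whether each part exists in the sampled shape.
        self.part_existence = nn.Linear(hidden, num_parts)
        # Another head predicts 3D control points for every part.
        self.control_points = nn.Linear(hidden, num_parts * points_per_part * 3)

    def forward(self, z):
        h = self.backbone(z)
        existence = torch.sigmoid(self.part_existence(h))          # (B, P)
        points = self.control_points(h).view(-1, self.num_parts,
                                             self.points_per_part, 3)
        return existence, points

# Sampling a shape layout: draw a latent code and decode it.
decoder = PartAwareShapeDecoder()
z = torch.randn(1, 64)
existence, control_points = decoder(z)
print(existence.shape, control_points.shape)  # (1, 8) and (1, 8, 32, 3)
```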


Computer Graphics Forum | 2014

Analogy-driven 3D style transfer

Chongyang Ma; Haibin Huang; Alla Sheffer; Evangelos Kalogerakis; Rui Wang

Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target's structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research, we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source-to-target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user-provided structural template, coherently propagating scan style across the completed surfaces.
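
As a rough illustration of assembling an analogy from a small set of simple transformations, the toy sketch below exhaustively picks one candidate transform per part so that the fit to the target and the consistency between neighboring parts are jointly minimized. The cost terms, candidates, and data are assumptions for illustration, not the paper's formulation.

```python
# Minimal sketch of the selection idea (not the paper's algorithm): given a few
# candidate simple transformations per part, pick the combination that best maps
# source parts onto their target counterparts while keeping neighboring parts
# consistent. Parts, candidates, and the costs below are illustrative assumptions.
import itertools
import numpy as np

def fit_cost(transform, src_pts, tgt_pts):
    """How well a 3x3 linear transform maps source points onto target points."""
    return np.mean(np.linalg.norm(src_pts @ transform.T - tgt_pts, axis=1))

def smoothness_cost(t_a, t_b):
    """Penalize neighboring parts that receive very different transforms."""
    return np.linalg.norm(t_a - t_b)

def assemble_analogy(src_parts, tgt_parts, candidates, neighbors, lam=0.1):
    """Exhaustively pick one candidate transform per part (fine for a few parts)."""
    best, best_cost = None, np.inf
    for choice in itertools.product(range(len(candidates)), repeat=len(src_parts)):
        cost = sum(fit_cost(candidates[c], src_parts[i], tgt_parts[i])
                   for i, c in enumerate(choice))
        cost += lam * sum(smoothness_cost(candidates[choice[i]], candidates[choice[j]])
                          for i, j in neighbors)
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost

# Toy example: two parts, two candidate transforms (identity vs. 1.5x scaling).
rng = np.random.default_rng(0)
src = [rng.normal(size=(20, 3)) for _ in range(2)]
tgt = [p * 1.5 for p in src]                      # target is a scaled source
candidates = [np.eye(3), 1.5 * np.eye(3)]
choice, cost = assemble_analogy(src, tgt, candidates, neighbors=[(0, 1)])
print(choice, round(cost, 3))                     # expect the scaling transform for both parts
```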


International Conference on Computer Vision | 2017

High-Resolution Shape Completion Using Deep Neural Networks for Global Structure and Local Geometry Inference

Xiaoguang Han; Zhen Li; Haibin Huang; Evangelos Kalogerakis; Yizhou Yu

We propose a data-driven method for recovering missing parts of 3D shapes. Our method is based on a new deep learning architecture consisting of two sub-networks: a global structure inference network and a local geometry refinement network. The global structure inference network incorporates a long short-term memorized context fusion module (LSTM-CF) that infers the global structure of the shape based on multi-view depth information provided as part of the input. It also includes a 3D fully convolutional (3DFCN) module that further enriches the global structure representation according to volumetric information in the input. Under the guidance of the global structure network, the local geometry refinement network takes as input local 3D patches around missing regions, and progressively produces a high-resolution, complete surface through a volumetric encoder-decoder architecture. Our method jointly trains the global structure inference and local geometry refinement networks in an end-to-end manner. We perform qualitative and quantitative evaluations on six object categories, demonstrating that our method outperforms existing state-of-the-art work on shape completion.
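
A minimal sketch of the two-stage idea is given below, under the assumption of simple occupancy volumes: a coarse global network completes the full grid, and a local volumetric encoder-decoder refines patches conditioned on the corresponding global output. Layer sizes are illustrative, and the paper's LSTM-CF multi-view fusion module is omitted.

```python
# A minimal sketch of the two-stage idea (coarse global structure, then local
# volumetric refinement); layer sizes are illustrative assumptions and the
# paper's LSTM-CF multi-view fusion module is omitted for brevity.
import torch
import torch.nn as nn

class GlobalStructureNet(nn.Module):
    """Predicts a coarse occupancy volume from a partial input volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class LocalRefinementNet(nn.Module):
    """Refines a local patch, conditioned on the corresponding global patch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 8
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, local_patch, global_patch):
        x = torch.cat([local_patch, global_patch], dim=1)  # channel-wise conditioning
        return torch.sigmoid(self.decoder(self.encoder(x)))

partial = torch.rand(1, 1, 32, 32, 32)                     # partial input volume
coarse = GlobalStructureNet()(partial)                      # coarse completion
patch, guide = torch.rand(1, 1, 16, 16, 16), coarse[..., :16, :16, :16]
refined = LocalRefinementNet()(patch, guide)
print(coarse.shape, refined.shape)
```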


Computer Vision and Pattern Recognition | 2017

Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks

Amir Arsalan Soltani; Haibin Huang; Jiajun Wu; Tejas D. Kulkarni; Joshua B. Tenenbaum

We study the problem of learning generative models of 3D shapes. Voxels or 3D parts have been widely used as the underlying representations to build complex 3D shapes; however, voxel-based representations suffer from high memory requirements, and part-based models require a large collection of cached or richly parametrized parts. We take an alternative approach: learning a generative model over multi-view depth maps or their corresponding silhouettes, and using a deterministic rendering function to produce 3D shapes from these images. A multi-view representation of shapes enables generation of 3D models with fine details, as 2D depth maps and silhouettes can be modeled at a much higher resolution than 3D voxels. Moreover, our approach naturally brings the ability to recover the underlying 3D representation from depth maps of one or a few viewpoints. Experiments show that our framework can generate 3D shapes with variations and details. We also demonstrate that our model has out-of-sample generalization power for real-world tasks with occluded objects.
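
One common way to realize a deterministic "rendering" from multi-view depth maps back to 3D, sketched here as an assumption about the general recipe rather than the paper's exact procedure, is to back-project each depth map through its camera and take the union of the resulting points.

```python
# Minimal sketch of the deterministic fusion step (an assumption about the general
# recipe, not the paper's exact renderer): back-project each depth map through its
# camera to get 3D points, then take the union as the reconstructed shape.
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift a depth map (H, W) to world-space 3D points using intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                                   # 0 marks background pixels
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    return (pts_cam @ cam_to_world.T)[:, :3]

def fuse_views(depth_maps, intrinsics, poses):
    """Union of the back-projected points from all views."""
    return np.concatenate([backproject(d, k, p)
                           for d, k, p in zip(depth_maps, intrinsics, poses)], axis=0)

# Toy example: two 64x64 depth maps with a simple pinhole camera and identity poses.
K = np.array([[64.0, 0, 32], [0, 64.0, 32], [0, 0, 1]])
depths = [np.full((64, 64), 2.0), np.full((64, 64), 3.0)]
poses = [np.eye(4), np.eye(4)]
points = fuse_views(depths, [K, K], poses)
print(points.shape)                                      # (2 * 64 * 64, 3)
```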


IEEE Transactions on Visualization and Computer Graphics | 2017

Shape Synthesis from Sketches via Procedural Models and Convolutional Networks

Haibin Huang; Evangelos Kalogerakis; Ersin Yumer; Radomir Mech

Procedural modeling techniques can produce high quality visual content through complex rule sets. However, controlling the outputs of these techniques for design purposes is often notoriously difficult for users due to the large number of parameters involved in these rule sets and also their non-linear relationship to the resulting content. To circumvent this problem, we present a sketch-based approach to procedural modeling. Given an approximate and abstract hand-drawn 2D sketch provided by a user, our algorithm automatically computes a set of procedural model parameters, which in turn yield multiple, detailed output shapes that resemble the user's input sketch. The user can then select an output shape, or further modify the sketch to explore alternative ones. At the heart of our approach is a deep Convolutional Neural Network (CNN) that is trained to map sketches to procedural model parameters. The network is trained on large amounts of automatically generated synthetic line drawings. By using an intuitive medium, i.e., freehand sketching, as input, users are freed from manually adjusting procedural model parameters, yet they are still able to create high quality content. We demonstrate the accuracy and efficacy of our method in a variety of procedural modeling scenarios including design of man-made and organic shapes.
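
A minimal sketch of the sketch-to-parameter mapping follows: a small CNN regresses a fixed-length parameter vector from a line drawing and is supervised with the parameters that generated the synthetic drawing. The architecture and the 8-parameter output are illustrative assumptions, not the paper's network.

```python
# Minimal sketch of the mapping the paper describes (sketch image in, procedural
# parameters out); the architecture and the 8-parameter output are illustrative
# assumptions, not the paper's network.
import torch
import torch.nn as nn

class SketchToParams(nn.Module):
    def __init__(self, num_params=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(128, num_params)

    def forward(self, sketch):
        h = self.features(sketch).flatten(1)
        return self.regressor(h)

model = SketchToParams()
# Training would pair synthetic line drawings with the parameters that produced them;
# here we just check the shapes with random stand-ins.
sketches = torch.rand(4, 1, 128, 128)        # batch of line-drawing images
params_gt = torch.rand(4, 8)                 # procedural parameters used to render them
loss = nn.functional.mse_loss(model(sketches), params_gt)
loss.backward()
print(loss.item())
```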


International Conference on Computer Graphics and Interactive Techniques | 2017

Learning Local Shape Descriptors from Part Correspondences with Multiview Convolutional Networks

Haibin Huang; Evangelos Kalogerakis; Siddhartha Chaudhuri; Duygu Ceylan; Vladimir G. Kim; Ersin Yumer

We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. The descriptor is produced by a convolutional network that is trained to embed geometrically and semantically similar points close to one another in descriptor space. The network processes surface neighborhoods around points on a shape that are captured at multiple scales by a succession of progressively zoomed-out views, taken from carefully selected camera positions. We leverage two extremely large sources of data to train our network. First, since our network processes rendered views in the form of 2D images, we repurpose architectures pretrained on massive image datasets. Second, we automatically generate a synthetic dense point correspondence dataset by nonrigid alignment of corresponding shape parts in a large collection of segmented 3D models. As a result of these design choices, our network effectively encodes multiscale local context and fine-grained surface detail. Our network can be trained to produce either category-specific descriptors or more generic descriptors by learning from multiple shape categories. Once trained, at test time, the network extracts local descriptors for shapes without requiring any part segmentation as input. Our method can produce effective local descriptors even for shapes whose category is unknown or different from the ones used while training. We demonstrate through several experiments that our learned local descriptors are more discriminative compared to state-of-the-art alternatives and are effective in a variety of shape analysis applications.
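
The sketch below illustrates the training signal under simple assumptions (not the paper's architecture): each point is represented by a stack of rendered views, a shared CNN embeds the stack with view pooling, and a contrastive loss pulls corresponding points together in descriptor space while pushing non-corresponding points apart.

```python
# Minimal sketch of the training signal (an illustrative assumption, not the paper's
# network): each surface point is represented by a stack of rendered views, a shared
# CNN embeds the stack with view pooling, and a contrastive loss shapes the descriptor space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewEmbedder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, views):                 # views: (B, V, 1, H, W)
        b, v = views.shape[:2]
        feats = self.cnn(views.flatten(0, 1)).view(b, v, -1)
        pooled = feats.max(dim=1).values      # max view pooling across the view stack
        return F.normalize(self.fc(pooled), dim=1)

def contrastive_loss(desc_a, desc_b, same, margin=0.5):
    d = (desc_a - desc_b).norm(dim=1)
    return torch.where(same, d ** 2, F.relu(margin - d) ** 2).mean()

embedder = MultiViewEmbedder()
views_a = torch.rand(8, 4, 1, 64, 64)         # 8 points, 4 zoom levels/views each
views_b = torch.rand(8, 4, 1, 64, 64)
same = torch.tensor([True, False] * 4)        # which pairs are corresponding points
loss = contrastive_loss(embedder(views_a), embedder(views_b), same)
loss.backward()
print(loss.item())
```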


International Conference on Computer Graphics and Interactive Techniques | 2017

Learning to group discrete graphical patterns

Zhaoliang Lun; Changqing Zou; Haibin Huang; Evangelos Kalogerakis; Ping Tan; Marie-Paule Cani; Hao Zhang

We introduce a deep learning approach for grouping discrete patterns common in graphical designs. Our approach is based on a convolutional neural network architecture that learns a grouping measure defined over a pair of pattern elements. Motivated by perceptual grouping principles, the key feature of our network is the encoding of element shape, context, symmetries, and structural arrangements. These element properties are all jointly considered and appropriately weighted in our grouping measure. To better align our measure with human perceptions for grouping, we train our network on a large, human-annotated dataset of pattern groupings consisting of patterns at varying granularity levels, with rich element relations and varieties, and tempered with noise and other data imperfections. Experimental results demonstrate that our deep-learned measure leads to robust grouping results.
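
A minimal sketch of a pairwise grouping measure follows, with an illustrative architecture rather than the paper's: two rasterized elements are encoded by a shared CNN and a small head predicts the probability that they belong to the same group.

```python
# Minimal sketch of a pairwise grouping measure (an illustrative assumption, not the
# paper's architecture): two pattern elements are rasterized, encoded by a shared CNN,
# and a small head predicts the probability that they belong to the same group.
import torch
import torch.nn as nn

class PairwiseGrouping(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, elem_a, elem_b):
        feat = torch.cat([self.encoder(elem_a), self.encoder(elem_b)], dim=1)
        return torch.sigmoid(self.head(feat)).squeeze(1)   # grouping probability

model = PairwiseGrouping()
a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # rasterized elements
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])                # human-annotated groupings
loss = nn.functional.binary_cross_entropy(model(a, b), labels)
loss.backward()
print(loss.item())
```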


arXiv: Computer Vision and Pattern Recognition | 2018

Deep Part Induction from Articulated Object Pairs

Li Yi; Haibin Huang; Difan Liu; Evangelos Kalogerakis; Hao Su; Leonidas J. Guibas

Object functionality is often expressed through part articulation - as when the two rigid parts of a pair of scissors pivot against each other to perform the cutting function. Such articulations are often similar across objects within the same functional category. In this paper we explore how the observation of different articulation states provides evidence for part structure and motion of 3D objects. Our method takes as input a pair of unsegmented shapes representing two different articulation states of two functionally related objects, and induces their common parts along with their underlying rigid motion. This is a challenging setting: we assume no prior shape structure, no prior shape category information, and no consistent shape orientation; the articulation states may belong to objects of different geometry; and inputs may be noisy and partial scans, or point clouds lifted from RGB images. Our method learns a neural network architecture with three modules that respectively propose correspondences, estimate 3D deformation flows, and perform segmentation. To achieve optimal performance, our architecture alternates between correspondence, deformation flow, and segmentation prediction iteratively in an ICP-like fashion. Our results demonstrate that our method significantly outperforms state-of-the-art techniques in the task of discovering articulated parts of objects. In addition, our part induction is object-class agnostic and successfully generalizes to new and unseen objects.
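
The ICP-like alternation can be illustrated with classical stand-ins for the three learned modules; the sketch below alternates nearest-neighbor correspondence, per-point flow estimation, and a crude flow-based segmentation on a toy articulated pair. Everything here is a simplified assumption, not the paper's networks.

```python
# Minimal sketch of the ICP-like alternation (classical stand-ins, not the paper's
# learned modules): alternate between (1) nearest-neighbor correspondences,
# (2) per-point motion "flow" toward the matched points, and (3) a crude segmentation
# that clusters points by their flow vectors.
import numpy as np

def correspond(src, tgt):
    """For each source point, index of its nearest target point."""
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    return d.argmin(axis=1)

def estimate_flow(src, tgt, matches):
    return tgt[matches] - src

def segment_by_flow(flow, iters=10):
    """Tiny 2-means on flow vectors: points that move together form one part."""
    norms = np.linalg.norm(flow, axis=1)
    centers = flow[[norms.argmin(), norms.argmax()]]     # two most dissimilar motions
    for _ in range(iters):
        labels = np.linalg.norm(flow[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.stack([flow[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(2)])
    return labels

def induce_parts(src, tgt, rounds=3):
    labels = np.zeros(len(src), dtype=int)
    for _ in range(rounds):
        matches = correspond(src, tgt)
        flow = estimate_flow(src, tgt, matches)
        labels = segment_by_flow(flow)
        src = src + 0.5 * flow            # move partway and re-estimate, ICP-style
    return labels

# Toy articulated pair: two well-separated parts; one part translates between states.
rng = np.random.default_rng(1)
part_a = rng.normal(size=(100, 3))                                  # stays put
part_b = rng.normal(size=(100, 3)) + np.array([10.0, 0, 0])
base = np.concatenate([part_a, part_b])
moved = np.concatenate([part_a, part_b + np.array([0, 2.0, 0])])    # part B articulates
labels = induce_parts(base, moved)
print(np.bincount(labels))                 # roughly two groups of ~100 points each
```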


International Conference on Computer Graphics and Interactive Techniques | 2012

Point sampling with general noise spectrum

Yahan Zhou; Haibin Huang; Li-Yi Wei; Rui Wang


arXiv: Computer Vision and Pattern Recognition | 2017

Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55.

Li Yi; Lin Shao; Manolis Savva; Haibin Huang; Yang Zhou; Qirui Wang; Benjamin Graham; Martin Engelcke; Roman Klokov; Victor S. Lempitsky; Yuan Gan; Pengyu Wang; Kun Liu; Fenggen Yu; Panpan Shui; Bingyang Hu; Yan Zhang; Yangyan Li; Rui Bu; Mingchao Sun; Wei Wu; Minki Jeong; Jaehoon Choi; Changick Kim; Angom Geetchandra; Narasimha Murthy; Bhargava Ramu; Bharadwaj Manda; M. Ramanathan; Gautam Kumar

Collaboration


Dive into Haibin Huang's collaborations.

Top Co-Authors

Evangelos Kalogerakis
University of Massachusetts Amherst

Li Yi
Stanford University

Siddhartha Chaudhuri
Indian Institute of Technology Bombay

Benjamin M. Marlin
University of Massachusetts Amherst

Chongyang Ma
University of Southern California

Difan Liu
University of Massachusetts Amherst