Publication


Featured research published by Siddhartha Chaudhuri.


International Conference on Computer Graphics and Interactive Techniques | 2012

A probabilistic model for component-based shape synthesis

Evangelos Kalogerakis; Siddhartha Chaudhuri; Daphne Koller; Vladlen Koltun

We present an approach to synthesizing shapes from complex domains, by identifying new plausible combinations of components from existing shapes. Our primary contribution is a new generative model of component-based shape structure. The model represents probabilistic relationships between properties of shape components, and relates them to learned underlying causes of structural variability within the domain. These causes are treated as latent variables, leading to a compact representation that can be effectively learned without supervision from a set of compatibly segmented shapes. We evaluate the model on a number of shape datasets with complex structural variability and demonstrate its application to amplification of shape databases and to interactive shape synthesis.
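
As a rough illustration of the latent-cause idea, the sketch below uses a single discrete latent "style" variable that explains correlated component choices and properties. The categories, probabilities, and distributions are invented for illustration and are not the paper's learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable generative model over chair components.
# A discrete latent "style" explains correlated structure across parts.
styles = ["office", "dining"]
style_prior = np.array([0.4, 0.6])

# P(component exists | style) for three component slots.
exist_prob = {
    "office": {"armrests": 0.9, "wheels": 0.8, "tall_back": 0.7},
    "dining": {"armrests": 0.2, "wheels": 0.05, "tall_back": 0.4},
}
# Style-conditioned Gaussian over a continuous property (e.g. leg length, cm).
leg_length = {"office": (45.0, 3.0), "dining": (42.0, 1.5)}

def sample_shape():
    """Sample a component layout: latent style first, then parts given style."""
    style = rng.choice(styles, p=style_prior)
    parts = {name: bool(rng.random() < p) for name, p in exist_prob[style].items()}
    mu, sigma = leg_length[style]
    return style, parts, rng.normal(mu, sigma)

for _ in range(3):
    print(sample_shape())
```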


International Conference on Computer Graphics and Interactive Techniques | 2011

Probabilistic reasoning for assembly-based 3D modeling

Siddhartha Chaudhuri; Evangelos Kalogerakis; Leonidas J. Guibas; Vladlen Koltun

Assembly-based modeling is a promising approach to broadening the accessibility of 3D modeling. In assembly-based modeling, new models are assembled from shape components extracted from a database. A key challenge in assembly-based modeling is the identification of relevant components to be presented to the user. In this paper, we introduce a probabilistic reasoning approach to this problem. Given a repository of shapes, our approach learns a probabilistic graphical model that encodes semantic and geometric relationships among shape components. The probabilistic model is used to present components that are semantically and stylistically compatible with the 3D model that is being assembled. Our experiments indicate that the probabilistic model increases the relevance of presented components.
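
The component-ranking idea can be caricatured with plain co-occurrence statistics: given the parts already placed, rank candidate categories by how often they co-occur with them. The categories, counts, and smoothing below are hypothetical stand-ins for the learned graphical model.

```python
import numpy as np

# Hypothetical co-occurrence counts learned from a shape repository:
# rows = component category already in the assembly, cols = candidate category.
categories = ["seat", "armrest", "wheel", "sail"]
cooccur = np.array([
    [50, 30, 20, 1],   # seat
    [30, 35, 15, 0],   # armrest
    [20, 15, 25, 0],   # wheel
    [1,  0,  0, 10],   # sail
], dtype=float)

def suggest(assembly, top_k=2):
    """Rank candidate categories by smoothed co-occurrence with the partial assembly."""
    idx = [categories.index(c) for c in assembly]
    probs = cooccur[idx] + 1.0                 # add-one smoothing
    probs /= probs.sum(axis=1, keepdims=True)
    score = np.log(probs).sum(axis=0)          # naive-Bayes-style combination
    order = np.argsort(score)[::-1]
    return [categories[i] for i in order if categories[i] not in assembly][:top_k]

print(suggest(["seat", "wheel"]))   # seat + wheels suggest office-chair parts
```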


International Conference on Computer Graphics and Interactive Techniques | 2013

Learning part-based templates from large collections of 3D shapes

Vladimir G. Kim; Wilmot Li; Niloy J. Mitra; Siddhartha Chaudhuri; Stephen DiVerdi; Thomas A. Funkhouser

As large repositories of 3D shape collections continue to grow, understanding the data, especially encoding the inter-model similarity and its variations, is of central importance. For example, many data-driven approaches now rely on access to semantic segmentation information, accurate inter-model point-to-point correspondence, and deformation models that characterize the model collections. Existing approaches, however, are either supervised, requiring manual labeling, or employ super-linear matching algorithms and are thus unsuited for analyzing large collections spanning many thousands of models. We propose an automatic algorithm that starts with an initial template model and then jointly optimizes for part segmentation, point-to-point surface correspondence, and a compact deformation model to best explain the input model collection. As output, the algorithm produces a set of probabilistic part-based templates that group the original models into clusters capturing their styles and variations. We evaluate our algorithm on several standard datasets and demonstrate its scalability by analyzing much larger collections of up to thousands of shapes.
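
The joint optimization can be pictured as an alternation between a segmentation step and a model-refitting step. The sketch below uses k-means on synthetic 2-D points as a deliberately simplified stand-in for the paper's richer template fitting (which also optimizes correspondences and deformation parameters).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "shape": a point cloud; stand-in "template": K part centroids.
points = np.concatenate([rng.normal(loc, 0.2, (100, 2))
                         for loc in [(0, 0), (2, 0), (1, 2)]])

def fit_template(points, k=3, iters=10):
    """Alternate segmentation (assign points to parts) and refitting
    (update part parameters) -- a k-means caricature of joint optimization."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)                       # segmentation step
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])         # refitting step
    return centers, labels

centers, labels = fit_template(points)
print(np.round(centers, 2))
```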


International Conference on Computer Graphics and Interactive Techniques | 2015

Semantic shape editing using deformation handles

Mehmet Ersin Yumer; Siddhartha Chaudhuri; Jessica K. Hodgins; Levent Burak Kara

We propose a shape editing method where the user creates geometric deformations using a set of semantic attributes, thus avoiding the need for detailed geometric manipulations. In contrast to prior work, we focus on continuous deformations instead of discrete part substitutions. Our method provides a platform for quick design explorations and allows non-experts to produce semantically guided shape variations that are otherwise difficult to attain. We crowdsource a large set of pairwise comparisons between the semantic attributes and geometry and use this data to learn a continuous mapping from the semantic attributes to geometry. The resulting map enables simple and intuitive shape manipulations based solely on the learned attributes. We demonstrate our method on large datasets using two different user interaction modes and evaluate its usability with a set of user studies.
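
The learned attribute-to-geometry map can be illustrated with ordinary least squares. In the sketch below, the attribute scores and deformation-handle parameters are synthetic, and a linear map stands in for whatever regressor the method actually learns from the crowdsourced comparisons.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: per-shape semantic attribute scores (e.g. how
# "sporty" a car looks, distilled from pairwise comparisons) paired with
# low-dimensional deformation-handle parameters.
attrs = rng.uniform(0, 1, (200, 2))            # [sporty, comfortable]
true_map = np.array([[1.5, -0.3, 0.8],
                     [-0.2, 1.1, 0.4]])
handles = attrs @ true_map + rng.normal(0, 0.05, (200, 3))

# Fit a continuous attribute -> geometry map by least squares.
W, *_ = np.linalg.lstsq(attrs, handles, rcond=None)

def edit(current_attrs, delta):
    """Semantic edit: nudge an attribute, read off new handle parameters."""
    return (np.asarray(current_attrs) + np.asarray(delta)) @ W

print(np.round(edit([0.5, 0.5], [0.3, 0.0]), 3))  # make the shape "sportier"
```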


Computer Vision and Pattern Recognition | 2017

3D Shape Segmentation with Projective Convolutional Networks

Evangelos Kalogerakis; Melinos Averkiou; Subhransu Maji; Siddhartha Chaudhuri

This paper introduces a deep architecture for segmenting 3D objects into their labeled semantic parts. Our architecture combines image-based Fully Convolutional Networks (FCNs) and surface-based Conditional Random Fields (CRFs) to yield coherent segmentations of 3D shapes. The image-based FCNs are used for efficient view-based reasoning about 3D object parts. Through a special projection layer, FCN outputs are effectively aggregated across multiple views and scales, then are projected onto the 3D object surfaces. Finally, a surface-based CRF combines the projected outputs with geometric consistency cues to yield coherent segmentations. The whole architecture (multi-view FCNs and CRF) is trained end-to-end. Our approach significantly outperforms existing state-of-the-art methods on the currently largest segmentation benchmark (ShapeNet). Finally, we demonstrate promising segmentation results on noisy 3D shapes acquired from consumer-grade depth cameras.
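
Stripped of the networks, the projection/aggregation step amounts to pooling per-view label distributions onto surface points. A toy sketch with random "FCN outputs" and a visibility mask (all shapes and numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: 50 surface points, 4 rendered views, 5 part labels.
# view_probs[v, p] = per-view label distribution for point p;
# visible[v, p] marks whether point p is visible in view v.
n_views, n_points, n_labels = 4, 50, 5
view_probs = rng.dirichlet(np.ones(n_labels), (n_views, n_points))
visible = rng.random((n_views, n_points)) < 0.7

def project_and_pool(view_probs, visible):
    """Aggregate per-view outputs onto surface points by max-pooling
    confidence over the views in which each point is visible."""
    masked = np.where(visible[..., None], view_probs, 0.0)
    pooled = masked.max(axis=0)                    # pool across views
    pooled /= pooled.sum(axis=1, keepdims=True) + 1e-12
    return pooled

surface_probs = project_and_pool(view_probs, visible)
print(surface_probs[0].round(3), "-> label", surface_probs[0].argmax())
```

A CRF would then smooth these per-point distributions using surface adjacency; that refinement is omitted here.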


International Conference on Computer Graphics and Interactive Techniques | 2014

Creating consistent scene graphs using a probabilistic grammar

Tianqiang Liu; Siddhartha Chaudhuri; Vladimir G. Kim; Qixing Huang; Niloy J. Mitra; Thomas A. Funkhouser

Growing numbers of 3D scenes in online repositories provide new opportunities for data-driven scene understanding, editing, and synthesis. Despite the plethora of data now available online, most of it cannot be effectively used for data-driven applications because it lacks consistent segmentations, category labels, and/or functional groupings required for co-analysis. In this paper, we develop algorithms that infer such information via parsing with a probabilistic grammar learned from examples. First, given a collection of scene graphs with consistent hierarchies and labels, we train a probabilistic hierarchical grammar to represent the distributions of shapes, cardinalities, and spatial relationships of semantic objects within the collection. Then, we use the learned grammar to parse new scenes to assign them segmentations, labels, and hierarchies consistent with the collection. During experiments with these algorithms, we find that: they work effectively for scene graphs for indoor scenes commonly found online (bedrooms, classrooms, and libraries); they outperform alternative approaches that consider only shape similarities and/or spatial relationships without hierarchy; they require relatively small sets of training data; they are robust to moderate over-segmentation in the inputs; and, they can robustly transfer labels from one data set to another. As a result, the proposed algorithms can be used to provide consistent hierarchies for large collections of scenes within the same semantic class.
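
A probabilistic grammar of this kind can be sketched as production rules with probabilities, with a parsed hierarchy scored by the product of the rules it uses. The bedroom grammar below is a made-up miniature, not the learned one.

```python
import math

# Hypothetical probabilistic grammar for bedroom scene graphs:
# nonterminal -> list of (probability, child symbols).
rules = {
    "Bedroom":   [(0.7, ["SleepArea", "Storage"]), (0.3, ["SleepArea"])],
    "SleepArea": [(0.8, ["bed", "nightstand"]), (0.2, ["bed"])],
    "Storage":   [(1.0, ["wardrobe"])],
}

def log_prob(symbol, tree):
    """Log-probability of a scene hierarchy under the grammar.
    `tree` mirrors the derivation: {child_symbol: subtree or None}."""
    if symbol not in rules:              # terminal object category
        return 0.0
    children = sorted(tree)
    for p, rhs in rules[symbol]:
        if sorted(rhs) == children:
            return math.log(p) + sum(
                log_prob(c, tree[c] or {}) for c in children)
    return float("-inf")                 # hierarchy not derivable

scene = {"SleepArea": {"bed": None, "nightstand": None},
         "Storage": {"wardrobe": None}}
print(log_prob("Bedroom", scene))
```

Parsing a new scene would search over hierarchies and labelings to maximize this score; the sketch only evaluates one given hierarchy.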


ACM Transactions on Graphics | 2017

GRASS: generative recursive autoencoders for shape structures

Jun Li; Kai Xu; Siddhartha Chaudhuri; Ersin Yumer; Hao Zhang; Leonidas J. Guibas

We introduce a novel neural network architecture for encoding and synthesis of 3D shapes, particularly their structures. Our key insight is that 3D shapes are effectively characterized by their hierarchical organization of parts, which reflects fundamental intra-shape relationships such as adjacency and symmetry. We develop a recursive neural net (RvNN) based autoencoder to map a flat, unlabeled, arbitrary part layout to a compact code. The code effectively captures hierarchical structures of man-made 3D objects of varying structural complexities despite being fixed-dimensional: an associated decoder maps a code back to a full hierarchy. The learned bidirectional mapping is further tuned using an adversarial setup to yield a generative model of plausible structures, from which novel structures can be sampled. Finally, our structure synthesis framework is augmented by a second trained module that produces fine-grained part geometry, conditioned on global and local structural context, leading to a full generative pipeline for 3D shapes. We demonstrate that without supervision, our network learns meaningful structural hierarchies adhering to perceptual grouping principles, produces compact codes which enable applications such as shape classification and partial matching, and supports shape synthesis and interpolation with significant variations in topology and geometry.
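
The core RvNN idea, recursively merging two fixed-dimensional child codes into one parent code of the same dimension (and splitting on the way back down), fits in a few lines. The weights below are random and untrained, purely to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 8  # fixed code dimension

# Untrained merge/split weights; in GRASS these are learned end-to-end.
W_merge = rng.normal(0, 0.1, (2 * D, D))
W_split = rng.normal(0, 0.1, (D, 2 * D))

def encode(node):
    """Recursively merge child codes bottom-up into one fixed-size code.
    A node is either a leaf part code (np.ndarray) or a (left, right) pair."""
    if isinstance(node, np.ndarray):
        return node
    left, right = (encode(c) for c in node)
    return np.tanh(np.concatenate([left, right]) @ W_merge)

def decode_step(code):
    """One decoder step: split a parent code into two child codes."""
    out = np.tanh(code @ W_split)
    return out[:D], out[D:]

# Hierarchy: ((leg, leg), seat) -- arbitrary leaf codes for illustration.
leaf = lambda: rng.normal(0, 1, D)
root = encode(((leaf(), leaf()), leaf()))
print(root.shape, decode_step(root)[0].shape)
```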


International Conference on Computer Graphics and Interactive Techniques | 2017

Learning Local Shape Descriptors from Part Correspondences with Multiview Convolutional Networks

Haibin Huang; Evangelos Kalogerakis; Siddhartha Chaudhuri; Duygu Ceylan; Vladimir G. Kim; Ersin Yumer

We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. The descriptor is produced by a convolutional network that is trained to embed geometrically and semantically similar points close to one another in descriptor space. The network processes surface neighborhoods around points on a shape that are captured at multiple scales by a succession of progressively zoomed-out views, taken from carefully selected camera positions. We leverage two extremely large sources of data to train our network. First, since our network processes rendered views in the form of 2D images, we repurpose architectures pretrained on massive image datasets. Second, we automatically generate a synthetic dense point correspondence dataset by nonrigid alignment of corresponding shape parts in a large collection of segmented 3D models. As a result of these design choices, our network effectively encodes multiscale local context and fine-grained surface detail. Our network can be trained to produce either category-specific descriptors or more generic descriptors by learning from multiple shape categories. Once trained, at test time, the network extracts local descriptors for shapes without requiring any part segmentation as input. Our method can produce effective local descriptors even for shapes whose category is unknown or different from the ones used while training. We demonstrate through several experiments that our learned local descriptors are more discriminative compared to state-of-the-art alternatives and are effective in a variety of shape analysis applications.
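
The embedding objective is in the family of siamese/contrastive losses: corresponding points are pulled together in descriptor space, non-corresponding ones pushed apart by a margin. A minimal sketch with synthetic descriptors; the loss form is a generic stand-in, not necessarily the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(5)

def contrastive_loss(desc_a, desc_b, same, margin=1.0):
    """Contrastive loss over descriptor pairs: corresponding points (same=1)
    are pulled together, others pushed at least `margin` apart."""
    d = np.linalg.norm(desc_a - desc_b, axis=1)
    pos = same * d**2
    neg = (1 - same) * np.maximum(0.0, margin - d)**2
    return (pos + neg).mean()

# Toy batch of 128-D descriptors for point pairs sampled from a
# (hypothetical) dense correspondence dataset.
a = rng.normal(0, 1, (32, 128))
b = a + rng.normal(0, 0.1, (32, 128))       # near-duplicates: "corresponding"
same = np.ones(32)
same[16:] = 0
b[16:] = rng.normal(0, 1, (16, 128))        # unrelated points
print(round(contrastive_loss(a, b, same), 4))
```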


ACM Transactions on Graphics | 2017

ComplementMe: weakly-supervised component suggestions for 3D modeling

Minhyuk Sung; Hao Su; Vladimir G. Kim; Siddhartha Chaudhuri; Leonidas J. Guibas

Assembly-based tools provide a powerful modeling paradigm for non-expert shape designers. However, choosing a component from a large shape repository and aligning it to a partial assembly can become a daunting task. In this paper we describe novel neural network architectures for suggesting complementary components and their placement for an incomplete 3D part assembly. Unlike most existing techniques, our networks are trained on unlabeled data obtained from public online repositories, and do not rely on consistent part segmentations or labels. The absence of labels poses a challenge in indexing the database of parts for retrieval. We address it by jointly training embedding and retrieval networks, where the first indexes parts by mapping them to a low-dimensional feature space and the second maps partial assemblies to appropriate complements. The combinatorial nature of part arrangements poses another challenge, since the retrieval network is not a function: several complements can be appropriate for the same input. Thus, instead of predicting a single output, we train our network to predict a probability distribution over the space of part embeddings. This allows our method to deal with ambiguities and naturally enables a UI that seamlessly integrates user preferences into the design process. We demonstrate that our method can be used to design complex shapes with minimal or no user input. To evaluate our approach we develop a novel benchmark for component suggestion systems, demonstrating significant improvement over state-of-the-art techniques.
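
Predicting a distribution rather than a single point can be sketched as scoring database part embeddings under a small Gaussian mixture. Everything below (embeddings, mixture parameters) is synthetic, standing in for the trained networks.

```python
import numpy as np

rng = np.random.default_rng(6)

def gmm_logpdf(x, weights, means, sigma):
    """Log-density of points x under an isotropic Gaussian mixture."""
    d2 = ((x[:, None] - means[None]) ** 2).sum(axis=2)      # (n, k)
    comp = (np.log(weights) - d2 / (2 * sigma**2)
            - x.shape[1] / 2 * np.log(2 * np.pi * sigma**2))
    m = comp.max(axis=1, keepdims=True)                     # log-sum-exp
    return (m + np.log(np.exp(comp - m).sum(axis=1, keepdims=True))).ravel()

# Hypothetical learned part embeddings (the embedding network's output) and
# a 2-mode predicted distribution for one partial assembly: either kind of
# complement may fit, hence a multimodal prediction.
parts = rng.normal(0, 1, (500, 16))
weights = np.array([0.6, 0.4])
means = np.stack([parts[3] + 0.05, parts[42] - 0.05])
scores = gmm_logpdf(parts, weights, means, sigma=0.3)
print("top suggestions:", np.argsort(scores)[::-1][:5])
```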


Symposium on Geometry Processing | 2016

CustomCut: on-demand extraction of customized 3D parts with 2D sketches

Xuekun Guo; Juncong Lin; Kai Xu; Siddhartha Chaudhuri; Xiaogang Jin

Several applications in shape modeling and exploration require identification and extraction of a 3D shape part matching a 2D sketch. We present CustomCut, an on-demand part extraction algorithm. Given a sketched query, CustomCut automatically retrieves partially matching shapes from a database, identifies the region optimally matching the query in each shape, and extracts this region to produce a customized part that can be used in various modeling applications. In contrast to earlier work on sketch-based retrieval of predefined parts, our approach can extract arbitrary parts from input shapes and does not rely on a prior segmentation into semantic components. The method is based on a novel data structure for fast retrieval of partial matches: the randomized compound k-NN graph built on multi-view shape projections. We also employ a coarse-to-fine strategy to progressively refine part boundaries down to the level of individual faces. Experimental results indicate that our approach provides an intuitive and easy means to extract customized parts from a shape database, and significantly expands the design space for the user. We demonstrate several applications of our method to shape design and exploration.
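
Graph-based retrieval of this flavor boils down to greedy descent on a k-NN graph: hop to whichever neighbor is closest to the query until no neighbor improves, with random restarts for robustness. A toy sketch on synthetic descriptors; the compound multi-view graph construction itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical descriptors for 300 database shapes.
db = rng.normal(0, 1, (300, 16))
K = 8
dists = np.linalg.norm(db[:, None] - db[None], axis=2)   # brute force here
knn = np.argsort(dists, axis=1)[:, 1:K + 1]              # k-NN graph edges

def greedy_search(query, start):
    """Walk the k-NN graph, always moving to the neighbor closest to the
    query, until no neighbor improves."""
    cur = start
    while True:
        cand = np.append(knn[cur], cur)
        best = cand[np.argmin(np.linalg.norm(db[cand] - query, axis=1))]
        if best == cur:
            return cur
        cur = best

def search(query, restarts=5):
    """Random restarts guard against local minima of the greedy walk."""
    starts = rng.choice(len(db), restarts, replace=False)
    return min((greedy_search(query, s) for s in starts),
               key=lambda i: np.linalg.norm(db[i] - query))

query = db[123] + rng.normal(0, 0.05, 16)
print("retrieved:", search(query), "(planted target: 123)")
```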

Collaboration


Dive into Siddhartha Chaudhuri's collaborations.

Top Co-Authors

Evangelos Kalogerakis, University of Massachusetts Amherst
Kai Xu, National University of Defense Technology
Hao Zhang, Simon Fraser University
Haibin Huang, University of Massachusetts Amherst