Tianjia Shao
Zhejiang University
Publications
Featured research published by Tianjia Shao.
International Conference on Computer Graphics and Interactive Techniques | 2012
Tianjia Shao; Weiwei Xu; Kun Zhou; Jingdong Wang; Dongping Li; Baining Guo
We present an interactive approach to semantic modeling of indoor scenes with a consumer-level RGBD camera. Using our approach, the user first takes an RGBD image of an indoor scene, which is automatically segmented into a set of regions with semantic labels. If the segmentation is not satisfactory, the user can draw some strokes to guide the algorithm to achieve better results. After the segmentation is finished, the depth data of each semantic region is used to retrieve a matching 3D model from a database. Each model is then transformed according to the image depth to yield the scene. For large scenes where a single image can only cover one part of the scene, the user can take multiple images to construct other parts of the scene. The 3D models built for all images are then transformed and unified into a complete scene. We demonstrate the efficiency and robustness of our approach by modeling several real-world scenes.
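The retrieval step described above, where a segmented region's depth data is matched against a model database, can be sketched as a nearest-neighbor search over a coarse shape descriptor. The descriptor and database entries below are hypothetical stand-ins, not the paper's actual representation.

```python
# Minimal sketch: reduce a region's 3D depth samples to bounding-box extents
# and retrieve the database model with the closest descriptor.

def depth_descriptor(depth_points):
    """Coarse shape descriptor: axis-aligned extents of the region's points."""
    xs, ys, zs = zip(*depth_points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def retrieve_model(region_points, database):
    """Return the database entry whose descriptor is closest to the region's."""
    target = depth_descriptor(region_points)
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(entry["descriptor"], target))
    return min(database, key=dist)

database = [
    {"name": "chair", "descriptor": (0.5, 1.0, 0.5)},
    {"name": "table", "descriptor": (1.5, 0.8, 0.9)},
]
region = [(0.0, 0.0, 0.0), (0.45, 0.95, 0.5)]  # two extreme depth samples
print(retrieve_model(region, database)["name"])  # closest to the chair extents
```

A real system would use a richer descriptor and then solve for the transform aligning the retrieved model to the image depth, as the abstract describes.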
International Conference on Computer Graphics and Interactive Techniques | 2010
Libin Liu; KangKang Yin; Michiel van de Panne; Tianjia Shao; Weiwei Xu
Human motions are the product of internal and external forces, but these forces are very difficult to measure in a general setting. Given a motion capture trajectory, we propose a method to reconstruct its open-loop control and the implicit contact forces. The method employs a strategy based on randomized sampling of the control within user-specified bounds, coupled with forward dynamics simulation. Sampling-based techniques are well suited to this task because of their lack of dependence on derivatives, which are difficult to estimate in contact-rich scenarios. They are also easy to parallelize, which we exploit in our implementation on a compute cluster. We demonstrate reconstruction of a diverse set of captured motions, including walking, running, and contact rich tasks such as rolls and kip-up jumps. We further show how the method can be applied to physically based motion transformation and retargeting, physically plausible motion variations, and reference-trajectory-free idling motions. Alongside the successes, we point out a number of limitations and directions for future work.
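The core loop of the method above, randomized sampling of open-loop control within bounds plus forward simulation, can be illustrated on a toy 1D point mass. The dynamics, bounds, and target trajectory here are illustrative, not from the paper.

```python
import random

# Sketch: sample candidate force sequences within user bounds, forward-simulate
# each, and keep the one whose trajectory best matches the "captured" one.

def simulate(forces, dt=0.1, mass=1.0):
    """Forward dynamics of a 1D point mass under a force sequence."""
    x, v, traj = 0.0, 0.0, []
    for f in forces:
        v += (f / mass) * dt
        x += v * dt
        traj.append(x)
    return traj

def reconstruct(target, n_samples=2000, bounds=(-2.0, 2.0), seed=0):
    """Derivative-free reconstruction by randomized sampling of the control."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_samples):
        forces = [rng.uniform(*bounds) for _ in target]
        err = sum((a - b) ** 2 for a, b in zip(simulate(forces), target))
        if err < best_err:
            best, best_err = forces, err
    return best, best_err

target = simulate([1.0, 1.0, -0.5, 0.0])  # "captured" trajectory
forces, err = reconstruct(target)
print(round(err, 4))
```

Because each sample is independent, this loop parallelizes trivially, which is the property the paper exploits on a compute cluster.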
Computer Graphics Forum | 2011
Tianjia Shao; Weiwei Xu; KangKang Yin; Jingdong Wang; Kun Zhou; Baining Guo
We propose a sketch-based 3D shape retrieval system that is substantially more discriminative and robust than existing systems, especially for complex models. The power of our system comes from a combination of a contour-based 2D shape representation and a robust sampling-based shape matching scheme. They are defined over discriminative local features and applicable for partial sketches; robust to noise and distortions in hand drawings; and consistent when strokes are added progressively. Our robust shape matching, however, requires dense sampling and registration and incurs a high computational cost. We thus devise critical acceleration methods to achieve interactive performance: precomputing kNN graphs that record transformations between neighboring contour images and enable fast online shape alignment; pruning sampling and shape registration strategically and hierarchically; and parallelizing shape matching on multi-core platforms or GPUs. We demonstrate the effectiveness of our system through various experiments, comparisons, and user studies.
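The kNN-graph precomputation mentioned above can be sketched as follows: each contour image (represented here by a toy feature vector) stores its k nearest neighbors, so online matching can walk the graph rather than scan the whole set. Features and k are illustrative assumptions.

```python
# Sketch: brute-force construction of a kNN graph over contour descriptors.

def knn_graph(features, k=2):
    """Map each feature index to the indices of its k nearest neighbors."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    graph = {}
    for i, f in enumerate(features):
        others = [(d2(f, g), j) for j, g in enumerate(features) if j != i]
        graph[i] = [j for _, j in sorted(others)[:k]]
    return graph

features = [(0, 0), (0, 1), (5, 5), (5, 6), (0, 2)]
print(knn_graph(features, k=2))
```

In the paper the graph edges additionally record the transformation between neighboring contour images, which is what makes fast online alignment possible; that part is omitted here.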
International Conference on Computer Graphics and Interactive Techniques | 2016
Chen Cao; Hongzhi Wu; Yanlin Weng; Tianjia Shao; Kun Zhou
We present a novel image-based representation for dynamic 3D avatars, which allows effective handling of various hairstyles and headwear, and can generate expressive facial animations with fine-scale details in real-time. We develop algorithms for creating an image-based avatar from a set of sparsely captured images of a user, using an off-the-shelf web camera at home. An optimization method is proposed to construct a topologically consistent morphable model that approximates the dynamic hair geometry in the captured images. We also design a real-time algorithm for synthesizing novel views of an image-based avatar, so that the avatar follows the facial motions of an arbitrary actor. Compelling results from our pipeline are demonstrated on a variety of cases.
International Conference on Computer Graphics and Interactive Techniques | 2014
Tianjia Shao; Aron Monszpart; Youyi Zheng; Bongjin Koo; Weiwei Xu; Kun Zhou; Niloy J. Mitra
Missing data due to occlusion is a key challenge in 3D acquisition, particularly in cluttered man-made scenes. Such partial information about the scenes limits our ability to analyze and understand them. In this work we abstract such environments as collections of cuboids and hallucinate geometry in the occluded regions by globally analyzing the physical stability of the resultant arrangements of the cuboids. Our algorithm extrapolates the cuboids into the unseen regions to infer both their corresponding geometric attributes (e.g., size, orientation) and how the cuboids topologically interact with each other (e.g., touch or fixed). The resultant arrangement provides an abstraction for the underlying structure of the scene that can then be used for a range of common geometry processing tasks. We evaluate our algorithm on a large number of test scenes with varying complexity, validate the results on existing benchmark datasets, and demonstrate the use of the recovered cuboid-based structures towards object retrieval, scene completion, etc.
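The physical-stability reasoning above can be illustrated by its simplest local ingredient: in 2D, a cuboid resting on another is stable only if its center of mass projects inside the supporting footprint. The paper's analysis is global over the whole arrangement; this toy test is only the per-contact building block.

```python
# Toy 2D stability test: a box is supported if its center of mass lies over
# the footprint interval of the cuboid beneath it.

def supported(box, support):
    """box/support: (x_min, x_max) footprints; box rests on top of support."""
    cx = (box[0] + box[1]) / 2.0
    return support[0] <= cx <= support[1]

print(supported((1.0, 3.0), (0.0, 4.0)))  # center at 2.0, inside [0, 4]
print(supported((3.5, 6.0), (0.0, 4.0)))  # center at 4.75, outside [0, 4]
```

An unstable configuration under this test is evidence that occluded geometry (a hidden support) must exist, which is the intuition behind hallucinating cuboids in unseen regions.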
International Conference on Computer Graphics and Interactive Techniques | 2013
Tianjia Shao; Wilmot Li; Kun Zhou; Weiwei Xu; Baining Guo; Niloy J. Mitra
Concept sketches are popularly used by designers to convey pose and function of products. Understanding such sketches, however, requires special skills to form a mental 3D representation of the product geometry by linking parts across the different sketches and imagining the intermediate object configurations. Hence, the sketches can remain inaccessible to many, especially non-designers. We present a system to facilitate easy interpretation and exploration of concept sketches. Starting from crudely specified incomplete geometry, often inconsistent across the different views, we propose a globally-coupled analysis to extract part correspondence and inter-part junction information that best explain the different sketch views. The user can then interactively explore the abstracted object to gain better understanding of the product functions. Our key technical contribution is performing shape analysis without access to any coherent 3D geometric model by reasoning in the space of inter-part relations. We evaluate our system on various concept sketches obtained from popular product design books and websites.
International Conference on Computer Graphics and Interactive Techniques | 2016
Menglei Chai; Tianjia Shao; Hongzhi Wu; Yanlin Weng; Kun Zhou
We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.
Computational Visual Media | 2016
Yuliang Rong; Youyi Zheng; Tianjia Shao; Yin Yang; Kun Zhou
Inferring the functionality of an object from a single RGBD image is difficult for two reasons: lack of semantic information about the object, and missing data due to occlusion. In this paper, we present an interactive framework to recover a 3D functional prototype from a single RGBD image. Instead of precisely reconstructing the object geometry for the prototype, we mainly focus on recovering the object’s functionality along with its geometry. Our system allows users to scribble on the image to create initial rough proxies for the parts. After user annotation of high-level relations between parts, our system automatically jointly optimizes detailed joint parameters (axis and position) and part geometry parameters (size, orientation, and position). Such prototype recovery enables a better understanding of the underlying image geometry and allows for further physically plausible manipulation. We demonstrate our framework on various indoor objects with simple or hybrid functions.
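The joint-parameter optimization above, fitting an axis and position that explain observed part motion, can be sketched in 2D: given a part point observed before and after a known rotation, search for the hinge position that best explains the motion. A coarse grid search stands in for the paper's joint optimization; the setup is hypothetical.

```python
import math

# Sketch: recover a hinge's x-position from one observed point correspondence
# under a known rotation angle, by minimizing reprojection error.

def rotate_about(p, center, angle):
    """Rotate point p about center by angle (radians)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

def fit_hinge_x(p_before, p_after, angle, candidates):
    """Grid-search the hinge x-position minimizing the prediction error."""
    def err(hx):
        q = rotate_about(p_before, (hx, 0.0), angle)
        return (q[0] - p_after[0]) ** 2 + (q[1] - p_after[1]) ** 2
    return min(candidates, key=err)

true_hinge = (2.0, 0.0)
p0 = (3.0, 0.0)
p1 = rotate_about(p0, true_hinge, math.pi / 2)  # observed after a 90° swing
hx = fit_hinge_x(p0, p1, math.pi / 2, [i * 0.5 for i in range(9)])
print(hx)  # recovers the hinge at x = 2.0
```

The actual system optimizes joint and part-geometry parameters jointly, with user-annotated relations constraining the search space.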
International Conference on Computer Graphics and Interactive Techniques | 2016
Tianjia Shao; Dongping Li; Yuliang Rong; Changxi Zheng; Kun Zhou
We present a technique for parsing widely used furniture assembly instructions, and reconstructing the 3D models of furniture components and their dynamic assembly process. Our technique takes as input a multi-step assembly instruction in a vector graphic format and begins by grouping the vector graphic primitives into semantic elements representing individual furniture parts, mechanical connectors (e.g., screws, bolts and hinges), arrows, visual highlights, and numbers. To reconstruct the dynamic assembly process depicted over multiple steps, our system identifies previously built 3D furniture components when parsing a new step, and uses them to address the challenge of occlusions while generating new 3D components incrementally. With a wide range of examples covering a variety of furniture types, we demonstrate the use of our system to animate the 3D furniture assembly process and, beyond that, semantic-aware furniture editing as well as the fabrication of personalized furniture.
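The primitive-grouping step above can be sketched as clustering by spatial proximity: primitives (reduced here to 2D anchor points) that lie close together are merged into one semantic element via connected components. The threshold and points are illustrative assumptions, not the paper's grouping criteria.

```python
# Sketch: group vector-graphic primitives into clusters whose members are
# within eps of some other member (single-linkage connected components).

def group_primitives(points, eps=1.0):
    """Return index groups of points connected by distances <= eps."""
    def close(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= eps ** 2
    groups, assigned = [], [False] * len(points)
    for i in range(len(points)):
        if assigned[i]:
            continue
        stack, comp = [i], []
        assigned[i] = True
        while stack:
            j = stack.pop()
            comp.append(j)
            for k in range(len(points)):
                if not assigned[k] and close(points[j], points[k]):
                    assigned[k] = True
                    stack.append(k)
        groups.append(sorted(comp))
    return groups

prims = [(0, 0), (0.5, 0), (10, 10), (10.5, 10.2), (0, 0.8)]
print(group_primitives(prims))  # two clusters of nearby primitives
```

The real parser also uses primitive type and drawing semantics (arrows, highlights, numbers), not just position, to decide what belongs together.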
IEEE Transactions on Visualization and Computer Graphics | 2018
Minmin Lin; Tianjia Shao; Youyi Zheng; Niloy J. Mitra; Kun Zhou
This paper presents a method to reconstruct a functional mechanical assembly from raw scans. Given multiple input scans of a mechanical assembly, our method first extracts the functional mechanical parts using a motion-guided, patch-based hierarchical registration and labeling algorithm. The extracted functional parts are then parameterized from the segments, and their internal mechanical relations are encoded by a graph. We use a joint optimization to solve for the best geometry, placement, and orientation of each part, to obtain a final workable mechanical assembly. We demonstrate our algorithm on various types of mechanical assemblies with diverse settings and validate our output using physical fabrication.
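The graph encoding described above, parameterized parts as nodes and mechanical relations as edges, can be sketched with plain data structures. The part names, parameters, and relation labels below are hypothetical illustrations.

```python
# Sketch: encode extracted parts and their mechanical relations as a graph.

parts = {
    "gear_a": {"type": "gear", "teeth": 24},
    "gear_b": {"type": "gear", "teeth": 12},
    "shaft":  {"type": "axle", "length": 0.1},
}
relations = [
    ("gear_a", "gear_b", "meshes"),
    ("gear_b", "shaft", "fixed"),
]

def neighbors(part, relations):
    """All parts mechanically related to `part`, with the relation label."""
    out = []
    for a, b, rel in relations:
        if a == part:
            out.append((b, rel))
        elif b == part:
            out.append((a, rel))
    return out

print(neighbors("gear_b", relations))  # [('gear_a', 'meshes'), ('shaft', 'fixed')]
```

In the paper, the joint optimization then solves over this graph for part geometry, placement, and orientation so that every encoded relation is physically satisfied.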