Publications


Featured research published by Bin Zhou.


The Visual Computer | 2013

Deformable model for estimating clothed and naked human shapes from a single image

Xiaowu Chen; Yu Guo; Bin Zhou; Qinping Zhao

Estimation of human shape from images has numerous applications, ranging from graphics to surveillance. A single image provides insufficient constraints (e.g., due to clothing), making human shape estimation particularly challenging. We propose a method to simultaneously estimate a person's clothed and naked shapes from a single image of that person wearing clothing. The key component of our method is a deformable model of clothed human shape. We learn this deformable model, which spans variations in pose, body, and clothes, from a training dataset. These variations are derived via non-rigid surface deformation and encoded as low-dimensional parameters. Our deformable model can produce clothed 3D meshes for different people in different poses, neither of which need appear in the training dataset. Given an input image, the deformable model is initialized with a few user-specified 2D joints and contours of the person. We then iteratively optimize the model parameters by alternating pose fitting and body fitting, obtaining the person's clothed and naked 3D shapes simultaneously. We illustrate applications of our method to texture mapping and animation. Experimental results on real images demonstrate the effectiveness of our method.
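
The alternating pose/body optimization can be pictured as a simple coordinate-descent loop. The sketch below is a minimal illustration on a synthetic linear model, not the authors' implementation; the joint layout, bases, and energy are invented stand-ins.

    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-in for the iterative fitting loop: alternately optimize a
    # "pose" block and a "body" block of a linear deformable model so that
    # its projected 2D joints match user-specified target joints.
    rng = np.random.default_rng(0)
    mean_joints = rng.normal(size=(12, 2))           # mean 2D joint layout
    pose_basis = rng.normal(size=(3, 12, 2)) * 0.1   # 3 pose parameters
    body_basis = rng.normal(size=(2, 12, 2)) * 0.1   # 2 body parameters

    def project(pose, body):
        return (mean_joints + np.tensordot(pose, pose_basis, 1)
                            + np.tensordot(body, body_basis, 1))

    target = project(np.array([0.5, -1.0, 0.2]), np.array([0.8, -0.3]))

    pose, body = np.zeros(3), np.zeros(2)
    for _ in range(10):
        # Pose fitting with the body block held fixed...
        pose = minimize(lambda p: np.sum((project(p, body) - target) ** 2), pose).x
        # ...then body fitting with the pose block held fixed.
        body = minimize(lambda b: np.sum((project(pose, b) - target) ** 2), body).x

    print("joint residual:", np.sum((project(pose, body) - target) ** 2))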


Computer Graphics Forum | 2013

Garment Modeling from a Single Image

Bin Zhou; Xiaowu Chen; Qiang Fu; Kan Guo; Ping Tan

Modeling realistic garments is essential for online shopping and many other applications, including virtual characters. Most existing methods require either a multi-camera capture setup or a restricted mannequin pose. We address garment modeling from a single input image by designing an all-pose garment outline interpretation and a shading-based detail modeling algorithm. Our method first estimates the mannequin pose and body shape from the input image. It then interprets the garment outline with oriented facets determined by the mannequin pose to generate an initial 3D garment model. Shape details such as folds and wrinkles are modeled with shape-from-shading techniques to improve the realism of the garment model. Our method achieves result quality similar to that of prior methods from just a single image, significantly improving the flexibility of garment modeling.
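
To make the shading-based detail stage concrete, here is a minimal shape-from-shading sketch: recover a small height field whose Lambertian shading under a known light matches an observed intensity image. The data, light direction, and regularizer weight are synthetic assumptions; the paper's actual energy and solver are more sophisticated.

    import numpy as np
    from scipy.optimize import minimize

    N = 16
    light = np.array([0.2, 0.3, 0.93])               # assumed light direction

    def shade(z):
        # Lambertian shading of the height field's surface normals.
        gy, gx = np.gradient(z)
        n = np.dstack([-gx, -gy, np.ones_like(z)])
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        return n @ light

    # Synthetic "wrinkle" ground truth; its shading is the observation.
    y, x = np.mgrid[0:N, 0:N] / N
    z_true = 0.05 * np.sin(8 * np.pi * x)
    I_obs = shade(z_true)

    def energy(z_flat):
        z = z_flat.reshape(N, N)
        gy, gx = np.gradient(z)
        data = np.sum((shade(z) - I_obs) ** 2)       # match observed shading
        smooth = np.sum(gx ** 2 + gy ** 2)           # mild smoothness prior
        return data + 0.01 * smooth

    res = minimize(energy, np.zeros(N * N), method="L-BFGS-B",
                   options={"maxiter": 50})
    print("shading residual:", energy(res.x))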


International Conference on Computer Graphics and Interactive Techniques | 2015

Garment modeling with a depth camera

Xiaowu Chen; Bin Zhou; Feixiang Lu; Lin Wang; Lang Bi; Ping Tan

Previous garment modeling techniques mainly focus on designing novel garments to dress virtual characters. We instead study the modeling of real garments and develop a system that is intuitive to use even for novice users. Our system includes garment component detectors and design attribute classifiers learned from a manually labeled garment image database. At modeling time, we scan the garment with a Kinect and build a rough shape by KinectFusion from the raw RGBD sequence. The detectors and classifiers identify garment components (e.g., collar, sleeve, pockets, belt, and buttons) and their design attributes (e.g., falbala collar vs. lapel collar, hubble-bubble sleeve vs. straight sleeve) from the RGB images. Our system also contains a database of 3D deformable templates for garment components. Once the components and their designs are determined, we choose the appropriate templates, stitch them together, and fit them to the initial garment mesh generated by KinectFusion. Experiments on a variety of garment styles consistently generate high-quality results.
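
The design-attribute classifiers could be as simple as an off-the-shelf SVM over appearance descriptors of the detected component regions. A minimal sketch with scikit-learn on synthetic features follows; the descriptors, labels, and class names are invented placeholders for the manually labeled database.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Hypothetical setup: each row is an appearance descriptor of a detected
    # collar region; the label is its design attribute (0 = lapel collar,
    # 1 = falbala collar). Real features would come from the labeled database.
    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, size=200)
    X = rng.normal(size=(200, 64)) + y[:, None] * 0.5  # weakly separable

    clf = SVC(kernel="rbf", C=1.0)
    print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())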


Science in China Series F: Information Sciences | 2014

Structure guided texture inpainting through multi-scale patches and global optimization for image completion

Xiaowu Chen; Bin Zhou; Yu Guo; Fang Xu; Qinping Zhao

Automatic image completion provides convenient editing of consumer images. Most image completion methods find the best patch locally and copy it into the hole region for texture inpainting. Since the best patch is fixed in size, such methods are hard to adapt efficiently to varied patterns or to content synthesis. Meanwhile, salient image structures can be estimated and propagated to guide the texture inpainting process toward more plausible results. This paper presents a novel image completion method using structure-guided texture inpainting; it requires no user interaction and proceeds in two automatic stages. In the structure completion stage, the salient structures around the hole region are detected, the missing structure curves are completed with Euler spirals, and the textures along the structure curves are completed through belief propagation. In the texture inpainting stage, we propose multi-scale patches and global optimization to inpaint the remaining texture in the hole region, guided by the completed structures. First, the hole region is divided into lattice patches at several defined patch sizes, so that the multiple patch sizes render multi-scale descriptions of the image. A multi-scale graph is then built for the hole region and formulated as a posterior probability model. Second, we design an inference algorithm based on a simulated annealing Markov chain Monte Carlo method to find a globally optimal solution to the posterior probability model. Experiments show that our method automatically completes the hole region and preserves structure shapes consistent with the existing ones in various scenarios. The texture inpainting results are more convincing with guidance from the completed structures, and our method guarantees and accelerates convergence of the global optimization.
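
The simulated annealing MCMC inference can be pictured as Metropolis updates over per-cell patch labels with a cooling schedule. Below is a toy version on random stand-in compatibility costs; the actual posterior combines multi-scale data and smoothness terms.

    import numpy as np

    rng = np.random.default_rng(2)
    H, W, L = 8, 8, 10                        # hole lattice and patch count
    pair_cost = rng.random((L, L))            # stand-in patch compatibilities
    labels = rng.integers(0, L, size=(H, W))

    def local_energy(lab, i, j, v):
        # Sum of pairwise costs between candidate label v and its neighbours.
        e = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                e += pair_cost[v, lab[ni, nj]]
        return e

    T = 1.0
    for _ in range(20000):
        i, j = rng.integers(H), rng.integers(W)
        new = rng.integers(L)
        delta = (local_energy(labels, i, j, new)
                 - local_energy(labels, i, j, labels[i, j]))
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if delta < 0 or rng.random() < np.exp(-delta / T):
            labels[i, j] = new
        T = max(0.01, T * 0.9997)             # geometric cooling schedule

    print("final temperature:", round(T, 3))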


International Journal of Software Engineering and Knowledge Engineering | 2010

Automatic Image Completion with Structure Propagation and Texture Synthesis

Xiaowu Chen; Bin Zhou; Fang Xu; Qinping Zhao

In this paper, we present a novel automatic image completion solution that proceeds in a greedy manner, inspired by the primal sketch representation model. First, an image is divided into structure (sketchable) components and texture (non-sketchable) components, and missing structures such as curves and corners are predicted by tensor voting. Second, the textures along structural sketches are synthesized with patches sampled from known structure components. Then, using texture completion priorities determined by a confidence term, a data term, and a distance term, similar image patches from known texture components are found by selecting the point with the maximum priority on the boundary of the hole region. Finally, these image patches seamlessly inpaint the missing textures of the hole region through graph cuts. The characteristics of this solution are: (1) introducing the primal sketch representation model to guide completion for visual consistency, and (2) achieving fully automatic completion. Experiments on natural images illustrate satisfactory image completion results.
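
The greedy fill order can be sketched as follows: each boundary pixel of the hole receives a priority combining the confidence, data, and distance terms, and the highest-priority location is inpainted first. The particular terms and their combination below are illustrative assumptions, not the paper's exact definitions.

    import numpy as np

    def priorities(confidence, edge_strength, dist_to_structure, boundary):
        data = edge_strength                        # stronger edges fill earlier
        distance = 1.0 / (1.0 + dist_to_structure)  # near structures fill first
        p = confidence * data * distance
        p[~boundary] = -np.inf                      # only boundary pixels compete
        return p

    rng = np.random.default_rng(3)
    shape = (32, 32)
    conf = rng.random(shape)                   # fraction of known neighbours
    edges = rng.random(shape)                  # isophote / gradient strength
    dist = rng.random(shape) * 10              # distance to completed sketch
    boundary = np.zeros(shape, bool)
    boundary[10, 5:25] = True                  # toy hole boundary

    p = priorities(conf, edges, dist, boundary)
    print("next patch centre:", np.unravel_index(np.argmax(p), shape))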


International Conference on Computer Graphics and Interactive Techniques | 2017

Adaptive synthesis of indoor scenes via activity-associated object relation graphs

Qiang Fu; Xiaowu Chen; Xiaotian Wang; Sijia Wen; Bin Zhou; Hongbo Fu

We present a system for adaptive synthesis of indoor scenes given an empty room and only a few object categories. Automatically suggesting indoor objects and proper layouts to convert an empty room into a 3D scene is challenging, since it requires interior design knowledge to balance factors such as space, path distance, illumination, and object relations in order to ensure the functional plausibility of the synthesized scenes. We exploit a database of 2D floor plans to extract object relations and to provide layout examples for scene synthesis. With the labeled human positions and directions in each plan, we detect activity relations and compute the coexistence frequency of object pairs to construct activity-associated object relation graphs. Given the input room and user-specified object categories, our system first leverages the object relation graphs and the database floor plans to suggest potential object categories beyond the specified ones, making the resulting scenes functionally complete, and then uses similar plan references to create the layout of the synthesized scenes. We show various synthesis results to demonstrate the practicability of our system and validate its usability via a user study. We also compare our system with state-of-the-art furniture layout and activity-centric scene representation methods in terms of functional plausibility and user friendliness.
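
The coexistence statistics behind the relation graphs reduce to counting how often object categories co-occur across floor plans and using the counts to suggest missing categories. A minimal sketch with an invented four-plan "database" follows; the real system also weights relations by the labeled human activities.

    from collections import Counter
    from itertools import combinations

    # Tiny invented floor-plan database: each plan is a set of categories.
    plans = [
        {"bed", "nightstand", "wardrobe", "desk"},
        {"bed", "nightstand", "lamp"},
        {"sofa", "tv", "coffee_table"},
        {"bed", "wardrobe", "lamp", "desk"},
    ]

    # Coexistence frequency of category pairs (edges of the relation graph).
    cooc = Counter()
    for objs in plans:
        cooc.update(combinations(sorted(objs), 2))

    def suggest(specified, k=3):
        # Score unspecified categories by co-occurrence with specified ones.
        scores = Counter()
        for (a, b), n in cooc.items():
            if a in specified and b not in specified:
                scores[b] += n
            if b in specified and a not in specified:
                scores[a] += n
        return [c for c, _ in scores.most_common(k)]

    # Top categories that co-occur with 'bed' (all tied in this toy data).
    print(suggest({"bed"}))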


International Conference on Virtual Reality and Visualization | 2014

Single-View Dressed Human Modeling via Morphable Template

Lin Wang; Kai Jiang; Bin Zhou; Qiang Fu; Kan Guo; Xiaowu Chen

We introduce a morphable template for dressed human modeling, called the Morphable Dressed Human (MDH) template. The template is obtained by learning basis functions from two common types of clothes (long shirts and trousers) and defining a deform-combine manipulation. Our MDH template is parametric, spans variation in both clothes and body/pose, and can generate various dressed human shapes. The template can be morphed to perform dressed human modeling from just a single image through shape fitting and deformation/texture transfer. We demonstrate the effectiveness of our MDH template in single-view dressed human estimation and garment transfer.
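
One minimal reading of the parametric template is a linear model per clothing component that is deformed by its parameters and then combined into a single dressed shape. The bases below are random stand-ins for those learned from long shirts and trousers; real components would be stitched together, not merely concatenated.

    import numpy as np

    rng = np.random.default_rng(4)
    V = 500                                           # vertices per component
    mean_shirt, mean_trousers = rng.normal(size=(2, V, 3))
    clothes_basis = rng.normal(size=(5, V, 3)) * 0.1  # 5 clothes parameters
    pose_basis = rng.normal(size=(4, V, 3)) * 0.1     # 4 body/pose parameters

    def dressed_human(clothes_params, pose_params):
        # "Deform" each component by its bases, then "combine" the pieces.
        shirt = (mean_shirt + np.tensordot(clothes_params, clothes_basis, 1)
                            + np.tensordot(pose_params, pose_basis, 1))
        trousers = mean_trousers + np.tensordot(pose_params, pose_basis, 1)
        return np.vstack([shirt, trousers])

    mesh = dressed_human(rng.normal(size=5), rng.normal(size=4))
    print(mesh.shape)   # (1000, 3)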


The Visual Computer | 2018

Efficiently consistent affinity propagation for 3D shapes co-segmentation

Xiaogang Wang; Bin Zhou; Zongji Wang; Dongqing Zou; Xiaowu Chen; Qinping Zhao

Unsupervised co-segmentation of a set of 3D shapes is a challenging problem, as no prior information is provided. The accuracy of current approaches is necessarily restricted by the accuracy of unsupervised face classification, which provides an initialization for a subsequent optimization that improves consistency between adjacent faces. However, it is exceedingly difficult to obtain a satisfactory initial pre-segmentation owing to variations in the topology and geometry of 3D shapes. In this study, we cast unsupervised 3D shape co-segmentation as an exemplar-based clustering problem, aiming to simultaneously discover optimal exemplars and obtain co-segmentation results. To this end, we introduce a novel exemplar-based clustering method based on affinity propagation for 3D shape co-segmentation, which automatically identifies representative exemplars and patterns in 3D shapes while accounting for high-order statistics, yielding consistent and accurate co-segmentation results. Experiments on various datasets, especially large sets of 200 or more shapes that would be challenging to segment manually, demonstrate that our method outperforms state-of-the-art methods.
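
For reference, standard affinity propagation is available off the shelf; the sketch below clusters synthetic per-face descriptors with scikit-learn's implementation. The paper's method extends plain AP with consistency and high-order terms, so this only shows the exemplar-based clustering core.

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(5)
    # Fake descriptors for mesh faces drawn from three underlying "parts".
    centers = rng.normal(size=(3, 16)) * 3
    feats = np.vstack([c + rng.normal(size=(100, 16)) for c in centers])

    # AP picks exemplars automatically; no cluster count is supplied.
    ap = AffinityPropagation(damping=0.9, random_state=0).fit(feats)
    print("parts found:", len(ap.cluster_centers_indices_))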


The Visual Computer | 2018

Real-time 3D scene reconstruction with dynamically moving object using a single depth camera

Feixiang Lu; Bin Zhou; Yu Zhang; Qinping Zhao

Online 3D reconstruction of real-world scenes has been attracting increasing interest from both academia and industry, especially as consumer-level depth cameras have become widely available. Most recent online reconstruction systems take live depth data from a moving Kinect camera and incrementally fuse it into a single high-quality 3D model in real time. Although most real-world scenes have a static environment, everyday objects in a scene often move, and such objects are non-trivial to reconstruct, especially when the camera itself is also moving. To solve this problem, we propose a real-time approach based on a single depth camera for simultaneous reconstruction of a dynamic object and the static environment, and we provide solutions for its key issues. In particular, we first introduce a robust optimization scheme that takes advantage of raycasted maps to segment the moving object and the background from the live depth map. The corresponding depth data are then fused into separate volumes. These volumes are raycasted to extract views of the implicit surfaces, which serve as a consistent reference frame for the next iteration of segmentation and tracking. To handle fast motion of the dynamic object and the handheld camera in the fusion stage, we propose a sequential 6D pose prediction method, which greatly increases registration robustness and avoids the registration failures that occur in conventional methods. Experimental results show that our approach reconstructs both the moving object and the static environment with rich details, and outperforms conventional methods in multiple aspects.
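
A simple constant-velocity extrapolation illustrates the flavor of sequential 6D pose prediction: the relative motion between the two most recent frames is applied once more to seed the next registration. This is an assumed prediction model for illustration; the paper's predictor may differ.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def predict_next(rot_prev, t_prev, rot_curr, t_curr):
        # Relative motion between the two most recent frames...
        delta_rot = rot_curr * rot_prev.inv()
        delta_t = t_curr - t_prev
        # ...applied once more to extrapolate the next pose, which then
        # initializes ICP-style registration under fast motion.
        return delta_rot * rot_curr, t_curr + delta_t

    r0, t0 = R.identity(), np.zeros(3)
    r1, t1 = R.from_euler("y", 5, degrees=True), np.array([0.02, 0.0, 0.0])
    r2, t2 = predict_next(r0, t0, r1, t1)
    print(r2.as_euler("yxz", degrees=True)[0], t2)   # ~10 degrees, doubled step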


Science in China Series F: Information Sciences | 2018

3D shape co-segmentation via sparse and low rank representations

Liyuan Yin; Kan Guo; Bin Zhou; Qinping Zhao

In this paper, we propose a 3D shape co-segmentation method that divides 3D shapes of the same category into consistent feature representations. We impose sparse and low-rank constraints to obtain compact feature representations across the 3D shapes. After pre-segmentation and feature extraction, we convert the co-segmentation problem into a feature clustering problem. Under the sparse and low-rank constraints, the initial geometry features are mapped into a compact coefficient space. We then gather the coefficients and weight them by a confidence weighting procedure. Finally, we apply the fuzzy cuts method for optimization to obtain the final shape co-segmentation results. Experimental results on two public benchmarks demonstrate that our approach is robust to various 3D meshes and outperforms other state-of-the-art approaches.
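
The sparse and low-rank idea can be previewed with a crude alternating-shrinkage decomposition: split a feature matrix into a low-rank part (structure shared across shapes) and a sparse part (per-shape deviations). The thresholds below are arbitrary and the solver is deliberately naive; a proper robust PCA solver would be the serious choice, and the paper's formulation operates on mapped coefficients rather than raw features.

    import numpy as np

    def svt(M, tau):
        # Singular-value shrinkage promotes low rank.
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    def shrink(M, tau):
        # Entrywise soft-thresholding promotes sparsity.
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

    rng = np.random.default_rng(6)
    X = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 40))  # rank-5 signal
    X[rng.random(X.shape) < 0.05] += 5.0                     # sparse outliers

    L_part, S_part = np.zeros_like(X), np.zeros_like(X)
    for _ in range(50):
        L_part = svt(X - S_part, tau=1.0)
        S_part = shrink(X - L_part, tau=0.5)

    print("rank of low-rank part:", np.linalg.matrix_rank(L_part, tol=1e-6))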

Collaboration

Top co-author: Ping Tan (Simon Fraser University).