Shaofan Wang
Beijing University of Technology
Publication
Featured research published by Shaofan Wang.
SIAM Journal on Discrete Mathematics | 2011
Shaofan Wang; Renhong Wang; Dehui Kong; Baocai Yin
A piecewise algebraic curve is the zero set of a bivariate spline function. This paper gives an upper bound on the Bezout number, i.e., the maximum number of intersection points between two linear piecewise algebraic curves having finitely many intersections, over arbitrary triangulations.
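For context, the classical Bezout theorem is the polynomial special case of the bound studied here: two algebraic curves of degrees m and n with finitely many common points intersect in at most mn points. The paper bounds the analogous quantity when f and g are linear splines over a triangulation rather than polynomials.

```latex
% Classical Bezout bound (polynomial special case): for algebraic curves
% C_1 = Z(f), C_2 = Z(g) with deg f = m, deg g = n and C_1 \cap C_2 finite,
\[
  \#\,\bigl(Z(f) \cap Z(g)\bigr) \;\le\; \deg f \cdot \deg g .
\]
```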
Journal of Zhejiang University Science C | 2016
Shaofan Wang; Chun Li; Dehui Kong; Baocai Yin
We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data; moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.
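The contour-extrema step described above can be sketched with a plain discrete-curvature peak detector. This is only a minimal illustration, not the authors' modified CSS map (which analyzes the contour across multiple smoothing scales); the function names and the threshold value are hypothetical.

```python
import numpy as np

def signed_curvature(contour):
    """Discrete signed curvature of a 2D contour (N x 2 array).

    kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), with derivatives
    approximated by finite differences over the point sequence.
    """
    x, y = contour[:, 0], contour[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def curvature_extrema(contour, thresh=0.1):
    """Indices of local curvature maxima (fingertip-like convexities)."""
    k = signed_curvature(contour)
    prev_k, next_k = np.roll(k, 1), np.roll(k, -1)
    return np.where((k > prev_k) & (k > next_k) & (k > thresh))[0]

# Sanity check: a unit circle has constant curvature 1, so no point is a
# strict local maximum above a threshold larger than 1.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
```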
The Visual Computer | 2015
Shaofan Wang; Dehui Kong; Juan Xue; Weijia Zhu; Min Xu; Baocai Yin; Hubert Roth
We propose connectivity-preserving geometry images (CGIMs), which map a triangular mesh onto a rectangular regular array of an image such that the reconstructed mesh incurs no sampling errors, only round-off errors in the vertex coordinates. Using permutation techniques on vertices, CGIMs first obtain a V-matrix whose elements are the vertices of the original mesh, which intrinsically preserves the vertex set and connectivity of the original mesh, and then generate a CGIM array by transforming the Cartesian coordinates of the corresponding V-matrix vertices into RGB values. Compared with traditional geometry images (GIMs), CGIMs achieve the minimum reconstruction error with a parametrization-free algorithm. We apply CGIMs to lossy compression of meshes. Experimental results show that while CGIMs are less efficient in both encoding and decoding time and require larger resolutions than traditional GIMs, they achieve higher peak signal-to-noise ratios and preserve details better than GIMs, especially with the multi-stage base color and index map scheme, because CGIMs treat details and non-details of meshes evenly as elements of the V-matrix.
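The coordinate-to-RGB step can be illustrated with a simple quantization round trip, showing why the only reconstruction error is round-off over vertex coordinates. This sketch omits the paper's V-matrix permutation construction entirely; the function names and the 8-bit channel depth are assumptions.

```python
import numpy as np

def coords_to_rgb(V, bits=8):
    """Quantize Cartesian vertex coordinates (N x 3) into RGB values.

    Each axis is normalized to [0, 2^bits - 1] independently; the
    per-axis (lo, hi) ranges must be kept to invert the mapping.
    """
    lo, hi = V.min(axis=0), V.max(axis=0)
    scale = (2**bits - 1) / np.where(hi > lo, hi - lo, 1.0)
    rgb = np.round((V - lo) * scale).astype(np.uint16)
    return rgb, (lo, hi)

def rgb_to_coords(rgb, bounds, bits=8):
    """Invert coords_to_rgb up to half a quantization step per axis."""
    lo, hi = bounds
    scale = np.where(hi > lo, hi - lo, 1.0) / (2**bits - 1)
    return rgb * scale + lo
```

The round-trip error per coordinate is at most half a quantization step, i.e. 0.5 * (hi - lo) / 255 for 8-bit channels, which is the "round-off errors over the coordinates of vertices" the abstract refers to.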
International Journal of Digital Multimedia Broadcasting | 2017
Lina Shi; Dehui Kong; Shaofan Wang; Baocai Yin
Geometry images are a class of completely regular remeshing methods for mesh representation. Traditional geometry images have difficulty achieving optimal reconstruction errors and preserving manually selected geometric details, due to the limitations of parametrization methods. To address these two issues, we propose two adaptive geometry image schemes for remeshing triangular meshes. The first scheme produces geometry images with the minimum Hausdorff error by finding the optimization direction for sampling points based on the Hausdorff distance between the original mesh and the reconstructed mesh. The second scheme produces geometry images with higher reconstruction precision over a manually selected region of interest of the input mesh, by increasing the number of sampling points over that region. Experimental results show that both schemes give promising results compared with traditional parametrization-based geometry images.
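The error measure driving the first scheme, the Hausdorff distance between the original and reconstructed surfaces, can be sketched for finite point samples. This is a brute-force point-set version (the paper works with meshes); the function name is illustrative.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (N x d), B (M x d).

    Builds the full N x M pairwise-distance matrix, then takes the larger
    of the two directed distances max-min(A -> B) and max-min(B -> A).
    """
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```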
Journal of Electrical and Computer Engineering | 2016
Huayang Li; Dehui Kong; Shaofan Wang; Baocai Yin
This paper proposes a two-stage method for hand depth image denoising and super-resolution, using bilateral filters and dictionaries learned via noise-aware orthogonal matching pursuit (NAOMP) based K-SVD. The bilateral filtering phase recovers singular points and removes artifacts on silhouettes by averaging depth data over neighborhood pixels subject to both depth-difference and RGB-similarity restrictions. The dictionary learning phase uses NAOMP to train dictionaries that separate faithful depth from noisy data. Compared with traditional OMP, NAOMP adds a residual-reduction step which effectively weakens the noise term within the residual during its decomposition in terms of atoms. Experimental results demonstrate that the bilateral filtering phase and the NAOMP-based dictionary learning phase jointly denoise both virtual and real depth images effectively.
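For reference, classical OMP, which NAOMP extends with the noise-aware residual-reduction step mentioned above, can be sketched as follows. This is only the textbook baseline, not the paper's NAOMP.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k atoms of D.

    D: dictionary with unit-norm columns (n x m); y: signal (n,).
    Each iteration picks the atom most correlated with the residual,
    then re-fits the coefficients of all chosen atoms by least squares.
    """
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        x, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ x
    coeffs[support] = x
    return coeffs, residual
```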
IEEE Transactions on Multimedia | 2016
Honglin Liu; Dehui Kong; Shaofan Wang; Baocai Yin
We propose two-dimensional pose estimation from a single range image of the human body, using sparse regression with a componentwise clustering feature point representation (CCFPR) model. CCFPR includes primary and secondary feature points. The primary feature points consist of the torso center and five extremal points of the human body, and further serve to classify all body pixels into six body components. The secondary feature points are the cluster centers of each of the five components other than the torso, obtained via K-means clustering. The human pose is obtained by learning a sparse projection matrix that maps CCFPR to the skeleton points of the human body, based on the assumption that each skeleton point can be represented by a combination of a few feature points of the associated body components. Experimental results on both virtual and real data show that, under the sparse regression model with a suitably selected cluster number, CCFPR outperforms the random decision forest approach and the predictions of the Kinect v2 sensor.
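The secondary feature points, cluster centers of the pixels of each body component, can be sketched with plain K-means. This sketch uses a deterministic farthest-point initialization for reproducibility; the abstract does not specify an initialization, so that part is an assumption, and the function name is illustrative.

```python
import numpy as np

def kmeans_centers(points, k, iters=50):
    """Cluster centers of a point set (N x d) via K-means.

    Initialization: first point, then repeatedly the point farthest
    from the centers chosen so far; then standard Lloyd iterations.
    """
    centers = [points[0].astype(float)]
    for _ in range(1, k):
        d = np.min(np.linalg.norm(points[:, None] - np.array(centers)[None],
                                  axis=-1), axis=1)
        centers.append(points[np.argmax(d)].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None],
                                          axis=-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers
```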
ICDH '14 Proceedings of the 2014 5th International Conference on Digital Home | 2014
Qianjun Wu; Shaofan Wang; Dehui Kong; Baocai Yin
Structured light techniques, which project a light pattern with a regular structure, have been widely used for depth sensing. Traditional structured light patterns of fixed density and intensity make it difficult to obtain accurate depth data. In this paper, we propose a depth sensing method for complex scenes using a multimodal pattern consisting of structured lights with different intensities and densities. By roughly initializing the depth of the scene and partitioning the scene into subregions of different depths, we construct a pattern with multiple pseudo-random speckles, each patch of which takes a suitable intensity and density with respect to a subregion of the scene. Because such a multimodal pattern decreases the blur incurred by objects at different distances, our method achieves both higher accuracy and higher efficiency than traditional structured light patterns while using only a one-shot pattern. Experimental results show that our method recovers better depth quality than a one-shot pseudo-random pattern.
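A minimal sketch of a density- and intensity-modulated speckle pattern, assuming the per-pixel density and intensity maps come from a coarse depth estimate of the scene (the function name and the map encoding are hypothetical, not the paper's construction):

```python
import numpy as np

def speckle_pattern(shape, density_map, intensity_map, seed=0):
    """Multimodal pseudo-random speckle pattern.

    density_map: per-pixel probability in [0, 1] that a speckle dot is lit;
    intensity_map: per-pixel brightness of lit dots in [0, 255].
    Lit dots are drawn independently per pixel, so each scene subregion
    gets the dot density and brightness assigned to it.
    """
    rng = np.random.default_rng(seed)
    dots = rng.random(shape) < density_map
    return (dots * intensity_map).astype(np.uint8)
```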
Virtual Reality Continuum and Its Applications in Industry | 2011
Peng Cai; Dehui Kong; Shaofan Wang; Baocai Yin
Representation and rendering techniques for point models are important in computer graphics. The main drawbacks of current ray tracing methods for point models are coarse rendering near silhouettes and sharp features, and high computational cost. We propose a ray tracing method for point models based on a height difference function, defined by blending the projective distances between the ray point and the corresponding splats. Our method achieves high rendering quality, particularly near silhouettes and sharp features, because the height difference function precisely approximates geometric features in the joint parts of splats. Moreover, the intersection computation of our method is viewpoint-independent. Experiments show that our method achieves good rendering quality and a high convergence rate.
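As a baseline for the per-splat intersection step, a ray-disc (splat) intersection can be sketched as follows. The paper's height-difference blending refines such per-splat hits near splat boundaries; this sketch does not implement that blending, and the function name is illustrative.

```python
import numpy as np

def ray_splat_hit(origin, direction, center, normal, radius):
    """Intersect a ray with a circular splat (an oriented disc).

    Returns the ray parameter t of the hit point, or None if the ray is
    parallel to the splat plane, hits behind the origin, or misses the disc.
    """
    direction = direction / np.linalg.norm(direction)
    denom = direction @ normal
    if abs(denom) < 1e-9:          # ray parallel to the splat plane
        return None
    t = ((center - origin) @ normal) / denom
    if t < 0:                      # intersection behind the ray origin
        return None
    hit = origin + t * direction
    return t if np.linalg.norm(hit - center) <= radius else None
```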
Multimedia Tools and Applications | 2018
Bin Sun; Dehui Kong; Shaofan Wang; Lichun Wang; Yuping Wang; Baocai Yin
ISPRS Journal of Photogrammetry and Remote Sensing | 2018
Yong Zhang; Bowei Shen; Shaofan Wang; Dehui Kong; Baocai Yin