Mandun Zhang
Chinese Academy of Sciences
Publications
Featured research published by Mandun Zhang.
Computer Graphics, Imaging and Visualization | 2005
Peng Lu; Mandun Zhang; Xinshan Zhu
Head gestures such as nodding and shaking are often used as part of human body language in communication, and their recognition plays an important role in advancing human-computer interaction. Because a head gesture is a continuous motion over a sequential time series, the key problems in recognition are tracking the head across multiple views and understanding head-pose transitions. This paper presents a novel approach to recognizing nods and shakes in an interactive computing environment. First, head poses are detected with a multi-view model (MVM); then a hidden Markov model (HMM) is used as the statistical inference model for gesture recognition. Experimental results show that the approach is effective and runs in real time.
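The paper's MVM detector and trained HMMs are not reproduced here; as a rough sketch of the inference step only, the code below scores a sequence of discrete head-pose labels against two hand-set discrete HMMs (one biased toward up/down poses for "nod", one toward left/right poses for "shake") with the standard forward algorithm and picks the more likely model. The pose alphabet, transition matrices, and probabilities are illustrative assumptions, not values from the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm carried out in log space)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # log-sum-exp over previous states for each current state
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

# Assumed pose symbols: 0 = frontal, 1 = up, 2 = down, 3 = left, 4 = right
pi = np.array([0.8, 0.1, 0.1])
A = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6]])
B_nod = np.array([[0.70, 0.12, 0.12, 0.03, 0.03],   # nod model favours up/down poses
                  [0.10, 0.72, 0.12, 0.03, 0.03],
                  [0.10, 0.12, 0.72, 0.03, 0.03]])
B_shake = np.array([[0.70, 0.03, 0.03, 0.12, 0.12],  # shake model favours left/right poses
                    [0.10, 0.03, 0.03, 0.72, 0.12],
                    [0.10, 0.03, 0.03, 0.12, 0.72]])

def classify(pose_sequence):
    ll_nod = forward_log_likelihood(pose_sequence, pi, A, B_nod)
    ll_shake = forward_log_likelihood(pose_sequence, pi, A, B_shake)
    return "nod" if ll_nod > ll_shake else "shake"

print(classify([0, 1, 0, 2, 0, 1, 0, 2]))  # up/down pattern  -> "nod"
print(classify([0, 3, 0, 4, 0, 3, 0, 4]))  # left/right pattern -> "shake"
```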
International Journal of Wavelets, Multiresolution and Information Processing | 2007
Yufeng Chen; Mandun Zhang; Peng Lu
A novel statistical approach involving differential shape is proposed to analyze contour segments. First, a moment-based algorithm is introduced to represent a differential contour segment efficiently. Then, a curvature mean-shift method is adopted to search for salient features. An optimization function is also developed to segment a contour into parts based on its structural properties. Compared with other methods such as CSS (Curvature Scale Space) and shock graphs, our method is more powerful for shape contour analysis, especially for incomplete or occluded contours. Experiments show that our method can track salient parts in real time and judge basic shape properties such as symmetry.
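As a rough illustration of the curvature-based saliency idea (not the paper's exact formulation), the sketch below estimates discrete curvature along a 2-D contour with finite differences and runs a simple 1-D mean-shift over the contour index, weighted by curvature magnitude, so the estimate drifts toward a nearby high-curvature (salient) location. The contour, bandwidth, and weighting scheme are illustrative assumptions.

```python
import numpy as np

def discrete_curvature(pts):
    """Approximate curvature at each contour point via central differences:
    k = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5."""
    x, y = pts[:, 0], pts[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = (dx * dx + dy * dy) ** 1.5 + 1e-12
    return (dx * ddy - dy * ddx) / denom

def curvature_mean_shift(pts, start_idx, bandwidth=10.0, iters=20):
    """1-D mean-shift over the contour index, weighted by |curvature|."""
    kappa = np.abs(discrete_curvature(pts))
    idx = np.arange(len(pts), dtype=float)
    pos = float(start_idx)
    for _ in range(iters):
        w = kappa * np.exp(-0.5 * ((idx - pos) / bandwidth) ** 2)
        pos = float(np.sum(w * idx) / (np.sum(w) + 1e-12))
    return int(round(pos))

# Illustrative contour: an ellipse, whose curvature peaks at the ends of the major axis.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
contour = np.stack([3.0 * np.cos(t), 1.0 * np.sin(t)], axis=1)

salient = curvature_mean_shift(contour, start_idx=20)
print("salient point index:", salient, "point:", contour[salient])
```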
Computer Graphics, Imaging and Visualization | 2004
Mandun Zhang; Linna Ma; Xiangyong Zeng
We present an image-based 3D face modeling algorithm. Unlike traditional, complex stereo vision procedures, our method needs only two orthogonal images for fast 3D modeling without any camera calibration. The proposed method has two steps. First, following the MPEG-4 specification for 3D face structure, we assign feature points in the input images corresponding to the generic model and deform the model with radial basis functions (RBF). Then texture mapping is carried out with respect to the different directional projections. The experiments demonstrate that the new algorithm can photo-realistically render a 3D face with very limited computation.
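The RBF deformation step is standard scattered-data interpolation: solve for weights that carry the generic model's feature points onto the measured feature points, then apply the interpolant to every mesh vertex. The sketch below uses a Gaussian kernel and made-up feature points; the kernel choice and data are assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(r, sigma=50.0):
    """Gaussian radial basis function (kernel choice is an assumption)."""
    return np.exp(-(r / sigma) ** 2)

def fit_rbf_deformation(src_feat, dst_feat, sigma=50.0, reg=1e-8):
    """Solve Phi @ W = (dst - src) for the RBF weights (one column per axis)."""
    d = np.linalg.norm(src_feat[:, None, :] - src_feat[None, :, :], axis=2)
    phi = rbf_kernel(d, sigma) + reg * np.eye(len(src_feat))
    return np.linalg.solve(phi, dst_feat - src_feat)

def apply_rbf_deformation(vertices, src_feat, weights, sigma=50.0):
    """Displace every mesh vertex by the interpolated feature displacement."""
    d = np.linalg.norm(vertices[:, None, :] - src_feat[None, :, :], axis=2)
    return vertices + rbf_kernel(d, sigma) @ weights

# Illustrative data: feature points on a generic head and their measured targets.
generic_features = np.array([[0.0, 50.0, 80.0], [30.0, 40.0, 60.0], [-30.0, 40.0, 60.0],
                             [0.0, 10.0, 90.0], [0.0, -30.0, 70.0]])
measured_features = generic_features + np.array([[0, 5, 2], [4, 2, 0], [-4, 2, 0],
                                                 [0, 0, 6], [0, -4, 1]], dtype=float)

W = fit_rbf_deformation(generic_features, measured_features)
generic_vertices = np.random.default_rng(0).uniform(-60, 90, size=(500, 3))  # stand-in mesh
deformed = apply_rbf_deformation(generic_vertices, generic_features, W)
print(deformed.shape)  # (500, 3); the feature points themselves map exactly onto their targets
```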
Computer Graphics, Imaging and Visualization | 2005
Mandun Zhang; Xiangyong Zeng; Peng Lu
In this paper, we present a 3D head model retrieval algorithm based on geometric measurement for modeling. The proposed method has two main steps. First, we create a dataset of generic models and extract their feature points, which are used to define horizontal and vertical facial proportions. After the individual frontal facial feature points are extracted, the most similar generic model is chosen according to a similarity criterion. Second, these feature points are used to deform the chosen generic model, with corresponding feature points, using radial basis functions (RBF), and texture mapping makes the result more realistic. The experimental results demonstrate that the new algorithm can retrieve the most similar model, reducing deformation error, and photo-realistically render a 3D face.
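The retrieval step amounts to comparing proportion vectors. The sketch below computes a few horizontal/vertical proportion features from frontal 2-D feature points and picks the nearest generic model by Euclidean distance; the landmark layout and the similarity criterion are invented assumptions, since the abstract does not spell them out.

```python
import numpy as np

def proportion_vector(pts):
    """Horizontal/vertical proportions from frontal feature points.
    Assumed landmark layout: 0/1 eye centers, 2 nose tip, 3 mouth center, 4 chin."""
    eye_mid = 0.5 * (pts[0] + pts[1])
    eye_dist = np.linalg.norm(pts[0] - pts[1])
    face_h = np.linalg.norm(pts[4] - eye_mid)
    nose_ratio = np.linalg.norm(pts[2] - eye_mid) / face_h
    mouth_ratio = np.linalg.norm(pts[3] - pts[2]) / face_h
    return np.array([eye_dist / face_h, nose_ratio, mouth_ratio])

def retrieve_most_similar(query_pts, generic_models):
    """Pick the generic model whose proportion vector is closest to the query's."""
    q = proportion_vector(query_pts)
    dists = [np.linalg.norm(q - proportion_vector(m)) for m in generic_models]
    return int(np.argmin(dists))

# Illustrative frontal feature points (x, y) for a query face and three generic models
# whose vertical proportions differ.
query = np.array([[-30.0, 0.0], [30.0, 0.0], [0.0, -35.0], [0.0, -60.0], [0.0, -95.0]])
models = [query * np.array([1.0, s]) for s in (0.8, 1.02, 1.3)]
print("best generic model:", retrieve_most_similar(query, models))  # expected: 1
```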
Computer Graphics, Imaging and Visualization | 2005
Yufeng Chen; Mandun Zhang; Peng Lu
A new local moment invariant analysis (LMIA) method for images is proposed in this paper. Considering background contrast, lighting conditions, and camera parameters, the proposed method achieves invariance under the main imaging effects based on a model of the feature-image transformation. Unlike other kinds of moments, our method is not designed only for geometric properties, and it is more descriptive than some simple feature detectors. An efficient local moment computation algorithm is also put forward to handle the large amount of computation, so that the image can be searched almost in real time. Experimental results show that the proposed approach is effective and suitable for local feature detection.
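The paper's LMIA formulation is not reproduced here; as a generic illustration of moment-based local description, the sketch below computes normalized central moments of an image patch and the first two classical Hu invariants, which are unchanged under translation, scaling, and rotation of the patch. The test patch is an arbitrary assumption.

```python
import numpy as np

def central_moment(patch, p, q):
    """Central moment mu_pq of a grayscale patch."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = patch.sum()
    xc, yc = (x * patch).sum() / m00, (y * patch).sum() / m00
    return (((x - xc) ** p) * ((y - yc) ** q) * patch).sum()

def hu_first_two(patch):
    """First two Hu moment invariants from normalized central moments."""
    mu00 = central_moment(patch, 0, 0)
    def eta(p, q):
        return central_moment(patch, p, q) / mu00 ** (1 + (p + q) / 2.0)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    phi1 = e20 + e02
    phi2 = (e20 - e02) ** 2 + 4.0 * e11 ** 2
    return phi1, phi2

# Illustrative local patch: a bright blob on a dark background.
yy, xx = np.mgrid[0:32, 0:32]
patch = np.exp(-((xx - 16) ** 2 + (yy - 12) ** 2) / 40.0)
print(hu_first_two(patch))
print(hu_first_two(np.rot90(patch)))  # a 90-degree rotation leaves phi1, phi2 unchanged
```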
International Conference on Image and Graphics | 2007
Mandun Zhang; Zhi Li; Hongtao Wu; Ming Yu
In this paper, an image-based fast 3D facial modeling algorithm is presented. Unlike traditional, complex stereo vision procedures, our method needs only one frontal image for fast 3D modeling without any camera calibration. To extract frontal feature points effectively, we propose an improved Active Shape Models (ASM) method. We then calculate the lateral parameters and estimate the depth of the facial feature points. Following the MPEG-4 specification for 3D face structure, we assign feature points corresponding to the generic model and deform them with radial basis functions (RBF). Texture mapping is carried out with respect to the different directional projections to generate the individual face.
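The improved ASM is the paper's contribution and is not reproduced here; the sketch below only illustrates the standard ASM shape constraint (an assumption about the baseline method, not the paper's variant): an observed set of landmarks is projected onto a PCA shape basis and the coefficients are clamped to roughly three standard deviations before reconstruction, which keeps the detected shape face-like.

```python
import numpy as np

def constrain_to_shape_model(observed, mean_shape, eigvecs, eigvals, limit=3.0):
    """Project observed landmarks (flattened x,y vector) onto the PCA shape basis,
    clamp each coefficient to +/- limit * sqrt(eigenvalue), and reconstruct."""
    b = eigvecs.T @ (observed - mean_shape)
    b = np.clip(b, -limit * np.sqrt(eigvals), limit * np.sqrt(eigvals))
    return mean_shape + eigvecs @ b

# Illustrative shape model built from random stand-in training shapes (68 landmarks each).
rng = np.random.default_rng(1)
train = rng.normal(size=(50, 2 * 68)) * 0.5 + rng.normal(size=2 * 68)
mean_shape = train.mean(axis=0)
cov = np.cov(train - mean_shape, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
keep = np.argsort(eigvals)[::-1][:10]            # keep the 10 largest shape modes
eigvals, eigvecs = eigvals[keep], eigvecs[:, keep]

noisy_observation = mean_shape + rng.normal(size=2 * 68) * 2.0
plausible = constrain_to_shape_model(noisy_observation, mean_shape, eigvecs, eigvals)
print(plausible.shape)  # (136,) constrained landmark vector
```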
International Conference on Intelligent Networks and Intelligent Systems | 2009
Mandun Zhang; Na Lu; Ming Yu; Xuefeng Zhou
A new writer identification method based on a feature subspace is proposed in this paper. Current writer identification algorithms generally assume that the more features are extracted, the better the classification result; however, this leads to excessive computational cost in the identification stage. To address this, after obtaining the high-dimensional features we first select the most useful features to compose a subspace, and then carry out identification in that subspace. Compared with the classical method, the new approach not only achieves better identification results but also greatly reduces the computation time of the identification process.
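The exact subspace construction is not detailed in the abstract; as a rough sketch of the general idea, the code below ranks features by a Fisher-style between/within-class variance ratio, keeps the top-k to form the subspace, and classifies by nearest class mean. The scoring rule, the value of k, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fisher_scores(X, y):
    """Between-class over within-class variance ratio, computed per feature."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def nearest_class_mean(X_train, y_train, X_test, feat_idx):
    """Classify in the selected feature subspace by nearest class mean."""
    Xs, Xt = X_train[:, feat_idx], X_test[:, feat_idx]
    classes = np.unique(y_train)
    means = np.stack([Xs[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Xt[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Illustrative data: 3 writers, 200-dimensional handwriting features, only a few informative.
rng = np.random.default_rng(2)
y = np.repeat([0, 1, 2], 30)
X = rng.normal(size=(90, 200))
X[:, :5] += y[:, None] * 2.0                        # first 5 features carry writer identity

top_k = np.argsort(fisher_scores(X, y))[::-1][:10]  # feature subspace of size 10
pred = nearest_class_mean(X, y, X, top_k)
print("training accuracy in subspace:", (pred == y).mean())
```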
International Conference on Entertainment Computing | 2005
Yufeng Chen; Mandun Zhang; Peng Lu; Xiangyong Zeng
A novel stereo vision tracking method is proposed to implement an interactive human-computer interface (HCI). First, a feature detection method is introduced to obtain the location and orientation of a feature accurately and efficiently. Second, a search method is applied that uses probabilities in the time, frequency, or color space to optimize the search strategy. The 3D information is then recovered through calibration and triangulation. Compared with other methods, up to five degrees of freedom (DOFs) can be recovered from a single feature, including its 3D coordinates and orientation. Experiments show that the method is efficient and robust as a real-time game interface.
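The calibration and feature detection stages are not reproduced; the sketch below shows only the triangulation step, recovering a 3-D point from its projections in two calibrated views with the standard linear (DLT) method. The projection matrices and image points are made-up assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Illustrative calibrated rig: identical intrinsics, second camera shifted along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

X_true = np.array([2.0, 1.0, 50.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # recovers approximately [2, 1, 50]
```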
International Conference on Entertainment Computing | 2005
Xiangyong Zeng; Jian Yao; Mandun Zhang
We have developed a fast generation system for personalized 3D face models and plan to apply it in networked 3D games. The system uses one video camera to capture the player's frontal face image for 3D modeling and does not need calibration or extensive manual tuning. The 3D face model in the game is represented by a 3D geometry mesh and a 2D texture image. The personalized geometry mesh is obtained by deforming an original mesh with the relative positions of the player's facial features, which are automatically detected from the frontal image; the texture image is obtained from the same image. To save storage space and network bandwidth, only the feature data and texture data from each player are sent to the game server and then to the other clients. Finally, players can see their own faces in multiplayer games.
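As a minimal illustration of sending only the feature data and texture data, the sketch below packs the detected feature points and a compressed texture into one binary message and unpacks it again. The message layout, the zlib compression, and the field sizes are entirely assumptions; the abstract does not describe the actual game protocol.

```python
import struct
import zlib

import numpy as np

HEADER_FMT = "<IHHHI"  # player id, point count, texture height, width, compressed size

def pack_face_message(player_id, feature_pts, texture_rgb):
    """Pack (player id, feature points, zlib-compressed RGB texture) into one message."""
    pts = np.asarray(feature_pts, dtype=np.float32)
    tex = np.ascontiguousarray(texture_rgb, dtype=np.uint8)
    tex_blob = zlib.compress(tex.tobytes())
    header = struct.pack(HEADER_FMT, player_id, pts.shape[0],
                         tex.shape[0], tex.shape[1], len(tex_blob))
    return header + pts.tobytes() + tex_blob

def unpack_face_message(msg):
    """Inverse of pack_face_message."""
    player_id, n_pts, h, w, _ = struct.unpack_from(HEADER_FMT, msg, 0)
    off = struct.calcsize(HEADER_FMT)
    pts = np.frombuffer(msg, dtype=np.float32, count=2 * n_pts, offset=off).reshape(n_pts, 2)
    tex = np.frombuffer(zlib.decompress(msg[off + 8 * n_pts:]), dtype=np.uint8).reshape(h, w, 3)
    return player_id, pts, tex

# Illustrative payload: 85 feature points and a small 64x64 texture.
pts = np.random.default_rng(3).uniform(0, 256, size=(85, 2)).astype(np.float32)
tex = np.zeros((64, 64, 3), dtype=np.uint8)
msg = pack_face_message(7, pts, tex)
pid, pts2, tex2 = unpack_face_message(msg)
print(pid, pts2.shape, tex2.shape, len(msg), "bytes")
```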
Computer Graphics, Imaging and Visualization | 2005
Jian Yao; Mandun Zhang; Xiangyong Zeng
We present a fast approach to generating an individual face model and producing the corresponding facial animation. First, a frontal face image is captured with an ordinary video camera, and 85 key points are automatically detected. These points drive an RBF network that deforms a generic model into the individual model. The texture is generated from the same image and the texture coordinates are interpolated. Finally, we transfer the original vertex motion vectors to the individual model in real time. The whole procedure can be carried out with minimal or even no manual tuning. The results show that our method is fast enough for common applications.
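The animation transfer step, as described, reuses the generic model's per-vertex motion vectors on the individual model. A minimal sketch, assuming a one-to-one vertex correspondence between the two meshes and an optional per-vertex rescaling of the motion (the scaling rule is an assumption, not the paper's):

```python
import numpy as np

def transfer_motion(individual_verts, generic_verts, generic_motion, scale_motion=True):
    """Apply the generic model's per-vertex motion vectors to the individual model.
    Assumes one-to-one vertex correspondence; optionally rescales each motion vector
    by the ratio of the two meshes' local extents (a simple assumed heuristic)."""
    motion = np.array(generic_motion, dtype=float)
    if scale_motion:
        g_size = np.linalg.norm(generic_verts - generic_verts.mean(axis=0), axis=1) + 1e-9
        i_size = np.linalg.norm(individual_verts - individual_verts.mean(axis=0), axis=1)
        motion = motion * (i_size / g_size)[:, None]
    return individual_verts + motion

# Illustrative animation frame: a small mouth-open displacement on a stand-in mesh.
rng = np.random.default_rng(4)
generic = rng.uniform(-1, 1, size=(300, 3))
individual = generic * np.array([1.1, 0.95, 1.05])       # stand-in for the RBF-deformed mesh
motion = np.zeros_like(generic)
motion[generic[:, 1] < -0.5, 1] = -0.05                  # lower-face vertices move down
animated = transfer_motion(individual, generic, motion)
print(animated.shape)  # (300, 3) animated vertex positions
```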