Xiangyong Zeng
Chinese Academy of Sciences
Publications
Featured research published by Xiangyong Zeng.
international conference on computer vision | 2005
Peng Lu; Yufeng Chen; Xiangyong Zeng
The appeal of computer games may be enhanced by vision-based user input. The high-speed and low-cost requirements of near-term, mass-market game applications make system design challenging. In this paper we propose a vision-based control method for a 3D racing car game, which analyzes the positions of the player's two fists in the video stream from a camera to derive steering commands for the racing car.

This paper focuses in particular on a robust, real-time Bayesian network (BN) based multi-cue fusion method for fist tracking. First, a new strategy, which employs recent work in face recognition, is used to build an accurate color model of the fist automatically. Second, the color cue and the motion cue are used to generate candidate fist positions. Then the posterior probability of each candidate position is evaluated by the BN, which fuses the color cue and the appearance cue. Finally, the fist position is approximated by the hypothesis that maximizes the posterior probability. Based on the proposed control system, a racing car game, "Simulation Drive", has been developed by our group, through which the player can obtain an entirely new experience.
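The final step of the abstract, picking the hypothesis that maximizes the posterior after fusing two cues, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the color and appearance cues are conditionally independent given the position, and the candidate positions and cue scores are hypothetical toy values.

```python
import numpy as np

def map_fist_position(candidates, color_likelihood, appearance_likelihood, prior=None):
    """Pick the candidate that maximizes the posterior
    p(pos | color, appearance) ∝ p(color | pos) * p(appearance | pos) * p(pos),
    assuming the two cues are conditionally independent given the position."""
    n = len(candidates)
    prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior, float)
    color = np.array([color_likelihood(c) for c in candidates])
    appear = np.array([appearance_likelihood(c) for c in candidates])
    posterior = color * appear * prior
    posterior /= posterior.sum()          # normalize so the scores sum to 1
    return candidates[int(np.argmax(posterior))], posterior

# Toy usage: three candidate (x, y) fist positions with hand-set cue scores.
cands = [(10, 20), (50, 60), (90, 40)]
best, post = map_fist_position(
    cands,
    color_likelihood=lambda c: {10: 0.2, 50: 0.7, 90: 0.1}[c[0]],
    appearance_likelihood=lambda c: {10: 0.3, 50: 0.6, 90: 0.1}[c[0]],
)
# best is the middle candidate, where both cues agree
```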
computer graphics, imaging and visualization | 2004
Mandun Zhang; Linna Ma; Xiangyong Zeng
We present an image-based 3D face modeling algorithm. Unlike traditional, complex stereo vision procedures, our method needs only two orthogonal images for fast 3D modeling, without any camera calibration. The proposed method has two steps. First, following the MPEG-4 specification for 3D face structure, we locate feature points in the input images and deform the corresponding feature points of a generic model using radial basis functions (RBFs). Then texture mapping is carried out with respect to the different directional projections. The experiments demonstrate that our algorithm can photo-realistically render a 3D face with very limited computation.
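The RBF deformation step described above can be sketched in a few lines. This is a simplified illustration under assumed choices, not the paper's code: it uses a Gaussian kernel with a hand-picked width, and the feature points and vertices are toy values.

```python
import numpy as np

def rbf_deform(vertices, src_feats, dst_feats, sigma=0.5):
    """Deform mesh vertices so the source feature points move to the target
    positions. Gaussian RBF interpolation of the displacement field:
    d(x) = sum_i w_i * exp(-||x - s_i||^2 / sigma^2), with weights solved
    so the displacement is exact at every feature point."""
    S = np.asarray(src_feats, float)           # (k, 3) features on the generic model
    D = np.asarray(dst_feats, float) - S       # (k, 3) required displacements
    def phi(a, b):                             # RBF kernel matrix between point sets
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / sigma ** 2)
    W = np.linalg.solve(phi(S, S), D)          # (k, 3) per-feature weights
    V = np.asarray(vertices, float)
    return V + phi(V, S) @ W                   # displaced vertices

# Toy usage: pull one feature sideways; a vertex at that feature moves exactly,
# nearby vertices follow smoothly.
src = [[0, 0, 0], [1, 0, 0]]
dst = [[0, 0.5, 0], [1, 0, 0]]
moved = rbf_deform([[0, 0, 0], [0.5, 0, 0]], src, dst)
```

Because the weights are solved exactly, a vertex sitting on a feature point lands exactly on that feature's target position.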
international conference on image and graphics | 2004
Peng Lu; Xiangyong Zeng; Xiangsheng Huang
Keyboards, mice, and joysticks are the most popular control and navigation devices in current 3D games. However, they are quite unnatural. In this paper, we propose a novel scheme to estimate the user's head pose and use the estimate for in-game navigation. The scheme is based on a Markov model fusing appearance, color, and motion information. The experimental results demonstrate that the proposed approach, applied to 3D game control, is real-time and robust, and gives players a more immersive experience.
computer graphics, imaging and visualization | 2005
Mandun Zhang; Xiangyong Zeng; Peng Lu
In this paper, we present a 3D head model retrieval algorithm based on geometric measurements for modeling. The proposed method has two main steps. First, we build a dataset of generic models and extract their feature points, which are used to define horizontal and vertical facial proportions. After extracting the individual's frontal facial feature points, the most similar generic model is chosen according to a similarity criterion. Second, these feature points are used to deform the chosen generic model, matching corresponding feature points via radial basis functions (RBFs), and texture mapping makes the result more realistic. The experimental results demonstrate that our algorithm retrieves the most similar model, reducing deformation error, and photo-realistically renders the 3D face.
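The retrieval step, choosing the generic model whose facial proportions best match the query face, can be sketched as a nearest-neighbor search over proportion vectors. The abstract does not specify the similarity criterion, so Euclidean distance and the particular proportion features here are illustrative assumptions.

```python
import numpy as np

def most_similar_model(query_props, model_props):
    """Return the index of the generic model whose horizontal/vertical
    proportion vector is closest (Euclidean distance, assumed here)
    to the query face's proportions."""
    q = np.asarray(query_props, float)
    dists = [np.linalg.norm(q - np.asarray(p, float)) for p in model_props]
    return int(np.argmin(dists))

# Toy usage: hypothetical proportions, e.g. (eye spacing / face width,
# nose-to-mouth distance / face height) for three generic models.
dataset = [[0.40, 0.30], [0.46, 0.33], [0.52, 0.28]]
idx = most_similar_model([0.45, 0.32], dataset)   # nearest is the second model
```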
international conference on entertainment computing | 2005
Yufeng Chen; Mandun Zhang; Peng Lu; Xiangyong Zeng
A novel stereo vision tracking method is proposed to implement an interactive human-computer interface (HCI). First, a feature detection method is introduced to obtain the location and orientation of the feature accurately and efficiently. Second, a search method is applied that uses probabilities in the time, frequency, or color space to optimize the search strategy. The 3D information is then recovered through calibration and triangulation. Up to five degrees of freedom (DOFs) can be recovered from a single feature, more than other methods achieve: the coordinates in 3D space plus the orientation information. Experiments show that the method is efficient and robust for a real-time game interface.
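The calibration-and-triangulation step mentioned above is commonly implemented with linear (DLT) triangulation from two calibrated views. The sketch below is a generic textbook version under assumed camera matrices, not the paper's specific pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 (pixel coordinates) in two views with 3x4 projection matrices
    P1, P2. Each image point contributes two linear constraints; the 3D
    point is the null vector of the stacked system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # homogeneous solution (smallest singular value)
    return X[:3] / X[3]            # dehomogenize

# Toy usage: two axis-aligned cameras observing the point (1, 2, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])  # shifted 1 unit in x
X_true = np.array([1.0, 2.0, 5.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)
```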
international conference on entertainment computing | 2005
Xiangyong Zeng; Jian Yao; Mandun Zhang
We have developed a fast generation system for personalized 3D face models and plan to apply it to networked 3D games. The system uses one video camera to capture the player's frontal face image for 3D modeling and does not need calibration or extensive manual tuning. The 3D face model in the game is represented by a 3D geometry mesh and a 2D texture image. The personalized geometry mesh is obtained by deforming an original mesh according to the relative positions of the player's facial features, which are automatically detected from the frontal image. The corresponding texture image is obtained from the same image. To save storage space and network bandwidth, only each player's feature data and texture data are sent to the game server and then to the other clients. As a result, players can see their own faces in multiplayer games.
computer graphics, imaging and visualization | 2005
Jian Yao; Mandun Zhang; Xiangyong Zeng
We present a fast approach to generating an individual face model and producing the corresponding facial animation. First, a frontal face image is captured with an ordinary video camera, and 85 key points are automatically detected. The points are then used by an RBF network to deform a generic model into an individual model. The texture is generated from the same image, and texture coordinates are interpolated. Finally, we transfer the original vertex motion vectors to the individual model in real time. The whole procedure can be carried out with minimal or even no manual tuning. The results show that our method is fast enough for common applications.
Archive | 2006
Mandun Zhang; Xiangsheng Huang; Xiangyong Zeng
Archive | 2012
Jian Yao; Xiangyong Zeng; Zhijun Du
Archive | 2008
Peng Lu; Xiangyong Zeng; Yufeng Chen; Shuchang Wang