Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jiejie Zhu is active.

Publication


Featured research published by Jiejie Zhu.


Computer Vision and Pattern Recognition | 2008

Fusion of time-of-flight depth and stereo for high accuracy depth maps

Jiejie Zhu; Liang Wang; Ruigang Yang; James Davis

Time-of-flight range sensors have error characteristics which are complementary to passive stereo. They provide real time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy and often perform poorly on the textured scenes for which stereo excels. We introduce a method for combining the results from both methods that performs better than either alone. A depth probability distribution function from each method is calculated and then merged. In addition, stereo methods have long used global methods such as belief propagation and graph cuts to improve results, and we apply these methods to this sensor. Since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated. We introduce a method that substantially improves upon the manufacturer's calibration. We show that these techniques lead to improved accuracy and robustness.
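
As a rough illustration of the fusion idea described in this abstract (not the authors' implementation), the sketch below merges per-pixel depth distributions from a ToF sensor and a stereo matcher over a shared set of depth hypotheses. The Gaussian ToF noise model and the cost-to-probability mapping are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code): fuse per-pixel depth likelihoods
# from a time-of-flight sensor and passive stereo by multiplying probability
# distributions over a set of depth hypotheses, then taking the MAP depth.
import numpy as np

def fuse_depth(tof_depth, tof_sigma, stereo_cost, depth_hypotheses):
    """tof_depth: (H, W) ToF measurement per pixel.
    tof_sigma: assumed ToF noise standard deviation (scalar).
    stereo_cost: (H, W, D) stereo matching cost per depth hypothesis.
    depth_hypotheses: (D,) candidate depth values."""
    # ToF likelihood: Gaussian around the measured depth (modeling assumption).
    d = depth_hypotheses[None, None, :] - tof_depth[..., None]
    p_tof = np.exp(-0.5 * (d / tof_sigma) ** 2)

    # Stereo likelihood: lower matching cost maps to higher probability.
    p_stereo = np.exp(-stereo_cost)

    # Merge the two distributions and normalise per pixel.
    p = p_tof * p_stereo
    p /= p.sum(axis=-1, keepdims=True) + 1e-12

    # MAP estimate over the hypothesis set.
    return depth_hypotheses[np.argmax(p, axis=-1)]
```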


Time-of-Flight and Depth Imaging | 2013

A Survey on Human Motion Analysis from Depth Data

Mao Ye; Qing Zhang; Liang Wang; Jiejie Zhu; Ruigang Yang; Juergen Gall

Human pose estimation has been actively studied for decades. While traditional approaches rely on 2D data such as images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis, including depth-based and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image-based methods to provide a broad overview of recent developments in these areas.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Reliability Fusion of Time-of-Flight Depth and Stereo Geometry for High Quality Depth Maps

Jiejie Zhu; Liang Wang; Ruigang Yang; James Davis; Zhigeng Pan

Time-of-flight range sensors have error characteristics that are complementary to passive stereo. They provide real-time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy and often perform poorly on the textured scenes where stereo excels. We explore their complementary characteristics and introduce a method for combining the results from both methods that achieves better accuracy than either alone. In our fusion framework, the depth probability distribution functions from each of these sensor modalities are formulated and optimized. Robust and adaptive fusion is built on a pixel-wise reliability weighting function calculated for each method. In addition, since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated. We introduce a method that substantially improves upon the manufacturer's calibration. We demonstrate on an extensive set of experiments that our proposed techniques lead to improved accuracy and robustness.
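
The reliability-weighted fusion can be sketched in the same spirit; the exponent-style weighting and the gradient-based stereo reliability heuristic below are assumed forms for illustration, not the paper's exact functions.

```python
# Illustrative sketch (assumed form): each sensor's per-pixel depth
# distribution is exponentiated by a pixel-wise reliability weight before
# the distributions are combined and renormalised.
import numpy as np

def weighted_fusion(p_tof, p_stereo, w_tof, w_stereo):
    """p_tof, p_stereo: (H, W, D) per-pixel depth distributions.
    w_tof, w_stereo: (H, W) reliability weights in [0, 1]."""
    p = (p_tof ** w_tof[..., None]) * (p_stereo ** w_stereo[..., None])
    return p / (p.sum(axis=-1, keepdims=True) + 1e-12)

def stereo_reliability(image_gradient_mag, scale=0.1):
    # Hypothetical heuristic: stereo is more reliable where the image is
    # textured, so map local gradient magnitude to a weight in (0, 1).
    return 1.0 - np.exp(-image_gradient_mag / scale)
```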


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Spatial-Temporal Fusion for High Accuracy Depth Maps Using Dynamic MRFs

Jiejie Zhu; Liang Wang; Jizhou Gao; Ruigang Yang

Time-of-flight range sensors and passive stereo have complementary characteristics in nature. To fuse them into high-accuracy depth maps that vary over time, we extend traditional spatial MRFs to dynamic MRFs with temporal coherence. This new model allows both the spatial and the temporal relationships to be propagated among local neighbors. By efficiently finding a maximum of the posterior probability using loopy belief propagation, we show that our approach leads to improved accuracy and robustness of depth estimates for dynamic scenes.
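
A minimal sketch of one min-sum loopy belief propagation sweep on a 4-connected grid MRF is shown below. The truncated-linear smoothness term and the wrap-around boundary handling are simplifying assumptions, and the temporal term of the dynamic MRF is only indicated in a comment; this is not the paper's implementation.

```python
# Sketch (assumed formulation): one synchronous min-sum BP sweep on a grid MRF
# whose data term could come from fused ToF/stereo likelihoods. A temporal
# term would add the previous frame's belief to data_cost, which is how a
# dynamic MRF extends the purely spatial model.
import numpy as np

def truncated_linear(labels, lam=1.0, trunc=2.0):
    # Pairwise cost between every pair of depth labels.
    diff = np.abs(labels[:, None] - labels[None, :])
    return lam * np.minimum(diff, trunc)

def bp_sweep(data_cost, pairwise, msgs):
    """data_cost: (H, W, D); pairwise: (D, D);
    msgs: dict with keys 'from_left', 'from_right', 'from_up', 'from_down',
    each (H, W, D): the message each pixel currently receives."""
    opposite = {"from_left": "from_right", "from_right": "from_left",
                "from_up": "from_down", "from_down": "from_up"}
    shift = {"from_left": (0, 1), "from_right": (0, -1),
             "from_up": (1, 0), "from_down": (-1, 0)}
    new = {}
    for key in msgs:
        # Belief at the sender, excluding the message coming back from the receiver.
        belief = data_cost + sum(m for k, m in msgs.items() if k != opposite[key])
        # Min-sum update over the sender's labels.
        out = (belief[..., :, None] + pairwise[None, None]).min(axis=2)
        # Ship the message to the receiving pixel (boundaries wrap for brevity).
        new[key] = np.roll(out, shift[key], axis=(0, 1))
    return new

def map_labels(data_cost, msgs):
    # Final labeling: minimise data cost plus all incoming messages.
    return (data_cost + sum(msgs.values())).argmin(axis=-1)
```

Messages would typically be initialized to zero and the sweep repeated until the labeling stabilizes.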


Computer Vision and Pattern Recognition | 2009

Joint depth and alpha matte optimization via fusion of stereo and time-of-flight sensor

Jiejie Zhu; Miao Liao; Ruigang Yang; Zhigeng Pan

We present a new approach to iteratively estimate both a high-quality depth map and an alpha matte from a single image or a video sequence. Scene depth, which is invariant to illumination changes, color similarity and motion ambiguity, provides a natural and robust cue for foreground/background segmentation, a prerequisite for matting. The image mattes, on the other hand, encode rich information near boundaries, where either passive or active sensing performs poorly. We develop a method that combines the complementary nature of scene depth and alpha matte to mutually enhance their qualities. We formulate depth inference as a global optimization problem in which information from passive stereo, an active range sensor and the matte is merged. The depth map is used in turn to enhance the matting. In addition, we extend this approach to video matting by incorporating temporal coherence, which reduces flickering in the composite video. We show that these techniques lead to improved accuracy and robustness for both static and dynamic scenes.
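
The alternation between depth and matte estimation can be sketched schematically as below; the trimap threshold, the smoothed-trimap "matting" step, and the layer-mean depth refinement are trivial stand-ins for the global optimizations the paper describes, shown only to make the loop structure concrete.

```python
# Schematic sketch of the iterative depth/matte refinement loop
# (structure only; the real solvers are replaced by trivial stand-ins).
import numpy as np
from scipy.ndimage import gaussian_filter

def trimap_from_depth(depth, fg_thresh, band=0.05):
    # Foreground where depth is close, background where far, unknown in a band.
    tri = np.full(depth.shape, 0.5)          # unknown
    tri[depth < fg_thresh - band] = 1.0      # foreground
    tri[depth > fg_thresh + band] = 0.0      # background
    return tri

def solve_alpha(trimap):
    # Stand-in matting step: smooth the trimap while keeping known pixels fixed.
    alpha = gaussian_filter(trimap, sigma=2.0)
    known = trimap != 0.5
    alpha[known] = trimap[known]
    return np.clip(alpha, 0.0, 1.0)

def refine_depth(depth, alpha, trimap):
    # Stand-in refinement: re-composite depth in the unknown band using the matte.
    fg_mean = depth[trimap == 1.0].mean()
    bg_mean = depth[trimap == 0.0].mean()
    refined = depth.copy()
    unknown = trimap == 0.5
    refined[unknown] = alpha[unknown] * fg_mean + (1 - alpha[unknown]) * bg_mean
    return refined

def joint_refine(depth, fg_thresh, iters=3):
    alpha = None
    for _ in range(iters):
        trimap = trimap_from_depth(depth, fg_thresh)
        alpha = solve_alpha(trimap)
        depth = refine_depth(depth, alpha, trimap)
    return depth, alpha
```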


Computers & Graphics | 2005

Interactive learning of CG in networked virtual environments

Zhigeng Pan; Jiejie Zhu; Weihua Hu; Hung Pak Lun; Xin Zhou

Nowadays, computer graphics courses are usually taught with traditional teaching methodologies and tools, which have several limitations. In this paper, we present an online interactive computer graphics (CG) tutorial that supports collaborative learning of the concepts and algorithms of computer graphics in a networked virtual environment. The practical part of the tutorial consists of example programs that let users test the theoretical concepts and run their own experiments. The integration of the theoretical and practical parts of the course is realized through a common Web-based interface to the system. The main objectives are to (1) allow users to learn and practice computer graphics algorithms in a virtual environment, and (2) support avatar-based collaborative learning involving several users connected through the Internet. Two versions of the system, in Chinese and English respectively, have been implemented and are described in the paper.


International Conference on Web-Based Learning | 2005

Collaborative virtual learning environment using synthetic characters

Zhigeng Pan; Jiejie Zhu; Mingming Zhang; Weihua Hu

This work not only offers insight into interactive and vivid concept modeling, which makes use of several techniques such as Flash animation, rolling image-based introductions and virtual experiments, but also explores the integration of synthetic characters into a virtual learning environment to better simulate social interaction and social awareness. Through an analysis of constructivist learning theory, we present a new learning strategy with a pedagogical agent. Based on this strategy, we take a CG course as an example and implement a multi-user application for individual and collaborative learning. Experimental results show that the approach is engaging, especially for understanding difficult concepts, and that learning outcomes improve.


International Conference on E-Learning and Games | 2008

Virtual Avatar Enhanced Nonverbal Communication from Mobile Phones to PCs

Jiejie Zhu; Zhigeng Pan; Guilin Xu; Hongwei Yang; David Cheok

Nonverbal communication is a special kind of communication using wordless messages such as gesture, body language, posture, facial expression and eye contact. Such communication is especially attractive in virtual environments (VEs) that incorporate 3D avatars. Many techniques for nonverbal communication in VEs have been studied and reported; however, transferring existing techniques to mobile platforms is seldom reported. In this paper, we introduce our approach for creating a nonverbal communication environment between mobile phones and PCs. 3D face modeling is taken as an example to explain the system architecture. The modeling process is integrated across three platforms. The prior knowledge for modeling uses only one front-view image, which can be captured by a built-in phone camera without a high-quality constraint. The two ends, whether phone-to-phone or phone-to-PC, can download models from the server and share the communication environment. Key techniques such as facial feature detection and face model personalization are presented, and experimental results show that a lifelike face-to-face conversation can be simulated.


Intelligent Virtual Agents | 2003

Exploring an Agent-Driven 3D Learning Environment for Computer Graphics Education

Weihua Hu; Jiejie Zhu; Zhigeng Pan

Nowadays, computer graphics courses are usually taught with traditional teaching methodologies and tools. In our work we propose an agent-driven 3D learning environment for Computer Graphics (CG) education. To provide an efficient method for CG education, we create an interactive virtual learning environment with guiding avatars for 3D CG courses. The main feature of our system is that, within a 3D learning environment and with an agent-driven scheme, students actively learn the CG course in an immersive way. Experiments show that agent-driven 3D learning environments provide an effective way for knowledge acquisition in a wide range of applications.


Computers & Graphics | 2006

Virtual reality and mixed reality for virtual learning environments

Zhigeng Pan; Adrian David Cheok; Hongwei Yang; Jiejie Zhu; Jiaoying Shi

Collaboration


Dive into Jiejie Zhu's collaborations.

Top Co-Authors

Zhigeng Pan (Hangzhou Normal University)

Weihua Hu (Hangzhou Dianzi University)

James Davis (University of California)