Publication


Featured research published by Gengdai Liu.


Expert Systems With Applications | 2012

Constructing the virtual Jing-Hang Grand Canal with onto-draw

Yong Liu; Minming Zhang; Feng Tang; Yunliang Jiang; Zhigeng Pan; Gengdai Liu; Huaqing Shen

Constructing virtual 3D historical scenes from literature and records is a very challenging problem because of the difficulty of incorporating different types of domain knowledge into the modeling system. The domain knowledge comes from different experts, including architects, historians, rendering artists, user interface designers, and computer engineers. In this paper we investigate the problem of automatically generating drawings of ancient scenes using ontologies extracted from these domains. We introduce a framework called onto-draw that generates semantic models of desired scenes by constructing hierarchical ontology concept domains. Inconsistencies among them are resolved via an iterative refinement algorithm. We apply the onto-draw ontology design approach and inconsistency removal technique in the virtual Jing-Hang Grand Canal construction project (Chen et al., 2010) and achieve encouraging results.
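
The inconsistency-resolution idea is easiest to see in miniature. Below is a minimal sketch, assuming a toy representation in which each expert domain contributes an ontology mapping scene concepts to attribute constraints, and conflicts are resolved by a fixed domain-priority order; the domains, concepts, attributes, and priority rule are hypothetical stand-ins, not the paper's actual ontology machinery.

```python
# Toy onto-draw-style merge: each domain ontology maps concept -> {attribute: value};
# a fixed priority order stands in for the paper's iterative refinement.
# All names and values below are hypothetical.
domain_ontologies = {
    "historian": {"bridge": {"era": "Ming", "material": "stone"}},
    "architect": {"bridge": {"material": "wood", "span_m": 18}},
    "artist":    {"bridge": {"texture": "weathered"}},
}

# Higher-priority domains win when two domains constrain the same attribute.
priority = ["historian", "architect", "artist"]

def merge_with_refinement(ontologies, priority):
    """Merge per-domain concept constraints, resolving inconsistencies
    in favour of the higher-priority domain."""
    merged = {}
    for domain in priority:                      # iterate from highest priority
        for concept, attrs in ontologies[domain].items():
            slot = merged.setdefault(concept, {})
            for attr, value in attrs.items():
                # keep the existing (higher-priority) value on conflict
                slot.setdefault(attr, value)
    return merged

print(merge_with_refinement(domain_ontologies, priority))
# -> bridge keeps 'stone' from the historian, gains 'span_m' and 'texture'
```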


Computer Animation and Virtual Worlds | 2011

Human motion generation with multifactor models

Gengdai Liu; Mingliang Xu; Zhigeng Pan; Abdennour El Rhalibi

Generating human motions with specific attributes is a difficult task because of the high dimensionality and complexity of human motion. This paper presents a novel human motion model for generating and editing motions with multiple factors. A set of motions performed by several actors in various styles was captured to construct a well-structured motion database. A MICA (multilinear independent component analysis) model, which combines ICA with a conventional multilinear framework, was then adopted to build the multifactor model. With this model, new motions can be synthesized by interpolation and by solving optimization problems for the specific factors. Our method offers a practical solution for editing stylistic human motions in a parametric space learned with the MICA model. We demonstrate the power of our method by generating and editing sideways stepping, reaching, and striding over obstructions, performed by different actors in various styles. The experimental results show that our method can be used for interactive stylistic motion synthesis and editing.
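
As a rough illustration of synthesis in a learned parametric space, the sketch below uses plain FastICA on flattened synthetic motion data as a stand-in for the full multilinear MICA decomposition described in the paper; all dimensions, data, and the interpolation target are made up.

```python
# Minimal sketch: learn an ICA basis over flattened motion examples, then
# synthesize a new motion by interpolating two examples' independent-component
# coefficients. FastICA here is a stand-in for the paper's multilinear MICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_actors, n_styles, n_feats = 4, 3, 30        # toy dimensions
# one flattened pose-feature vector per (actor, style) pair
motions = rng.normal(size=(n_actors * n_styles, n_feats))

ica = FastICA(n_components=5, random_state=0)
coeffs = ica.fit_transform(motions)           # (12, 5) component coefficients

# interpolate between two captured motions in the learned parametric space
w = 0.5
new_coeff = (1 - w) * coeffs[0] + w * coeffs[1]
new_motion = ica.inverse_transform(new_coeff[None, :])   # back to pose space
print(new_motion.shape)                        # (1, 30)
```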


Computer Graphics Forum | 2010

L4RW: Laziness‐based Realistic Real‐time Responsive Rebalance in Walking

Mingliang Xu; Huansen Li; Pei Lv; Wenzhi Chen; Gengdai Liu; Pengyu Zhu; Zhigeng Pan

We present a novel L4RW (Laziness‐based Realistic Real‐time Responsive Rebalance in Walking) technique to synthesize 4RW animations under unexpected external perturbations with minimal locomotion effort. We first devise a lazy dynamic rebalance model, which specifies the dynamic balance conditions, defines the rebalance effort, and automatically selects a suitable rebalance strategy using the laziness law after an unexpected perturbation. Based on this model, L4RW searches a motion capture (mocap) database for an appropriate motion segment to follow, and the transition-to motion is generated by interpolating the active response dynamic motion. A support vector machine (SVM) based training, classification, and prediction algorithm reduces the search space; it is trained offline only once. Our algorithm classifies the mocap database into rebalance strategy-specific subsets and then, online, predicts responsive motions within the subset matching the selected strategy. The rebalance effort, the ‘extrapolated center of mass’ (XCoM), and environment constraints serve as attributes of the SVM feature vector. Furthermore, each subset's segments are sorted by rebalance effort, and our algorithm searches for an acceptable segment starting from the least-effort one. Compared with previous methods, our search is over two orders of magnitude faster, and our algorithm creates more realistic and smoother 4RW animation.
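
Below is a minimal sketch of the offline-trained SVM subset prediction followed by a least-effort-first search, on synthetic data. The three feature attributes follow the paper's description (effort, XCoM, environment constraint), but all values, strategy labels, and the acceptability test are hypothetical.

```python
# Minimal sketch: classify mocap segments into strategy-specific subsets with
# an SVM trained once offline, then search the predicted subset online,
# starting from the least-effort segment. All data below are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_segments = 200
features = rng.normal(size=(n_segments, 3))      # effort, XCoM, env constraint
strategies = rng.integers(0, 4, size=n_segments) # 4 hypothetical strategies

# offline: train once to map perturbation features to a rebalance strategy
clf = SVC(kernel="rbf").fit(features, strategies)

# online: after a perturbation, predict the strategy, then search its subset
perturbation = rng.normal(size=(1, 3))
strategy = clf.predict(perturbation)[0]
subset = np.flatnonzero(strategies == strategy)
subset_sorted = subset[np.argsort(features[subset, 0])]  # sort by effort
for seg in subset_sorted:                                # least effort first
    if features[seg, 0] > -10:   # stand-in for an acceptability test
        print("chose segment", seg)
        break
```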


Journal of Computer Applications in Technology | 2010

A reactive and protective character motion generation algorithm

Xi Cheng; Gengdai Liu; Zhigeng Pan

By combining motion capture and dynamic simulation, character animation achieves realistic visual effects, and the character can respond to unexpected contact forces. To generate character animation in real time, this paper introduces an artificial neural network that predicts a subset of the return-to motion capture database. In the simulation phase, we propose an effective balance detection method, and our controller can drive characters to take protective actions when they fall to the ground. Compared with other methods, our algorithm runs in real time and can be used in interactive character applications.
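
The subset-prediction step can be sketched with a small classifier. Below, a multilayer perceptron maps simulated pose features to a database-subset label, on synthetic data; the network architecture, features, and labels are hypothetical stand-ins for the paper's setup.

```python
# Minimal sketch: a small neural network predicts which subset of a
# 'return-to' mocap database to search, restricting the expensive lookup.
# Features, labels, and network size are all hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
poses = rng.normal(size=(500, 12))            # simulated post-impact pose features
subsets = rng.integers(0, 5, size=500)        # which database subset to search

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(poses, subsets)

# at runtime: restrict the mocap search to the predicted subset
current_pose = rng.normal(size=(1, 12))
print("search subset", net.predict(current_pose)[0])
```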


Intelligent Virtual Agents | 2009

Motion Synthesis Using Style-Editable Inverse Kinematics

Gengdai Liu; Zhigeng Pan; Ling Li

In this paper, a new low-dimensional motion model that can parameterize human motion style is presented. Based on this model, a human motion synthesis approach using constrained optimization in a low-dimensional space is proposed. We define a new inverse kinematics solver in this low-dimensional space to generate motions that meet user-defined spatial constraints at key frames. Our approach also allows users to edit motion style explicitly by specifying the style parameter. The experimental results demonstrate the effectiveness of this approach, which can be used for interactive motion editing.
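
The sketch below shows the general shape of IK as optimization in a low-dimensional space, assuming a toy linear pose model (pose = mean + B·z) in which one latent component is treated as the user-editable style parameter; the basis, constraint rows, and weights are hypothetical, not the paper's learned model.

```python
# Minimal sketch: solve IK by optimizing low-dimensional coordinates z so the
# decoded pose meets an end-effector constraint while a style dimension is
# pinned to a user-chosen value. The linear model is a hypothetical stand-in.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_pose, n_latent = 60, 6
B = rng.normal(size=(n_pose, n_latent))     # learned low-dim basis (stand-in)
mean_pose = rng.normal(size=n_pose)
end_effector_rows = slice(0, 3)             # pretend rows 0..2 give hand position
target = np.array([0.4, 1.2, 0.3])          # user-specified key-frame constraint
style_value = 0.8                           # explicit style parameter

def objective(z):
    pose = mean_pose + B @ z
    constraint = np.sum((pose[end_effector_rows] - target) ** 2)
    naturalness = 0.01 * np.sum(z ** 2)     # stay close to the model's mean
    style = (z[0] - style_value) ** 2       # pin the style dimension
    return constraint + naturalness + style

result = minimize(objective, np.zeros(n_latent), method="L-BFGS-B")
print("solved latent coords:", np.round(result.x, 3))
```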


Journal of Computer Applications in Technology | 2010

A stylistic human motion editing system based on a subspace motion model

Gengdai Liu; Zhigeng Pan; Xi Cheng

In this paper, a stylistic human motion editing system is presented. The system concentrates on extracting and editing motion styles from motion pairs. A new low-dimensional motion model is introduced, in which motion style is defined quantitatively as a low-dimensional subspace of the motion data and can be obtained easily from a motion pair. The system provides several style editing tools and auxiliary modules based on the proposed motion model. Using these tools and modules, animators can not only translate the style of the original motions but also transfer and add styles between two motions.
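
Style transfer between motion pairs can be sketched as an offset in a learned subspace. Below, PCA stands in for the paper's subspace motion model: the style is the subspace difference between the two motions of a pair, and transfer adds that offset to another motion. All data are synthetic.

```python
# Minimal sketch: extract a style from a motion pair as a subspace offset and
# add it to another motion. PCA is a stand-in for the paper's motion model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
frames = rng.normal(size=(300, 40))          # pooled pose vectors (stand-in)
pca = PCA(n_components=8).fit(frames)

neutral = rng.normal(size=(1, 40))           # neutral motion pose (stand-in)
styled = neutral + 0.5                       # same motion with a style (stand-in)

# style = subspace offset between the two motions of the pair
style = pca.transform(styled) - pca.transform(neutral)

# transfer: add the style offset to a different motion in the subspace
other = rng.normal(size=(1, 40))
edited = pca.inverse_transform(pca.transform(other) + style)
print(edited.shape)                           # (1, 40)
```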


The Visual Computer | 2009

Fragment-based responsive character motion for interactive games

Xi Cheng; Gengdai Liu; Zhigeng Pan; Bing Tang

Fragment-based character animation has become popular in recent years. By stringing appropriate motion capture fragments together, the system drives characters to respond to the user's control signals and generates realistic character motions. In this paper, we propose a novel, straightforward, and fast method to build the control policy table, which selects the next motion fragment to play based on the user's current input and the previous motion fragment. While synthesizing the control policy table, we cluster similar fragments into several fragment classes. Dynamic programming is employed to generate training samples from the user's control signals. Finally, we use a supervised learning routine to create the tabular control policy. We demonstrate the efficacy of our method by comparing the motions generated by our controller to those of the optimal controller and of previous controllers. The results indicate that although a reinforcement learning algorithm known as value iteration also creates a tabular control policy, it is more complex and incurs a higher space-time cost when synthesizing the control policy table. Our approach is simple but efficient, and is practical for interactive character games.
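
The tabular policy itself is a simple lookup structure. The sketch below builds one by majority vote over hypothetical planner-generated samples pairing (previous fragment class, control signal) with the next fragment class; the classes, signals, and samples are invented for illustration and stand in for the paper's dynamic-programming output and supervised learner.

```python
# Minimal sketch: build a tabular control policy from (state, signal) -> next
# fragment-class samples by majority vote, then look it up in O(1) at runtime.
# All classes, signals, and samples are hypothetical.
from collections import Counter, defaultdict

# (prev_class, control_signal) -> next_class, e.g. produced by a DP planner
samples = [
    (("walk", "left"), "turn_left"),
    (("walk", "left"), "turn_left"),
    (("walk", "left"), "walk"),
    (("walk", "forward"), "walk"),
    (("turn_left", "forward"), "walk"),
]

votes = defaultdict(Counter)
for state, next_class in samples:
    votes[state][next_class] += 1

# tabular policy: most common planner choice per (prev_class, signal) pair
policy = {state: counter.most_common(1)[0][0] for state, counter in votes.items()}

print(policy[("walk", "left")])   # -> 'turn_left'
```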


The Visual Computer | 2009

Real time falling animation with active and protective responses

Zhigeng Pan; Xi Cheng; Wenzhi Chen; Gengdai Liu; Bing Tang

By combining motion capture and dynamic simulation, animated characters gain realistic motion detail and can respond to unexpected contact forces. This paper proposes a novel, real-time character motion generation approach that introduces a parallel process and uses an approximate nearest-neighbor search. In addition, we employ a support vector machine (SVM), trained on a set of samples, to predict a subset of our ‘return-to’ motion capture (mocap) database and thereby reduce the search time. In the dynamic simulation process, we focus on designing a biomechanics-based controller that detects the balance of characters in locomotion and drives them to take several active and protective responses when they fall to the ground, in order to reduce injuries to their bodies. Finally, we report the synthesis time costs and the visual results of our approach. The experimental results indicate that our motion generation approach is suitable for interactive games and other real-time applications.
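
The two-stage search can be sketched as follows: an SVM predicts the relevant database subset, and a nearest-neighbor query runs only inside it. The sketch uses exact nearest neighbors for simplicity where the paper uses an approximate method, and all data, labels, and dimensions are synthetic stand-ins.

```python
# Minimal sketch: SVM subset prediction plus nearest-neighbour search over the
# predicted 'return-to' subset. Exact NN stands in for the paper's approximate
# search; all data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
db_poses = rng.normal(size=(1000, 15))        # return-to mocap database (stand-in)
db_subset = rng.integers(0, 6, size=1000)     # offline subset labels

svm = SVC().fit(db_poses, db_subset)          # trained once, offline

# online: predict the subset, then search only inside it
sim_pose = rng.normal(size=(1, 15))           # pose at end of dynamic simulation
subset_ids = np.flatnonzero(db_subset == svm.predict(sim_pose)[0])
nn = NearestNeighbors(n_neighbors=1).fit(db_poses[subset_ids])
_, idx = nn.kneighbors(sim_pose)
print("blend to database clip", subset_ids[idx[0, 0]])
```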


International Conference on Virtual Reality | 2007

Design of water transportation story for Grand Canal museum based on multi-projection screens

Linqiang Chen; Gengdai Liu; Zhigeng Pan; Zhi Li

This paper presents a method for cultural heritage exhibition: a design scheme for a digital storytelling system based on multi-projection screens for the Grand Canal Museum in China. The task of this system is to dynamically exhibit a famous Chinese painting on a large display wall. To display the painting on the wall, it had to be repainted at very high resolution and segmented into many smaller tiles to be projected onto the large wall. Additionally, to make this static painting attractive and engaging, digital storytelling technology is used: we elaborately composed a script called the water transportation story to show the magnificent scenes along the Grand Canal.
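
The tiling step is straightforward image slicing. Below is a minimal sketch of splitting a high-resolution image into a grid of tiles for a multi-projector wall; the projector grid, resolution, and image are illustrative, not the museum installation's actual parameters.

```python
# Minimal sketch: split a high-resolution painting into a grid of tiles, one
# per projector. The 2x4 grid and resolution are hypothetical.
import numpy as np

painting = np.zeros((2160, 7680, 3), dtype=np.uint8)   # long scroll, RGB (stand-in)
rows, cols = 2, 4                                      # 2x4 projector grid
tile_h, tile_w = painting.shape[0] // rows, painting.shape[1] // cols

tiles = [
    painting[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
    for r in range(rows) for c in range(cols)
]
print(len(tiles), tiles[0].shape)   # 8 tiles, each sent to one projector
```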


IEEE Computer Graphics and Applications | 2010

Animations, Games, and Virtual Reality for the Jing-Hang Grand Canal

Wenzhi Chen; Mingmin Zhang; Zhigeng Pan; Gengdai Liu; Huaqing Shen; Shengnan Chen; Yong Liu

Collaboration


Dive into Gengdai Liu's collaborations.

Top Co-Authors

Zhigeng Pan

Hangzhou Normal University
