Liwei Zhao
University of Pennsylvania
Publications
Featured research published by Liwei Zhao.
International Conference on Computer Graphics and Interactive Techniques | 2000
Diane M. Chi; Monica Costa; Liwei Zhao; Norman I. Badler
Human movements include limb gestures and postural attitude. Although many computer animation researchers have studied these classes of movements, procedurally generated movements still lack naturalness. We argue that looking only at the psychological notion of gesture is insufficient to capture the movement qualities needed by animated characters. We advocate that the domain of movement observation science, specifically Laban Movement Analysis (LMA) and its Effort and Shape components, provides us with valuable parameters for the form and execution of qualitative aspects of movements. Inspired by some tenets shared among LMA proponents, we also point out that Effort and Shape phrasing across movements and the engagement of the whole body are essential aspects to be considered in the search for naturalness in procedurally generated gestures. Finally, we present EMOTE (Expressive MOTion Engine), a 3D character animation system that applies Effort and Shape qualities to independently defined underlying movements and thereby generates more natural synthetic gestures.
Conference of the Association for Machine Translation in the Americas | 2000
Liwei Zhao; Karin Kipper; William Schuler; Christian Vogler; Norman I. Badler; Martha Palmer
Research in computational linguistics, computer graphics and autonomous agents has led to the development of increasingly sophisticated communicative agents over the past few years, bringing new perspective to machine translation research. The engineering of language-based smooth, expressive, natural-looking human gestures can give us useful insights into the design principles that have evolved in natural communication between people. In this paper we prototype a machine translation system from English to American Sign Language (ASL), taking into account not only linguistic but also visual and spatial information associated with ASL signs.
IEEE Computer Graphics and Applications | 2000
Tsukasa Noma; Liwei Zhao; Norman I. Badler
We have created a virtual human presenter who accepts speech texts with embedded commands as inputs. The presenter acts in real-time 3D animation synchronized with speech. The system was developed on the Jack animated-agent system. Jack provides a 3D graphical environment for controlling articulated figures, including detailed human models.
Graphical Models | 2005
Liwei Zhao; Norman I. Badler
This paper presents a neural computing model that can automatically extract motion qualities from live performance. The motion qualities are expressed in terms of Laban Movement Analysis (LMA) Effort factors. The model takes as input both 3D motion-capture data and 2D video projections; its output is a classification of the motion qualities detected in the input. The neural nets are trained with professional LMA notators to ensure valid analysis and have achieved an accuracy of about 90% in motion quality recognition. Combining this system with the EMOTE motion synthesis system provides a capability for automating both the observation and analysis processes, to produce natural gestures for embodied communicative agents.
Proceedings of Computer Animation 2000 | 2000
Liwei Zhao; Monica Costa; Norman I. Badler
We describe a new paradigm in which a user can produce a wide range of expressive, natural-looking movements of animated characters by specifying their manners and attitudes with natural language verbs and adverbs. A natural language interpreter, a parameterized action representation (PAR), and an expressive motion engine (EMOTE) are designed to bridge the gap between natural language instructions issued by the user and expressive movements carried out by the animated characters. By allowing users to customize basic movements with natural language terms to support individualized expressions, our approach may eventually lead to the automatic generation of expressive movements from speech text, a storyboard script, or a behavioral simulation.
Pacific Conference on Computer Graphics and Applications | 1998
Liwei Zhao; Norman I. Badler
Gesture and speech are two very important behaviors for virtual humans. They are not isolated from each other but generally employed simultaneously in the service of the same intention. An underlying PaT-Net parallel finite state machine may be used to coordinate them both. Gesture selection is not arbitrary. Typical movements correlated with specific textual elements are used to select and produce gesticulation online. This enhances the expressiveness of speaking virtual humans.
Embodied conversational agents | 2001
Norman I. Badler; Rama Bindiganavale; Jan M. Allbeck; William Schuler; Liwei Zhao; Martha Palmer
Proceedings of Computer Animation 2002 (CA 2002) | 2002
Norman I. Badler; Jan M. Allbeck; Liwei Zhao; Meeran Byun
Archive | 2001
Liwei Zhao; Norman I. Badler
Computer Graphics International | 2000
Norman I. Badler; Monica Costa; Liwei Zhao; Diane M. Chi