Michael Kipp
Augsburg University of Applied Sciences
Publication
Featured research published by Michael Kipp.
ACM Transactions on Graphics | 2008
Michael Neff; Michael Kipp; Irene Albrecht; Hans-Peter Seidel
Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, this class of movement is very difficult to generate, even more so when a unique, individual movement style is required. We present a system that, with a focus on arm gestures, is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a person whose gesturing style we wish to animate. A tool-assisted annotation process is performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and input text tagged with theme, rheme and focus, our generation algorithm creates a gesture script. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with additional detail. It then generates either kinematic or physically simulated motion based on this description. The system is capable of generating gesture animations for novel text that are consistent with a given performer's style, as was successfully validated in an empirical user study.
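As a rough illustration of the generation step the abstract describes, here is a minimal Python sketch that samples gestures from a per-performer frequency profile keyed by theme/rheme/focus tags and emits a simple gesture script. The profile counts, tag names and script fields are invented for illustration; this is not the paper's actual model.

import random

# Hypothetical per-performer "profile": a frequency table mapping the
# information-structure tag of a word (theme / rheme / focus) to the gesture
# lexemes the performer tended to use there. The real system learns such
# statistics from annotated video; these counts are toy values.
profile = {
    "theme": {"beat": 7, "pointing": 1},
    "rheme": {"beat": 5, "pointing": 2, "cup": 3},
    "focus": {"pointing": 4, "cup": 2, "beat": 1},
}

def sample_gesture(tag, rng=random):
    """Sample a gesture lexeme proportionally to its observed frequency."""
    counts = profile.get(tag, {})
    if not counts:
        return None
    lexemes, weights = zip(*counts.items())
    return rng.choices(lexemes, weights=weights, k=1)[0]

def make_gesture_script(tagged_text):
    """Turn (word, tag) pairs into a simple gesture script, keeping the word
    index so an animation layer could align each gesture with speech."""
    script = []
    for i, (word, tag) in enumerate(tagged_text):
        lexeme = sample_gesture(tag)
        if lexeme is not None:
            script.append({"word_index": i, "word": word, "lexeme": lexeme})
    return script

if __name__ == "__main__":
    utterance = [("the", "theme"), ("answer", "rheme"), ("is", "rheme"), ("here", "focus")]
    for entry in make_gesture_script(utterance):
        print(entry)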
intelligent virtual agents | 2007
Michael Kipp; Michael Neff; Kerstin H. Kipp; Irene Albrecht
Virtual humans still lack naturalness in their nonverbal behaviour. We present a data-driven solution that moves towards a more natural synthesis of hand and arm gestures by recreating gestural behaviour in the style of a human performer. Our algorithm exploits the concept of gesture units to make the produced gestures a continuous flow of movement. We empirically validate the use of gesture units in the generation and show that it causes the virtual human to be perceived as more natural.
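A minimal sketch of the gesture-unit idea, assuming a simple pause threshold: consecutive gestures are merged into one unit so the arms only retract to rest when the gap between gestures is long. The threshold value and data fields are assumptions, not the paper's algorithm.

# Illustrative grouping of gestures into "gesture units" for continuous flow.
MAX_GAP = 0.8  # seconds; assumed threshold, not taken from the paper

def group_into_units(gestures):
    """gestures: list of dicts with 'start' and 'end' times, sorted by start."""
    units, current = [], []
    for g in gestures:
        if current and g["start"] - current[-1]["end"] > MAX_GAP:
            units.append(current)   # long pause: close the unit, retract arms
            current = []
        current.append(g)
    if current:
        units.append(current)
    return units

gestures = [
    {"lexeme": "beat", "start": 0.0, "end": 0.6},
    {"lexeme": "cup", "start": 0.9, "end": 1.5},       # short gap: same unit
    {"lexeme": "pointing", "start": 3.2, "end": 3.9},  # long gap: new unit
]
print([[g["lexeme"] for g in u] for u in group_into_units(gestures)])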
language resources and evaluation | 2007
Michael Kipp; Michael Neff; Irene Albrecht
The empirical investigation of human gesture stands at the center of multiple research disciplines, and various gesture annotation schemes exist, with varying degrees of precision and required annotation effort. We present a gesture annotation scheme for the specific purpose of automatically generating and animating character-specific hand/arm gestures, but with potential general value. We focus on how to capture temporal structure and locational information with relatively little annotation effort. The scheme is evaluated in terms of how accurately it captures the original gestures by re-creating those gestures on an animated character using the annotated data. This paper presents our scheme in detail and compares it to other approaches.
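To make the kind of information the scheme captures concrete, here is a hypothetical Python encoding of one annotated gesture with its temporal phase structure and a coarse hand location. The field names and labels are illustrative, not the scheme's actual tags.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Phase:
    kind: str          # e.g. "preparation", "stroke", "hold", "retraction"
    start: float       # seconds in the source video
    end: float

@dataclass
class GestureAnnotation:
    lexeme: str                     # gesture category, e.g. "cup"
    handedness: str                 # "LH", "RH" or "2H"
    location: Optional[str] = None  # coarse hand position, e.g. "upper-right"
    phases: List[Phase] = field(default_factory=list)

    def stroke(self) -> Optional[Phase]:
        """Return the stroke phase, which an animation step would align with speech."""
        return next((p for p in self.phases if p.kind == "stroke"), None)

g = GestureAnnotation(
    lexeme="cup", handedness="RH", location="center-center",
    phases=[Phase("preparation", 1.2, 1.5), Phase("stroke", 1.5, 1.9), Phase("retraction", 1.9, 2.3)],
)
print(g.stroke())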
affective computing and intelligent interaction | 2009
Michael Kipp; Jean-Claude Martin
The question of how exactly gesture and emotion are interrelated is still sparsely covered in research, yet highly relevant for building affective artificial agents. In our study, we investigate how basic gestural form features (handedness, hand shape, palm orientation and motion direction) are related to components of emotion. We argue that material produced by actors in filmed theater stagings is particularly well suited for such analyses. Our results indicate that there may be a universal association of gesture handedness with the emotional dimensions of pleasure and arousal. We discuss this and more specific findings, and conclude with possible implications and applications of our study.
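A toy sketch of the style of analysis described, comparing mean pleasure and arousal across handedness categories; all data values below are invented for illustration and do not reproduce the study's results.

from collections import defaultdict
from statistics import mean

# Each record pairs a gestural form feature (handedness) with the emotion
# dimensions annotated for the scene in which the gesture occurs.
records = [
    {"handedness": "LH", "pleasure": -0.4, "arousal": 0.7},
    {"handedness": "RH", "pleasure": 0.5, "arousal": 0.3},
    {"handedness": "2H", "pleasure": 0.1, "arousal": 0.8},
    {"handedness": "RH", "pleasure": 0.6, "arousal": 0.2},
    {"handedness": "LH", "pleasure": -0.2, "arousal": 0.9},
]

by_hand = defaultdict(list)
for r in records:
    by_hand[r["handedness"]].append(r)

for hand, rs in by_hand.items():
    print(hand,
          "mean pleasure =", round(mean(r["pleasure"] for r in rs), 2),
          "mean arousal =", round(mean(r["arousal"] for r in rs), 2))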
international conference on spoken language processing | 1996
Norbert Reithinger; Ralf Engel; Michael Kipp; Martin Klesen
Presents the application of statistical language modeling methods for the prediction of the next dialogue act. This prediction is used by different modules of the speech-to-speech translation system VERBMOBIL. The statistical approach uses deleted interpolation of n-gram frequencies as its basis and determines the interpolation weights by a modified version of the standard optimization algorithm. Additionally, we present and evaluate different approaches to improve the prediction process, e.g. including knowledge from a dialogue grammar. Evaluation shows that including the speaker information and mirroring the data delivers the best results.
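A minimal Python sketch of dialogue-act prediction with deleted interpolation of n-gram frequencies. The interpolation weights are fixed here for brevity, whereas the paper optimizes them and additionally uses speaker information and data mirroring; the dialogue-act labels are invented examples.

from collections import Counter

LAMBDAS = (0.1, 0.3, 0.6)  # unigram, bigram, trigram weights, summing to 1

def train(dialogues):
    uni, bi, tri = Counter(), Counter(), Counter()
    for acts in dialogues:
        padded = ["<s>", "<s>"] + acts
        for i in range(2, len(padded)):
            uni[padded[i]] += 1
            bi[(padded[i-1], padded[i])] += 1
            tri[(padded[i-2], padded[i-1], padded[i])] += 1
    return uni, bi, tri

def predict(history, model, vocab):
    """Predict the next dialogue act given the acts seen so far."""
    uni, bi, tri = model
    h1 = history[-1]
    h2 = history[-2] if len(history) > 1 else "<s>"
    total = sum(uni.values())

    def score(act):
        p1 = uni[act] / total if total else 0.0
        b = sum(v for (a, _), v in bi.items() if a == h1)
        p2 = bi[(h1, act)] / b if b else 0.0
        t = sum(v for (a, b_, _), v in tri.items() if (a, b_) == (h2, h1))
        p3 = tri[(h2, h1, act)] / t if t else 0.0
        l1, l2, l3 = LAMBDAS
        return l1 * p1 + l2 * p2 + l3 * p3

    return max(vocab, key=score)

dialogues = [["GREET", "GREET", "SUGGEST", "ACCEPT", "BYE"],
             ["GREET", "SUGGEST", "REJECT", "SUGGEST", "ACCEPT", "BYE"]]
model = train(dialogues)
vocab = set(a for d in dialogues for a in d)
print(predict(["GREET", "SUGGEST"], model, vocab))  # "ACCEPT" with these toy counts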
affective computing and intelligent interaction | 2009
Alexis Heloir; Michael Kipp
Embodied agents can be powerful interface devices and versatile research tools for the study of emotion, gesture, facial expression etc. However, they require high effort and expertise for their creation, assembly and animation control. Therefore, open animation engines and high-level control languages are required to make embodied agents accessible to researchers and developers. In this demo paper, we present such an engine called EMBR (Embodied Agents Behavior Realizer) and its control language EMBRScript. EMBR produces real-time multimodal animations specified in the EMBRScript language. EMBR is robust and reactive enough to cope with interruptive events from the user. Finally, EMBR and all its components, from asset creation to rendering, will be freely available.
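To illustrate how a client might drive such a realizer, here is a hypothetical sketch that assembles an animation request as timed keys and sends it to a running realizer process over TCP. The field names, the wire format and the port are invented for illustration; they do not reproduce the actual EMBRScript syntax or EMBR's interface.

import json
import socket

# Hypothetical animation request: timed keys on named channels.
request = {
    "agent": "Amber",
    "keys": [
        {"time": 0.0, "channel": "gaze", "target": "user"},
        {"time": 0.4, "channel": "right_arm", "pose": "raised_open_palm"},
        {"time": 1.2, "channel": "face", "expression": "smile", "intensity": 0.6},
    ],
}

def send_to_realizer(spec, host="localhost", port=5555):
    """Serialize the spec and ship it to the (assumed) realizer endpoint."""
    payload = json.dumps(spec).encode("utf-8")
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.sendall(payload + b"\n")

if __name__ == "__main__":
    try:
        send_to_realizer(request)
    except OSError as e:
        print("no realizer running:", e)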
intelligent virtual agents | 2011
Michael Kipp; Alexis Heloir; Quan Nguyen
Many deaf people have significant reading problems. Written content, e.g. on internet pages, is therefore not fully accessible for them. Embodied agents have the potential to communicate in the native language of this cultural group: sign language. However, state-of-the-art systems have limited comprehensibility and standard evaluation methods are missing. In this paper, we present methods and discuss challenges for the creation and evaluation of a signing avatar. We extended the existing EMBR character animation system with prerequisite functionality, created a gloss-based animation tool and developed a cyclic content creation workflow with the help of two deaf sign language experts. For evaluation, we introduce delta testing, a novel way of assessing comprehensibility by comparing avatars with human signers. While our system reached state-of-the-art comprehensibility in a short development time, we argue that future research needs to focus on nonmanual aspects and prosody to reach the comprehensibility levels of human signers.
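A small sketch of the delta-testing idea as described: compare per-item comprehension scores obtained with the avatar against the same items signed by a human and report the gap. The scores below are invented example data, not results from the paper.

from statistics import mean

# Comprehension scores on a 0..1 scale for matched test items (toy values).
human_scores  = {"item1": 0.95, "item2": 0.90, "item3": 0.85}
avatar_scores = {"item1": 0.70, "item2": 0.65, "item3": 0.80}

deltas = {item: human_scores[item] - avatar_scores[item] for item in human_scores}
for item, d in deltas.items():
    print(f"{item}: delta = {d:+.2f}")
print(f"mean comprehensibility gap: {mean(deltas.values()):.2f}")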
intelligent virtual agents | 2009
Alexis Heloir; Michael Kipp
Embodied agents can be powerful interface devices and versatile research tools for the study of emotion, gesture, facial expression etc. However, they require high effort and expertise for their creation, assembly and animation control. Therefore, open animation engines and high-level control languages are required to make embodied agents accessible to researchers and developers. In this demo paper, we present such an engine called EMBR (Embodied Agents Behavior Realizer) and its control language EMBRScript. EMBR produces real-time multimodal animations specified in the EMBRScript language. EMBR is robust and reactive enough to cope with interruptive events from the user. Finally, EMBR and all its components, from asset creation to rendering, will be freely available.
intelligent virtual agents | 2008
Michael Kipp; Patrick Gebhard
We present IGaze, a semi-immersive human-avatar interaction system. Using head tracking and an illusionistic 3D effect we let users interact with a talking avatar in an application interview scenario. The avatar features reactive gaze behavior that adapts to the user position according to exchangeable gaze strategies. In user studies we showed that two gaze strategies successfully convey the intended impression of dominance/submission and that the 3D effect was positively received. We argue that IGaze is a suitable setup for exploring reactive nonverbal behavior synthesis in human-avatar interactions.
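A rough sketch of what exchangeable, reactive gaze strategies might look like, driven by a tracked user head position: a "dominant" strategy keeps looking at the user, while a "submissive" strategy averts gaze after a short mutual-gaze period. Strategy names, thresholds and coordinates are assumptions for illustration, not the IGaze implementation.

def dominant_gaze(user_pos, mutual_gaze_time):
    # Keep eye contact regardless of how long mutual gaze has lasted.
    return {"target": user_pos, "label": "hold eye contact"}

def submissive_gaze(user_pos, mutual_gaze_time, max_mutual=1.5):
    if mutual_gaze_time < max_mutual:
        return {"target": user_pos, "label": "brief eye contact"}
    # Look slightly below the user's face once mutual gaze has lasted too long.
    averted = (user_pos[0], user_pos[1] - 0.3, user_pos[2])
    return {"target": averted, "label": "avert gaze"}

strategies = {"dominant": dominant_gaze, "submissive": submissive_gaze}

user_head = (0.1, 1.6, 1.2)  # metres, as reported by a head tracker (example values)
for name, strategy in strategies.items():
    print(name, strategy(user_head, mutual_gaze_time=2.0))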
intelligent virtual agents | 2008
Patrick Gebhard; Marc Schröder; Marcela Charfuelan; Christoph Endres; Michael Kipp; Sathish Pammi; Martin Rumpler; Oytun Türk
In this paper we present two virtual characters in an interactive poker game using RFID-tagged poker cards for the interaction. To support the game creation process, we have combined models, methods, and technology that are currently investigated in the ECA research field in a unique way. A powerful and easy-to-use multimodal dialog authoring tool is used for the modeling of game content and interaction. The poker characters rely on a sophisticated model of affect and a state-of-the-art speech synthesizer. During the game, the characters show a consistent expressive behavior that reflects the individually simulated affect in speech and animations. As a result, users are provided with an engaging interactive poker experience.
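A toy sketch of the event flow described, from RFID tag reads to a simple affect update that could bias a character's expressive behavior. Tag IDs, card values, the round rule and the affect update are all invented for illustration and do not reflect the paper's actual affect model.

# Mapping from (assumed) RFID tag IDs to poker cards.
CARD_BY_TAG = {"04a1": ("A", "spades"), "04b2": ("K", "hearts"), "04c3": ("7", "clubs")}

class AffectState:
    def __init__(self):
        self.valence = 0.0   # -1 (displeased) .. +1 (pleased)

    def on_outcome(self, won_round: bool):
        # Toy update rule: nudge valence towards pleased/displeased, clamped.
        self.valence += 0.3 if won_round else -0.3
        self.valence = max(-1.0, min(1.0, self.valence))

def handle_tag_read(tag_id, table_cards, affect):
    card = CARD_BY_TAG.get(tag_id)
    if card is None:
        return
    table_cards.append(card)
    if len(table_cards) == 2:                  # toy rule: a round ends after two cards
        won = any(rank == "A" for rank, _ in table_cards)
        affect.on_outcome(won)
        table_cards.clear()

affect, table = AffectState(), []
for tag in ["04b2", "04a1", "04c3"]:
    handle_tag_read(tag, table, affect)
print("character valence:", affect.valence)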