
Publications


Featured research published by Irene Albrecht.


Symposium on Computer Animation | 2003

Construction and animation of anatomically based human hand models

Irene Albrecht; Jörg Haber; Hans-Peter Seidel

The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.
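The radial basis warping step can be illustrated with a small sketch (not the paper's implementation): given corresponding feature points on the reference hand and on the photographed hand, solve for RBF weights that carry each source feature exactly onto its target, then evaluate the interpolant at every mesh vertex. The kernel choice and the toy data below are assumptions for illustration.

```python
import numpy as np

def rbf_warp(src_feats, dst_feats, points):
    """Warp `points` with a radial basis interpolant that maps each source
    feature point exactly onto its target. Uses the kernel phi(r) = r
    (an assumption; the paper's choice of basis function may differ)."""
    kernel = lambda r: r
    # Pairwise distances between source feature points -> interpolation matrix.
    A = kernel(np.linalg.norm(src_feats[:, None] - src_feats[None, :], axis=-1))
    # Solve for per-feature displacement weights (one weight vector per axis).
    w = np.linalg.solve(A, dst_feats - src_feats)
    # Evaluate the interpolant at every query point.
    D = kernel(np.linalg.norm(points[:, None] - src_feats[None, :], axis=-1))
    return points + D @ w

# Toy 2D example: four feature points of a "reference hand" and a target
# hand that is uniformly 20% larger.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = 1.2 * src
warped = rbf_warp(src, dst, src)  # feature points land exactly on targets
```

Because the interpolant passes through every correspondence, applying it to the complete reference mesh deforms the whole hand while pinning the annotated features.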


ACM Transactions on Graphics | 2008

Gesture modeling and animation based on a probabilistic re-creation of speaker style

Michael Neff; Michael Kipp; Irene Albrecht; Hans-Peter Seidel

Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, this class of movement is very difficult to generate, even more so when a unique, individual movement style is required. We present a system that, with a focus on arm gestures, is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a person whose gesturing style we wish to animate. A tool-assisted annotation process is performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and input text tagged with theme, rheme and focus, our generation algorithm creates a gesture script. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with additional detail. It then generates either kinematic or physically simulated motion based on this description. The system is capable of generating gesture animations for novel text that are consistent with a given performer's style, as was successfully validated in an empirical user study.
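The statistical style model can be sketched as a toy Markov chain over gesture lexemes. This is an assumption for illustration only: the paper's model also conditions on theme/rheme/focus tags and on gesture phases, and the lexeme names and probabilities here are invented.

```python
import random

# Hypothetical per-performer bigram statistics P(next gesture | previous),
# as might be estimated from the annotated video.
style_model = {
    "rest":  {"beat": 0.6, "point": 0.3, "rest": 0.1},
    "beat":  {"beat": 0.5, "point": 0.2, "rest": 0.3},
    "point": {"beat": 0.4, "rest": 0.6},
}

def generate_gesture_script(n_gestures, seed=0):
    """Sample a continuous stream of gestures, each conditioned on its
    predecessor, rather than isolated singleton gestures."""
    rng = random.Random(seed)
    prev, script = "rest", []
    for _ in range(n_gestures):
        options = style_model[prev]
        prev = rng.choices(list(options), weights=list(options.values()))[0]
        script.append(prev)
    return script

script = generate_gesture_script(8)
```

Conditioning each gesture on its predecessor is what yields a flowing stream rather than disconnected one-off gestures.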


Virtual Reality | 2005

Mixed feelings: expression of non-basic emotions in a muscle-based talking head

Irene Albrecht; Marc Schröder; Jörg Haber; Hans-Peter Seidel

We present an algorithm for generating facial expressions for a continuum of pure and mixed emotions of varying intensity. Based on the observation that in natural interaction among humans, shades of emotion are much more frequently encountered than expressions of basic emotions, a method to generate more than Ekman’s six basic emotions (joy, anger, fear, sadness, disgust and surprise) is required. To this end, we have adapted the algorithm proposed by Tsapatsoulis et al. [1] to be applicable to a physics-based facial animation system and a single, integrated emotion model. A physics-based facial animation system was combined with an equally flexible and expressive text-to-speech synthesis system, based upon the same emotion model, to form a talking head capable of expressing non-basic emotions of varying intensities. With a variety of life-like intermediate facial expressions captured as snapshots from the system we demonstrate the appropriateness of our approach.
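As a rough illustration (not the paper's algorithm, which adapts the scheme of Tsapatsoulis et al. in an activation-evaluation space), a mixed emotion of moderate intensity can be sketched as a clipped, intensity-weighted blend of basic-emotion muscle targets; the muscle values below are invented:

```python
import numpy as np

# Hypothetical muscle-contraction targets for two of Ekman's basic emotions
# (one value per facial muscle, 0 = relaxed, 1 = fully contracted).
basic_targets = {
    "joy":      np.array([0.8, 0.1, 0.6]),
    "surprise": np.array([0.2, 0.0, 0.9]),
}

def mixed_expression(intensities):
    """Blend basic-emotion muscle targets by intensity weights in [0, 1]."""
    blend = sum(w * basic_targets[e] for e, w in intensities.items())
    return np.clip(blend, 0.0, 1.0)

# A mildly surprised, happy expression: 50% joy mixed with 30% surprise.
params = mixed_expression({"joy": 0.5, "surprise": 0.3})
```

The resulting contraction values would then drive the physics-based facial animation system directly, so shades of emotion come for free from the same muscle model as the pure expressions.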


Intelligent Virtual Agents | 2007

Towards Natural Gesture Synthesis: Evaluating Gesture Units in a Data-Driven Approach to Gesture Synthesis

Michael Kipp; Michael Neff; Kerstin H. Kipp; Irene Albrecht

Virtual humans still lack naturalness in their nonverbal behaviour. We present a data-driven solution that moves towards a more natural synthesis of hand and arm gestures by recreating gestural behaviour in the style of a human performer. Our algorithm exploits the concept of gesture units to make the produced gestures a continuous flow of movement. We empirically validated the use of gesture units in the generation and show that it causes the virtual human to be perceived as more natural.


Language Resources and Evaluation | 2007

An annotation scheme for conversational gestures: how to economically capture timing and form

Michael Kipp; Michael Neff; Irene Albrecht

The empirical investigation of human gesture stands at the center of multiple research disciplines, and various gesture annotation schemes exist, with varying degrees of precision and required annotation effort. We present a gesture annotation scheme for the specific purpose of automatically generating and animating character-specific hand/arm gestures, but with potential general value. We focus on how to capture temporal structure and locational information with relatively little annotation effort. The scheme is evaluated in terms of how accurately it captures the original gestures by re-creating those gestures on an animated character using the annotated data. This paper presents our scheme in detail and compares it to other approaches.


International Conference on Computer Graphics and Interactive Techniques | 2004

Pitching a baseball: tracking high-speed motion with multi-exposure images

Christian Theobalt; Irene Albrecht; Jörg Haber; Marcus A. Magnor; Hans-Peter Seidel

Athletes and coaches in most professional sports make use of high-tech equipment to analyze and, subsequently, improve the athlete's performance. High-speed video cameras are employed, for instance, to record the swing of a golf club or a tennis racket, the movement of the feet while running, and the body motion in apparatus gymnastics. High-tech and high-speed equipment, however, usually implies high cost as well. In this paper, we present a passive optical approach to capture high-speed motion using multi-exposure images obtained with low-cost commodity still cameras and a stroboscope. The recorded motion remains completely undisturbed by the motion capture process. We apply our approach to capture the motion of hand and ball for a variety of baseball pitches and present algorithms to automatically track the position, velocity, rotation axis, and spin of the ball along its trajectory. To demonstrate the validity of our setup and algorithms, we analyze the consistency of our measurements with a physically based model that predicts the trajectory of a spinning baseball. Our approach can be applied to capture a wide variety of other high-speed objects and activities such as golfing, bowling, or tennis for visualization as well as analysis purposes.
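The physically based consistency check can be sketched as a simple simulation of a spinning baseball under gravity, quadratic drag, and the Magnus force. All coefficients below are textbook approximations chosen for illustration, not values fitted in the paper:

```python
import numpy as np

def simulate_pitch(p0, v0, spin_axis, dt=0.001, t_end=0.45):
    """Euler-integrate a baseball trajectory. With a constant lift
    coefficient only the spin axis direction matters here; a fuller model
    would scale the lift with the measured spin rate."""
    m, r = 0.145, 0.0366                 # ball mass (kg) and radius (m)
    rho, area = 1.225, np.pi * r ** 2    # air density, cross-section
    cd, cl = 0.35, 0.20                  # assumed drag / lift coefficients
    g = np.array([0.0, 0.0, -9.81])
    s = np.asarray(spin_axis, float)
    s /= np.linalg.norm(s)               # unit spin axis
    p, v = np.asarray(p0, float), np.asarray(v0, float)
    traj = [p.copy()]
    for _ in range(round(t_end / dt)):
        speed = np.linalg.norm(v)
        drag = -0.5 * rho * cd * area * speed * v / m
        magnus = 0.5 * rho * cl * area * speed * np.cross(s, v) / m
        v = v + (g + drag + magnus) * dt
        p = p + v * dt
        traj.append(p.copy())
    return np.array(traj)

# A ~40 m/s fastball thrown toward +x with backspin (spin axis -y),
# released 1.8 m above the ground.
traj = simulate_pitch([0.0, 0.0, 1.8], [40.0, 0.0, 0.0], spin_axis=[0.0, -1.0, 0.0])
```

With backspin the Magnus force points upward, so the simulated ball drops noticeably less than gravity alone would predict; comparing such predicted trajectories against the multi-exposure measurements is the essence of the validation.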


Computer Graphics International | 2002

Automatic Generation of Non-Verbal Facial Expressions from Speech

Irene Albrecht; Jörg Haber; Hans-Peter Seidel

Speech-synchronized facial animation that controls only the movement of the mouth is typically perceived as wooden and unnatural. We propose a method to generate additional facial expressions such as movement of the head, the eyes, and the eyebrows fully automatically from the input speech signal. This is achieved by extracting prosodic parameters such as pitch flow and power spectrum from the speech signal and using them to control facial animation parameters in accordance with results from paralinguistic research.


Pacific Conference on Computer Graphics and Applications | 2002

May I talk to you? :-) - facial animation from text

Irene Albrecht; Jörg Haber; Kolja Kähler; Marc Schröder; Hans-Peter Seidel

We introduce a facial animation system that produces real-time animation sequences including speech synchronization and non-verbal speech-related facial expressions from plain text input. A state-of-the-art text-to-speech synthesis component performs linguistic analysis of the text input and creates a speech signal from phonetic and intonation information. The phonetic transcription is additionally used to drive a speech synchronization method for the physically based facial animation. Further high-level information from the linguistic analysis such as different types of accents or pauses as well as the type of the sentence is used to generate non-verbal speech-related facial expressions such as movement of head, eyes, and eyebrows or voluntary eye blinks. Moreover, emotions are translated into XML markup that triggers emotional facial expressions.


Computer Graphics Forum | 2007

Layered performance animation with correlation maps

Michael Neff; Irene Albrecht; Hans-Peter Seidel

Performance has a spontaneity and “aliveness” that can be difficult to capture in more methodical animation processes such as keyframing. Access to performance animation has traditionally been limited to either low degree of freedom characters or required expensive hardware. We present a performance‐based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high degree of freedom model with low degree of freedom input through the use of correlation maps which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded on a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and a standing character that gestures and dances.
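A correlation map can be sketched as a small matrix that routes the two mouse axes onto several expressively correlated character parameters at once; the joint names and coupling values below are invented for illustration:

```python
import numpy as np

# Hypothetical map: columns are mouse axes (x, y), rows are character
# parameters that the map couples together expressively.
arm_map = np.array([
    [0.9, 0.1],   # shoulder swing  <- mostly horizontal mouse motion
    [0.4, 0.8],   # elbow bend      <- both axes
    [0.0, 1.0],   # wrist tilt      <- vertical mouse motion
])

def apply_map(cmap, mouse_xy, pose):
    """Offset the current pose by mouse input routed through the map,
    turning 2 input DOFs into coordinated motion of many parameters."""
    return pose + cmap @ np.asarray(mouse_xy, float)

pose = apply_map(arm_map, [0.5, -0.2], np.zeros(3))
```

Rapidly switching between several such maps while recording is what lets the performer vary which parameter group the mouse currently drives.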


International Conference on Computer Graphics and Interactive Techniques | 2005

Creating face models from vague mental images

Irene Albrecht; Volker Blanz; Jörg Haber; Hans-Peter Seidel

We present a novel approach to create plausible 3D face models from vague recollections or incomplete descriptions, such as those given by eyewitnesses in police investigations. Our algorithm for navigating face space is based on a 3D morphable model. It exploits correlation between different facial features learned from a database, and uses a set of attribute constraints that restrict the face to a residual subspace. Faces are manipulated by intuitive parameters or by importing facial elements from a database. To avoid exposure to confusingly different faces, each face in the database is mapped to the residual subspace defined by the constraints.

Collaboration


Dive into Irene Albrecht's collaborations.

Top Co-Authors

Michael Neff

University of California


Michael Kipp

Augsburg University of Applied Sciences


Marcus A. Magnor

Braunschweig University of Technology
