
Publication


Featured research published by Michael Neff.


ACM Transactions on Graphics | 2008

Gesture modeling and animation based on a probabilistic re-creation of speaker style

Michael Neff; Michael Kipp; Irene Albrecht; Hans-Peter Seidel

Animated characters that move and gesticulate appropriately with spoken text are useful in a wide range of applications. Unfortunately, this class of movement is very difficult to generate, even more so when a unique, individual movement style is required. We present a system that, with a focus on arm gestures, is capable of producing full-body gesture animation for given input text in the style of a particular performer. Our process starts with video of a person whose gesturing style we wish to animate. A tool-assisted annotation process is performed on the video, from which a statistical model of the person's particular gesturing style is built. Using this model and input text tagged with theme, rheme and focus, our generation algorithm creates a gesture script. As opposed to isolated singleton gestures, our gesture script specifies a stream of continuous gestures coordinated with speech. This script is passed to an animation system, which enhances the gesture description with additional detail. It then generates either kinematic or physically simulated motion based on this description. The system is capable of generating gesture animations for novel text that are consistent with a given performer's style, as was successfully validated in an empirical user study.
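The core generation step described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the profile contents, tag names, and gesture lexemes are invented; the real system builds its statistical model from annotated video and coordinates gestures with speech timing.

```python
import random

# Hypothetical sketch: a speaker-specific statistical profile maps discourse
# tags (theme/rheme) to gesture lexemes with observed relative frequencies;
# generation samples from it to produce a continuous gesture script.
STYLE_PROFILE = {
    "rheme": [("beat", 0.5), ("metaphoric_cup", 0.3), ("point", 0.2)],
    "theme": [("rest", 0.7), ("beat", 0.3)],
}

def sample_gesture(tag, rng):
    lexemes, weights = zip(*STYLE_PROFILE[tag])
    return rng.choices(lexemes, weights=weights, k=1)[0]

def make_gesture_script(tagged_words, seed=0):
    """Turn (word, discourse-tag) pairs into a (word, gesture) script."""
    rng = random.Random(seed)  # seeded for reproducible output
    return [(word, sample_gesture(tag, rng)) for word, tag in tagged_words]

script = make_gesture_script([("well", "theme"), ("huge", "rheme")])
```

In the actual system the sampled script is then passed to an animation layer that fills in timing, trajectory, and kinematic or physically simulated detail.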


Intelligent Virtual Agents | 2007

Towards Natural Gesture Synthesis: Evaluating Gesture Units in a Data-Driven Approach to Gesture Synthesis

Michael Kipp; Michael Neff; Kerstin H. Kipp; Irene Albrecht

Virtual humans still lack naturalness in their nonverbal behaviour. We present a data-driven solution that moves towards a more natural synthesis of hand and arm gestures by recreating gestural behaviour in the style of a human performer. Our algorithm exploits the concept of gesture units to make the produced gestures a continuous flow of movement. We empirically validated the use of gesture units in the generation process and showed that it causes the virtual human to be perceived as more natural.


Language Resources and Evaluation | 2007

An annotation scheme for conversational gestures: how to economically capture timing and form

Michael Kipp; Michael Neff; Irene Albrecht

The empirical investigation of human gesture stands at the center of multiple research disciplines, and various gesture annotation schemes exist, with varying degrees of precision and required annotation effort. We present a gesture annotation scheme for the specific purpose of automatically generating and animating character-specific hand/arm gestures, but with potential general value. We focus on how to capture temporal structure and locational information with relatively little annotation effort. The scheme is evaluated in terms of how accurately it captures the original gestures by re-creating those gestures on an animated character using the annotated data. This paper presents our scheme in detail and compares it to other approaches.


Symposium on Computer Animation | 2005

AER: aesthetic exploration and refinement for expressive character animation

Michael Neff; Eugene Fiume

Our progress in the problem of making animated characters move expressively has been slow, and it persists in being among the most challenging in computer graphics. Simply attending to the low-level motion control problem, particularly for physically based models, is very difficult. Providing an animator with the tools to imbue character motion with broad expressive qualities is even more ambitious, but it is clear it is a goal to which we must aspire. Part of the problem is simply finding the right language in which to express qualities of motion. Another important issue is that expressive animation often involves many disparate parts of the body, which thwarts bottom-up controller synthesis. We demonstrate progress in this direction through the specification of directed, expressive animation over a limited range of standing movements. A key contribution is that through the use of high-level concepts such as character sketches, actions and properties, which impose different modalities of character behaviour, we are able to create many different animated interpretations of the same script. These tools support both rapid exploration of the aesthetic space and detailed refinement. Basic character actions and properties are distilled from an extensive search in the performing arts literature. We demonstrate how all high-level constructions for expressive animation can be given a precise semantics that translate into a low-level motion specification that is then simulated either physically or kinematically. Our language and system can act as a bridge across artistic and technical communities to resolve ambiguities regarding the language of motion. We demonstrate our results through an implementation and various examples.


Symposium on Computer Animation | 2004

Methods for exploring expressive stance

Michael Neff; Eugene Fiume

The postures a character adopts over time are a key expressive aspect of her movement. While IK tools help a character achieve positioning constraints, there are few tools that help an animator with the expressive aspects of a character's poses. Three aspects are combined in good pose design: achieving a set of world space constraints, finding a body shape that reflects the character's inner state and personality, and making adjustments to balance that act to strengthen the pose and also maintain realism. This is routinely done in the performing arts, but is uncommon in computer graphics. Our system combines all three components within a single body shape solver. The system combines feedback based balance control with a hybrid IK system that utilizes optimization and analytic IK components. The IK system has been carefully designed to allow direct control over various aesthetically important aspects of body shape, such as the type of curve in the spine and the relationship between the collar bones. The system allows for both low-level control and for higher level shape sets to be defined and used. Shape sets allow an animator to use a single scalar to vary a character's pose within a specified shape class, providing an intuitive parameterization of a posture. Changing shape sets allows an animator to quickly experiment with different posture options for a movement sequence, supporting rapid exploration of the aesthetic space.
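The shape-set idea above — one scalar varying a pose within a shape class — can be sketched as interpolation between two extreme poses. This is a hypothetical simplification with invented joint names and values; the paper's solver additionally enforces balance and world-space constraints.

```python
# Hypothetical sketch of a "shape set": two extreme poses of one shape class,
# with a single scalar s blending between them to give the animator an
# intuitive one-parameter posture control.
def shape_set_pose(pose_a, pose_b, s):
    """s in [0, 1] interpolates each joint value between the set's extremes."""
    return {j: (1 - s) * pose_a[j] + s * pose_b[j] for j in pose_a}

slumped = {"spine_curve": 0.8, "collar_drop": 0.3}
upright = {"spine_curve": 0.0, "collar_drop": 0.0}
mid_pose = shape_set_pose(slumped, upright, 0.5)
```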


Symposium on Computer Animation | 2003

Aesthetic edits for character animation

Michael Neff; Eugene Fiume

The utility of an interactive tool can be measured by how pervasively it is embedded into a user's workflow. Tools for artists additionally must provide an appropriate level of control over expressive aspects of their work while suppressing unwanted intrusions due to details that are, for the moment, unnecessary. Our focus is on tools that target editing the expressive aspects of character motion. These tools allow animators to work in a way that is more expedient than modifying low-level details, and offers finer control than high level, directorial approaches. To illustrate this approach, we present three such tools, one for varying timing (succession), and two for varying motion shape (amplitude and extent). Succession editing allows the animator to vary the activation times of the joints in the motion. Amplitude editing allows the animator to vary the joint ranges covered during a motion. Extent editing allows an animator to vary how fully a character occupies space during a movement -- using space freely or keeping the movement close to his body. We argue that such editing tools can be fully embedded in the workflow of character animators. We present a general animation system in which these and other edits can be defined programmatically. Working in a general pose or keyframe framework, either kinematic or dynamic motion can be generated. This system is extensible to include an arbitrary set of movement edits.
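The succession edit described above can be sketched minimally: stagger each joint's activation time along the body chain so movement flows outward rather than starting everywhere at once. Joint names and the delay value here are invented for illustration; the actual system defines such edits programmatically over kinematic or dynamic motion.

```python
# Hypothetical sketch of a succession edit: offset each joint's activation
# time by its position in the chain, producing an outward flow of movement.
def apply_succession(activation_times, joint_order, delay):
    """Return new start times, staggered by `delay` seconds per joint."""
    return {j: activation_times[j] + i * delay
            for i, j in enumerate(joint_order)}

times = {"pelvis": 0.0, "spine": 0.0, "shoulder": 0.0, "wrist": 0.0}
staggered = apply_succession(times, ["pelvis", "spine", "shoulder", "wrist"], 0.1)
```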


Symposium on Computer Animation | 2009

Interactive editing of motion style using drives and correlations

Michael Neff; Yejin Kim

Animation data, from motion capture or other sources, is becoming increasingly available and provides high quality motion, but is difficult to customize for the needs of a particular application. This is especially true when stylistic changes are needed, for example, to reflect a character's changing mood, differentiate one character from another or meet the precise desires of an animator. We introduce a system for editing animation data that is particularly well suited to making stylistic changes. Our approach transforms the joint angle representation of animation data into a set of pose parameters more suitable for editing. These motion drives include position data for the wrists, ankles and center of mass, as well as the rotation of the pelvis. We also extract correlations between drives and body movement, specifically between wrist position and the torso angles. The system solves for the pose at each frame based on the current values of these drives and correlations using an efficient set of inverse kinematics and balance algorithms. An animator can interactively edit the motion by performing linear operations on the motion drives or extracted correlations, or by layering additional correlations. We demonstrate the effectiveness of the approach with various examples of gesture and locomotion.
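A linear operation on a motion drive, as described above, can be illustrated in a few lines. This is a hypothetical example with invented values: one scalar drive curve (say, wrist height per frame) has its amplitude scaled about its mean, a typical stylistic exaggeration; the real system then solves each frame's full pose with IK and balance algorithms.

```python
# Hypothetical sketch of drive-based editing: a stylistic edit is a linear
# operation on a per-frame drive curve, here scaling amplitude about the mean.
def scale_drive_amplitude(curve, factor):
    """curve: list of scalar drive values per frame (e.g. wrist height)."""
    mean = sum(curve) / len(curve)
    return [mean + factor * (v - mean) for v in curve]

wrist_height = [0.9, 1.1, 1.4, 1.1, 0.9]       # metres, one value per frame
exaggerated = scale_drive_amplitude(wrist_height, 1.5)
```

Because the edit is linear about the mean, the overall level of the motion is preserved while its excursions grow or shrink.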


Intelligent Virtual Agents | 2011

Don't scratch! self-adaptors reflect emotional stability

Michael Neff; Nicholas Toothman; Robeson Bowmani; Jean E. Fox Tree; Marilyn A. Walker

A key goal in agent research is to be able to generate multimodal characters that can reflect a particular personality. The Big Five model of personality provides a framework for codifying personality variation. This paper reviews findings in the psychology literature to understand how the Big Five trait of emotional stability correlates with changes in verbal and nonverbal behavior. Agent behavior was modified based on these findings and a perceptual study was completed to determine if these changes lead to the controllable perception of emotional stability in virtual agents. The results reveal how language variation and the use of self-adaptors can be used to increase or decrease the perceived emotional stability of an agent. Self-adaptors are movements that often involve self-touch, such as scratching or bending one's fingers backwards in an unnatural brace. These results provide guidance on how agent designers can create particular characters, including indicating that for particular personality types, it is important to also produce typically non-communicative gestural behavior, such as the self-adaptors studied.


Computer Graphics Forum | 2015

State of the Art in Hand and Finger Modeling and Animation

Nkenge Wheatland; Yingying Wang; Huaguang Song; Michael Neff; Victor B. Zordan; Sophie Jörg

The human hand is a complex biological system able to perform numerous tasks with impressive accuracy and dexterity. Gestures furthermore play an important role in our daily interactions, and humans are particularly skilled at perceiving and interpreting detailed signals in communications. Creating believable hand motions for virtual characters is an important and challenging task. Many new methods have been proposed in the Computer Graphics community in recent years, and significant progress has been made towards creating convincing, detailed hand and finger motions. This state-of-the-art report presents a review of the research in the area of hand and finger modeling and animation. Starting with the biological structure of the hand and its implications for how the hand moves, we discuss current methods in motion capturing hands, data-driven and physics-based algorithms to synthesize their motions, and techniques to make the appearance of the hand model surface more realistic. We then focus on areas in which detailed hand motions are crucial such as manipulation and communication. Our report concludes by describing emerging trends and applications for virtual hand animation.


Computer Graphics Forum | 2007

Layered performance animation with correlation maps

Michael Neff; Irene Albrecht; Hans-Peter Seidel

Performance has a spontaneity and “aliveness” that can be difficult to capture in more methodical animation processes such as keyframing. Access to performance animation has traditionally been limited to either low degree of freedom characters or required expensive hardware. We present a performance-based animation system for humanoid characters that requires no special hardware, relying only on mouse and keyboard input. We deal with the problem of controlling such a high degree of freedom model with low degree of freedom input through the use of correlation maps which employ 2D mouse input to modify a set of expressively relevant character parameters. Control can be continuously varied by rapidly switching between these maps. We present flexible techniques for varying and combining these maps and a simple process for defining them. The tool is highly configurable, presenting suitable defaults for novices and supporting a high degree of customization and control for experts. Animation can be recorded on a single pass, or multiple layers can be used to increase detail. Results from a user study indicate that novices are able to produce reasonable animations within their first hour of using the system. We also show more complicated results for walking and a standing character that gestures and dances.
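The correlation-map idea above — low degree of freedom input driving a high degree of freedom character — can be sketched as a linear map from 2D mouse position to a set of character parameters. The parameter names and weights here are invented for illustration; the actual system supports richer maps and rapid switching between them.

```python
# Hypothetical sketch of a correlation map: 2D mouse input (x, y in [0, 1])
# linearly drives several expressively relevant character parameters at once.
def apply_correlation_map(mouse_xy, cmap):
    """cmap: parameter -> (weight_x, weight_y, offset)."""
    x, y = mouse_xy
    return {p: wx * x + wy * y + off for p, (wx, wy, off) in cmap.items()}

arm_map = {
    "shoulder_swing": (1.2, 0.0, -0.6),   # mouse x opens/closes the arm
    "elbow_bend":     (0.0, 1.5,  0.2),   # mouse y bends the elbow
}
pose_params = apply_correlation_map((0.5, 0.4), arm_map)
```

Each mouse sample updates every mapped parameter simultaneously, which is what lets a 2-DOF device steer a many-DOF pose in real time.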

Collaboration


Dive into Michael Neff's collaboration.

Top Co-Authors

Yingying Wang (University of California)

Jackson Tolins (University of California)

Kris Liu (University of California)

Yejin Kim (University of California)

Michael Kipp (Augsburg University of Applied Sciences)