Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nancy S. Pollard is active.

Publications


Featured research published by Nancy S. Pollard.


International Conference on Computer Graphics and Interactive Techniques | 2002

Interactive control of avatars animated with human motion data

Jehee Lee; Jinxiang Chai; Paul S. A. Reitsma; Jessica K. Hodgins; Nancy S. Pollard

Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.
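The transition-identification step the abstract describes can be sketched as follows. This is a toy illustration assuming a simple per-frame pose distance; the paper's transition test also considers velocities and blends motion over a window, and the function name and threshold are ours, not the paper's.

```python
import numpy as np

def find_transitions(poses, threshold):
    """Build motion-graph edges: add a transition (i, j) whenever frame j
    is close to the natural successor of frame i, so playback can branch
    from i to j without a visible pop.

    poses: (n_frames, n_dofs) array of joint angles.
    """
    n = len(poses)
    edges = [(i, i + 1) for i in range(n - 1)]  # consecutive frames always connect
    for i in range(n - 1):
        # Distance from the frame that naturally follows i to every frame.
        dist = np.linalg.norm(poses - poses[i + 1], axis=1)
        for j in range(n):
            if j != i + 1 and dist[j] < threshold:
                edges.append((i, j))
    return edges
```

Clustering the nodes of the resulting graph, as the paper does, is what makes search over it fast enough for interactive control.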


International Conference on Computer Graphics and Interactive Techniques | 2004

Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces

Alla Safonova; Jessica K. Hodgins; Nancy S. Pollard

Optimization is an appealing way to compute the motion of an animated character because it allows the user to specify the desired motion in a sparse, intuitive way. The difficulty of solving this problem for complex characters such as humans is due in part to the high dimensionality of the search space. The dimensionality is an artifact of the problem representation because most dynamic human behaviors are intrinsically low dimensional with, for example, legs and arms operating in a coordinated way. We describe a method that exploits this observation to create an optimization problem that is easier to solve. Our method utilizes an existing motion capture database to find a low-dimensional space that captures the properties of the desired behavior. We show that when the optimization problem is solved within this low-dimensional subspace, a sparse sketch can be used as an initial guess and full physics constraints can be enabled. We demonstrate the power of our approach with examples of forward, vertical, and turning jumps; with running and walking; and with several acrobatic flips.
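The low-dimensional, behavior-specific space can be illustrated with plain PCA over motion capture frames. This is a sketch under the assumption of a linear subspace; the paper's actual construction is behavior-specific, but the idea of optimizing over a few subspace coefficients instead of all joint angles is the same.

```python
import numpy as np

def behavior_subspace(motion_frames, k):
    """Fit a k-dimensional linear subspace to (n_frames, n_dofs) motion
    capture data via PCA; an optimizer can then search over k coefficients
    instead of the full joint space."""
    mean = motion_frames.mean(axis=0)
    centered = motion_frames - mean
    # Principal directions are the top-k right singular vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(pose, mean, basis):
    """Express a pose in the subspace and reconstruct it."""
    coeffs = basis @ (pose - mean)
    reconstruction = mean + basis.T @ coeffs
    return coeffs, reconstruction
```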


International Conference on Robotics and Automation | 2002

Adapting human motion for the control of a humanoid robot

Nancy S. Pollard; Jessica K. Hodgins; Marcia Riley; Christopher G. Atkeson

Pre-recorded human motion data and trajectory tracking can be used to control the motion of a humanoid robot for free-space, upper body gestures. However, the number of degrees of freedom, range of joint motion, and achievable joint velocities of today's humanoid robots are far more limited than those of the average human subject. In this paper, we explore a set of techniques for limiting human motion of upper body gestures to that achievable by a Sarcos humanoid robot located at ATR. We assess the quality of the results by comparing the motion of the human actor to that of the robot, both visually and quantitatively.
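A minimal sketch of the kind of limiting involved: clamp a recorded trajectory to the robot's joint ranges, then limit frame-to-frame change to the achievable joint velocities. This is illustrative only; the paper's techniques are more careful about preserving the character of the original motion.

```python
import numpy as np

def limit_motion(traj, q_min, q_max, v_max, dt):
    """Adapt a (n_frames, n_joints) joint trajectory to a robot with
    narrower joint ranges and velocity limits.

    First clip each frame to the joint range, then cap each step at the
    maximum joint displacement per frame (v_max * dt)."""
    out = np.clip(traj, q_min, q_max)
    for t in range(1, len(out)):
        step = np.clip(out[t] - out[t - 1], -v_max * dt, v_max * dt)
        out[t] = out[t - 1] + step
    return out
```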


International Conference on Computer Graphics and Interactive Techniques | 1997

Adapting simulated behaviors for new characters

Jessica K. Hodgins; Nancy S. Pollard

This paper describes an algorithm for automatically adapting existing simulated behaviors to new characters. Animating a new character is difficult because a control system tuned for one character will not, in general, work on a character with different limb lengths, masses, or moments of inertia. The algorithm presented here adapts the control system to a new character in two stages. First, the control system parameters are scaled based on the sizes, masses, and moments of inertia of the new and the original characters. Then a subset of the parameters is fine-tuned using a search process based on simulated annealing. To demonstrate the effectiveness of this approach, we animate the running motion of a woman, child, and imaginary character by modifying the control system for a man. We also animate the bicycling motion of a second imaginary character by modifying the control system for a man. We evaluate the results of this approach by comparing the motion of the simulated human runners with video of an actual child and with data for men, women, and children in the literature. In addition to adapting a control system for a new model, this approach can also be used to adapt the control system in an on-line fashion to produce a physically realistic metamorphosis from the original to the new model while the morphing character is performing the behavior. We demonstrate this on-line adaptation with a morph from a man to a woman over a period of twenty seconds. CR Categories: I.3.7 [Computer Graphics]: Three Dimensional Graphics and Realism: Animation—; G.1.6 [Numerical Analysis]: Optimization—; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Physically-Based Modeling
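The second, fine-tuning stage can be sketched with a generic simulated annealing loop over control parameters. The perturbation size, cooling schedule, and cost function below are placeholders, not the paper's; the first stage (scaling parameters by size, mass, and inertia) supplies the starting point.

```python
import math
import random

def anneal(params, cost, steps=2000, temp0=1.0, seed=0):
    """Fine-tune a parameter vector by simulated annealing: perturb one
    parameter at a time, keep improvements, and accept worse candidates
    with a probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    best = cur = list(params)
    best_cost = cur_cost = cost(cur)
    for step in range(steps):
        temp = temp0 * (1.0 - step / steps)   # linear cooling
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, 0.1)
        c = cost(cand)
        if c < cur_cost or (temp > 0 and rng.random() < math.exp((cur_cost - c) / temp)):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
    return best, best_cost
```

In the paper the cost would measure how well the simulated character performs the behavior; here any callable works.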


International Conference on Computer Graphics and Interactive Techniques | 2007

Responsive characters from motion fragments

James McCann; Nancy S. Pollard

In game environments, animated character motion must rapidly adapt to changes in player input - for example, if a directional signal from the player's gamepad is not incorporated into the character's trajectory immediately, the character may blithely run off a ledge. Traditional schemes for data-driven character animation lack the split-second reactivity required for this direct control; while they can be made to work, motion artifacts will result. We describe an on-line character animation controller that assembles a motion stream from short motion fragments, choosing each fragment based on current player input and the previous fragment. By adding a simple model of player behavior we are able to improve an existing reinforcement learning method for precalculating good fragment choices. We demonstrate the efficacy of our model by comparing the animation selected by our new controller to that selected by existing methods and to the optimal selection, given knowledge of the entire path. This comparison is performed over real-world data collected from a game prototype. Finally, we provide results indicating that occasional low-quality transitions between motion segments are crucial to high-quality on-line motion generation; this is an important result for others crafting animation systems for directly-controlled characters, as it argues against the common practice of transition thresholding.
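The offline precalculation can be illustrated with tabular value iteration over fragment-to-fragment transition qualities, assuming for simplicity a uniform model of player input; the paper's contribution includes a learned player-behavior model, which this toy omits.

```python
import numpy as np

def precompute_policy(quality, gamma=0.9, iters=100):
    """Precompute, for each (previous fragment, player input) pair, the next
    fragment that maximizes discounted transition quality.

    quality[i, u, j]: how well fragment j follows fragment i under input u.
    Assumes the player picks inputs uniformly at random."""
    n_frag, n_input, _ = quality.shape
    value = np.zeros(n_frag)
    for _ in range(iters):
        q = quality + gamma * value[None, None, :]
        # Best achievable value per input, averaged over (uniform) inputs.
        value = q.max(axis=2).mean(axis=1)
    policy = (quality + gamma * value[None, None, :]).argmax(axis=2)
    return policy
```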


Symposium on Computer Animation | 2005

Physically based grasping control from example

Nancy S. Pollard; Victor B. Zordan

Animated human characters in everyday scenarios must interact with the environment using their hands. Captured human motion can provide a database of realistic examples. However, examples involving contact are difficult to edit and retarget; realism can suffer when a grasp does not appear secure or when an apparent impact does not disturb the hand or the object. Physically based simulations can preserve plausibility through simulating interaction forces. However, such physical models must be driven by a controller, and creating effective controllers for new motion tasks remains a challenge. In this paper, we present a controller for physically based grasping that draws from motion capture data. Our controller explicitly includes passive and active components to uphold compliant yet controllable motion, and it adds compensation for movement of the arm and for gravity to make the behavior of passive and active components less dependent on the dynamics of arm motion. Given a set of motion capture grasp examples, our system solves for all but a small set of parameters for this controller automatically. We demonstrate results for tasks including grasping and two-hand interaction and show that a controller derived from a single motion capture example can be used to form grasps of different object geometries.
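The controller's decomposition can be sketched in a single line of joint-space control: an active term tracking a captured target pose, a passive damping term, and a compensation term. Gains and the compensation input are illustrative; the paper also compensates for arm movement and derives most parameters from the motion capture example automatically.

```python
import numpy as np

def grasp_torques(q, dq, q_target, kp, kd, gravity_torque):
    """Joint torques for a grasping hand: active stiffness toward the
    captured target pose, passive damping, plus gravity compensation so
    the active/passive behavior is less dependent on arm configuration."""
    return kp * (q_target - q) - kd * dq + gravity_torque
```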


International Conference on Computer Graphics and Interactive Techniques | 2008

Real-time gradient-domain painting

James McCann; Nancy S. Pollard

We present an image editing program which allows artists to paint in the gradient domain with real-time feedback on megapixel-sized images. Along with a pedestrian, though powerful, gradient-painting brush and gradient-clone tool, we introduce an edge brush designed for edge selection and replay. These brushes, coupled with special blending modes, allow users to accomplish global lighting and contrast adjustments using only local image manipulations --- e.g. strengthening a given edge or removing a shadow boundary. Such operations would be tedious in a conventional intensity-based paint program and hard for users to get right in the gradient domain without real-time feedback. The core of our paint program is a simple-to-implement GPU multigrid method which allows integration of megapixel-sized full-color gradient fields at over 20 frames per second on modest hardware. By way of evaluation, we present example images produced with our program and characterize the iteration time and convergence rate of our integration method.
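A one-dimensional analogue makes the gradient-domain idea concrete: differentiate a scanline, edit the gradients, then reintegrate. In one dimension the least-squares reintegration with a fixed left boundary is just a cumulative sum; the paper solves the two-dimensional version with a GPU multigrid Poisson solver.

```python
import numpy as np

def reintegrate_row(row, edit):
    """1-D sketch of gradient-domain editing: take a scanline's forward
    differences, let `edit` modify them (e.g. suppress a shadow edge),
    then rebuild the scanline by cumulative sum from the left boundary."""
    g = np.diff(row)                                   # gradient domain
    g = edit(g)                                        # "paint" on gradients
    return np.concatenate([[row[0]], row[0] + np.cumsum(g)])
```

Zeroing a single strong gradient removes the entire step it produced, which is why local gradient edits have the global effect the abstract describes.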


The International Journal of Robotics Research | 2004

Closure and Quality Equivalence for Efficient Synthesis of Grasps from Examples

Nancy S. Pollard

One goal of grasp selection for robotics is to choose contact points that guarantee properties such as force- or form-closure. Many efficient algorithms have been developed to address this problem, but most of these algorithms focus on grasps having a minimal number of contact points. Increasing the number of contacts can dramatically improve the quality and flexibility of grasps that are constructed. However, computation time becomes a problem, as grasp synthesis algorithms that can be generalized to an arbitrary number of contacts typically require time exponential in the number of contacts. This paper presents an efficient algorithm for synthesis of many-contact grasps. The key idea is to geometrically construct families of grasps around a single example such that all grasps within a family meet user-specified design goals. We show that our construction technique can be used to form force-closure grasps, partial force-closure grasps, and grasps above a quality threshold. Our approach requires time polynomial in the number of contacts, making it feasible to handle grasps with relatively large numbers of contacts. Results are shown for three-dimensional grasps with friction having five to twelve contacts and specialized for a variety of tasks. We have used this approach to design grasps for a robot hand and quasi-static manipulation plans for a humanoid robot.


IEEE Transactions on Visualization and Computer Graphics | 2007

Data-Driven Grasp Synthesis Using Shape Matching and Task-Based Pruning

Li Ying; Jiaxin L. Fu; Nancy S. Pollard

Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.
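The matching criterion, similar relative placements and surface normals, can be sketched as a score over a candidate correspondence between hand contact features and object surface features. Feature extraction and the search over correspondences, the hard parts of the paper's algorithm, are omitted, and the weighting of the two error terms is ours.

```python
import numpy as np

def match_score(hand_pts, hand_nrm, obj_pts, obj_nrm):
    """Lower is better: compare pairwise distances (relative placement of
    features) and alignment of unit surface normals for a candidate
    hand-feature-to-object-feature correspondence."""
    def pairwise(p):
        return np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
    placement_err = np.abs(pairwise(hand_pts) - pairwise(obj_pts)).mean()
    normal_err = 1.0 - (hand_nrm * obj_nrm).sum(axis=1).mean()  # 0 when aligned
    return placement_err + normal_err
```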


Knowledge Discovery and Data Mining | 2009

DynaMMo: mining and summarization of coevolving sequences with missing values

Lei Li; James McCann; Nancy S. Pollard; Christos Faloutsos

Given multiple time sequences with missing values, we propose DynaMMo, which summarizes, compresses, and finds latent variables. The idea is to discover hidden variables and learn their dynamics, making our algorithm able to function even when there are missing values. We performed experiments on both real and synthetic datasets spanning several megabytes, including motion capture sequences and chlorine levels in drinking water. We show that our proposed DynaMMo method (a) can successfully learn the latent variables and their evolution; (b) can provide high compression for little loss of reconstruction accuracy; (c) can extract compact but powerful features for segmentation, interpretation, and forecasting; (d) has complexity linear in the duration of the sequences.
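An EM-flavored sketch of the core loop: alternately fit linear dynamics and re-impute missing entries from the model's predictions. The real method learns a latent-variable linear dynamical system with Kalman smoothing; this simplified version works directly in observation space.

```python
import numpy as np

def fill_missing(seq, mask, iters=20):
    """Impute missing values in a (T, d) sequence by alternating:
    (1) fit a transition z[t+1] = z[t] @ A by least squares,
    (2) replace missing entries with the model's one-step predictions.

    mask: boolean (T, d) array, True where the value is observed."""
    z = seq.copy()
    z[~mask] = 0.0                        # crude initialization
    for _ in range(iters):
        # Fit A minimizing ||z[1:] - z[:-1] @ A||.
        A, *_ = np.linalg.lstsq(z[:-1], z[1:], rcond=None)
        pred = z[:-1] @ A
        # Overwrite only the missing entries; observed data is kept.
        z[1:][~mask[1:]] = pred[~mask[1:]]
    return z
```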

Collaboration


Dive into Nancy S. Pollard's collaborations.

Top Co-Authors

Lillian Y. Chang, Carnegie Mellon University
Michael C. Koval, Carnegie Mellon University
Junggon Kim, Carnegie Mellon University
J. Andrew Bagnell, Carnegie Mellon University