Jessica K. Hodgins
Carnegie Mellon University
Publications
Featured research published by Jessica K. Hodgins.
international conference on computer graphics and interactive techniques | 2002
Jehee Lee; Jinxiang Chai; Paul S. A. Reitsma; Jessica K. Hodgins; Nancy S. Pollard
Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.
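The core preprocessing step, identifying plausible transitions between motion segments, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pose representation (flat joint-angle vectors), the Euclidean distance metric, and the threshold are all simplifying assumptions.

```python
import numpy as np

def find_transitions(frames, threshold):
    """Identify plausible transition points in a motion database.

    frames: (N, D) array of pose vectors (e.g. joint angles).
    Returns (i, j) pairs where frame i could plausibly transition
    to frame j: the pose distance is below `threshold` and the
    frames are not adjacent in the original clip.
    """
    transitions = []
    n = len(frames)
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 1:  # skip trivial same-clip neighbours
                if np.linalg.norm(frames[i] - frames[j]) < threshold:
                    transitions.append((i, j))
    return transitions
```

The resulting pairs define the edges of the motion graph; the paper additionally clusters the graph so that it can be searched efficiently at run time.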
international conference on computer graphics and interactive techniques | 1995
Jessica K. Hodgins; Wayne L. Wooten; David C. Brogan; James F. O'Brien
This paper describes algorithms for the animation of male and female models performing three dynamic athletic behaviors: running, bicycling, and vaulting. We animate these behaviors using control algorithms that cause a physically realistic model to perform the desired maneuver. For example, control algorithms allow the simulated humans to maintain balance while moving their arms, to run or bicycle at a variety of speeds, and to perform two vaults. For each simulation, we compare the computed motion to that of humans performing similar maneuvers. We perform the comparison both qualitatively through real and simulated video images and quantitatively through simulated and biomechanical data.
international conference on computer graphics and interactive techniques | 1997
Joe Marks; Brad Andalman; Paul A. Beardsley; William T. Freeman; Jessica K. Hodgins; T. Kang; Brian Mirtich; Hanspeter Pfister; Wheeler Ruml; Kathy Ryall; Joshua E. Seims; Stuart M. Shieber
Image rendering maps scene parameters to output pixel values; animation maps motion-control parameters to trajectory values. Because these mapping functions are usually multidimensional, nonlinear, and discontinuous, finding input parameters that yield desirable output values is often a painful process of manual tweaking. Interactive evolution and inverse design are two general methodologies for computer-assisted parameter setting in which the computer plays a prominent role. In this paper we present another such methodology. Design Gallery™ (DG) interfaces present the user with the broadest selection, automatically generated and organized, of perceptually different graphics or animations that can be produced by varying a given input-parameter vector. The principal technical challenges posed by the DG approach are dispersion, finding a set of input-parameter vectors that optimally disperses the resulting output-value vectors, and arrangement, organizing the resulting graphics for easy and intuitive browsing by the user. We describe the use of DG interfaces for several parameter-setting problems: light selection and placement for image rendering, both standard and image-based; opacity and color transfer-function specification for volume rendering; and motion control for particle-system and articulated-figure animation. CR Categories: I.2.6 [Artificial Intelligence]: Problem Solving, Control Methods and Search—heuristic methods; I.3.6 [Computer Graphics]: Methodology and Techniques—interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
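One simple way to approach the dispersion problem the abstract names is farthest-point selection over the candidate output vectors. This is a hedged sketch of that general idea, not the paper's own dispersion optimization:

```python
import numpy as np

def disperse(outputs, n):
    """Greedy farthest-point selection.

    outputs: (N, D) array of output-value vectors, one per
    candidate input-parameter vector. Returns indices of n
    candidates chosen so each new pick is as far as possible
    from those already selected, spreading the gallery over
    the space of perceptually different results.
    """
    chosen = [0]  # arbitrary seed
    for _ in range(n - 1):
        # distance from every candidate to its nearest chosen point
        d = np.min(
            [np.linalg.norm(outputs - outputs[c], axis=1) for c in chosen],
            axis=0)
        chosen.append(int(np.argmax(d)))
    return chosen
```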
international conference on computer graphics and interactive techniques | 2002
James F. O'Brien; Adam W. Bargteil; Jessica K. Hodgins
This video demonstrates a method for realistically animating ductile fracture in common solid materials such as plastics and metals. The effects that characterize ductile fracture occur due to interactions between yielding plastic and the fracture process. By modeling this interaction, this ductile fracture method can generate realistic motion for a much wider range of materials than could be realized with a purely brittle model.
international conference on computer graphics and interactive techniques | 2004
Alla Safonova; Jessica K. Hodgins; Nancy S. Pollard
Optimization is an appealing way to compute the motion of an animated character because it allows the user to specify the desired motion in a sparse, intuitive way. The difficulty of solving this problem for complex characters such as humans is due in part to the high dimensionality of the search space. The dimensionality is an artifact of the problem representation because most dynamic human behaviors are intrinsically low dimensional with, for example, legs and arms operating in a coordinated way. We describe a method that exploits this observation to create an optimization problem that is easier to solve. Our method utilizes an existing motion capture database to find a low-dimensional space that captures the properties of the desired behavior. We show that when the optimization problem is solved within this low-dimensional subspace, a sparse sketch can be used as an initial guess and full physics constraints can be enabled. We demonstrate the power of our approach with examples of forward, vertical, and turning jumps; with running and walking; and with several acrobatic flips.
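The idea of finding a low-dimensional space from a motion capture database can be sketched with plain PCA. This is an assumed, minimal stand-in for whatever dimensionality reduction the paper uses; the point is only that optimization variables live in the k-dimensional subspace rather than the full pose space:

```python
import numpy as np

def learn_subspace(motion_db, k):
    """PCA over a motion database (rows are example pose vectors).

    Returns the mean pose and a (k, D) basis spanning the
    low-dimensional space that captures the behavior.
    """
    mean = motion_db.mean(axis=0)
    _, _, vt = np.linalg.svd(motion_db - mean, full_matrices=False)
    return mean, vt[:k]

def to_subspace(x, mean, basis):
    """Project a full pose into k-dimensional coordinates."""
    return (x - mean) @ basis.T

def from_subspace(z, mean, basis):
    """Reconstruct a full pose from subspace coordinates."""
    return z @ basis + mean
```

An optimizer would then search over the coordinates `z` (with physics constraints evaluated on `from_subspace(z, ...)`), a much smaller problem than searching over every joint angle independently.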
international conference on robotics and automation | 2005
Joel E. Chestnutt; Manfred Lau; German K. M. Cheung; James J. Kuffner; Jessica K. Hodgins; Takeo Kanade
Despite the recent achievements in stable dynamic walking for many humanoid robots, relatively little navigation autonomy has been achieved. In particular, the ability to autonomously select foot placement positions to avoid obstacles while walking is an important step towards improved navigation autonomy for humanoids. We present a footstep planner for the Honda ASIMO humanoid robot that plans a sequence of footstep positions to navigate toward a goal location while avoiding obstacles. The possible future foot placement positions are dependent on the current state of the robot. Using a finite set of state-dependent actions, we use an A* search to compute optimal sequences of footstep locations up to a time-limited planning horizon. We present experimental results demonstrating the robot navigating through both static and dynamic known environments that include obstacles moving on predictable trajectories.
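The A* search over a finite, state-dependent action set can be sketched as follows. This is a toy version on a 2-D grid with a fixed action set and uniform step cost; the actual planner reasons about full stance states, foot geometry, and a time-limited horizon:

```python
import heapq

def plan_footsteps(start, goal, actions, obstacles, max_expansions=10000):
    """A* over discrete footstep placements.

    start, goal: (x, y) cells; actions: (dx, dy) relative placements
    reachable from the current stance; obstacles: set of blocked
    cells. Returns a footstep sequence from start to goal, or None.
    """
    def h(s):  # Chebyshev distance: admissible for unit-cost steps
        return max(abs(s[0] - goal[0]), abs(s[1] - goal[1]))

    open_set = [(h(start), 0, start, [start])]
    best = {start: 0}
    while open_set and max_expansions > 0:
        max_expansions -= 1
        _, g, s, path = heapq.heappop(open_set)
        if s == goal:
            return path
        for dx, dy in actions:
            nxt = (s[0] + dx, s[1] + dy)
            if nxt in obstacles:
                continue
            ng = g + 1
            if ng < best.get(nxt, float('inf')):
                best[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

Replanning from the robot's current state on each cycle is what lets this style of planner handle the moving obstacles mentioned in the abstract.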
international conference on computer graphics and interactive techniques | 2005
Jinxiang Chai; Jessica K. Hodgins
This paper introduces an approach to performance animation that employs video cameras and a small set of retro-reflective markers to create a low-cost, easy-to-use system that might someday be practical for home use. The low-dimensional control signals from the user's performance are supplemented by a database of pre-recorded human motion. At run time, the system automatically learns a series of local models from a set of motion capture examples that are a close match to the marker locations captured by the cameras. These local models are then used to reconstruct the motion of the user as a full-body animation. We demonstrate the power of this approach with real-time control of six different behaviors using two video cameras and a small set of retro-reflective markers. We compare the resulting animation to animation from commercial motion capture equipment with a full set of markers.
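The local-model idea, reconstructing a full pose from a sparse marker observation using nearby database examples, can be sketched very simply. The paper fits richer local models at run time; this assumed illustration just takes a distance-weighted average of the k closest examples:

```python
import numpy as np

def reconstruct_pose(markers, db_markers, db_poses, k=4, eps=1e-8):
    """Reconstruct a full-body pose from sparse marker positions.

    markers: (M,) observed marker vector; db_markers: (N, M) marker
    vectors of the motion-capture examples; db_poses: (N, D) the
    corresponding full poses. Returns a (D,) pose: the distance-
    weighted average of the k nearest examples.
    """
    dists = np.linalg.norm(db_markers - markers, axis=1)
    idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[idx] + eps)   # closer examples dominate
    w /= w.sum()
    return w @ db_poses[idx]
```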
international conference on robotics and automation | 2002
Nancy S. Pollard; Jessica K. Hodgins; Marcia Riley; Christopher G. Atkeson
Using pre-recorded human motion and trajectory tracking, we can control the motion of a humanoid robot for free-space, upper-body gestures. However, the number of degrees of freedom, range of joint motion, and achievable joint velocities of today's humanoid robots are far more limited than those of the average human subject. In this paper, we explore a set of techniques for limiting human motion of upper-body gestures to that achievable by a Sarcos humanoid robot located at ATR. We assess the quality of the results by comparing the motion of the human actor to that of the robot, both visually and quantitatively.
symposium on computer animation | 2002
Victor B. Zordan; Jessica K. Hodgins
Controllable, reactive human motion is essential in many video games and training environments. Characters in these applications often perform tasks based on modified motion data, but response to unpredicted events is also important in order to maintain realism. We approach the problem of motion synthesis for interactive, humanlike characters by combining dynamic simulation and human motion capture data. Our control systems use trajectory tracking to follow motion capture data and a balance controller to keep the character upright while modifying sequences from a small motion library to accomplish specified tasks, such as throwing punches or swinging a racket. The system reacts to forces computed from a physical collision model by changing stiffness and damping terms. The freestanding, simulated humans respond automatically to impacts and smoothly return to tracking. We compare the resulting motion with video and recorded human data.
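The control scheme described here, trajectory tracking of motion capture data whose stiffness and damping drop at impact and recover afterwards, can be sketched as a per-joint PD servo with a gain schedule. The gain floor and linear ramp are assumptions for illustration, not the paper's tuned values:

```python
def pd_torque(q, qdot, q_ref, qdot_ref, kp, kd):
    """Joint torque tracking a motion-capture reference.

    q, qdot: current joint angle and velocity; q_ref, qdot_ref:
    the reference pose from the motion clip; kp is the stiffness
    and kd the damping of the servo.
    """
    return kp * (q_ref - q) + kd * (qdot_ref - qdot)

def impact_gains(kp0, kd0, t_since_impact, recover_time, floor=0.1):
    """Gain schedule after a collision.

    Gains drop to a fraction `floor` of nominal at the moment of
    impact, so the character yields to the force, then ramp back
    linearly to full tracking over `recover_time` seconds.
    """
    a = min(max(t_since_impact / recover_time, 0.0), 1.0)
    scale = floor + (1.0 - floor) * a
    return kp0 * scale, kd0 * scale
```

At each simulation step the controller would call `impact_gains` to get the current stiffness and damping, then `pd_torque` per joint, which is how the simulated humans respond to impacts and smoothly return to tracking.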
international conference on computer graphics and interactive techniques | 2004
Katsu Yamane; James J. Kuffner; Jessica K. Hodgins
Even such simple tasks as placing a box on a shelf are difficult to animate, because the animator must carefully position the character to satisfy geometric and balance constraints while creating motion to perform the task with a natural-looking style. In this paper, we explore an approach for animating characters manipulating objects that combines the power of path planning with the domain knowledge inherent in data-driven, constraint-based inverse kinematics. A path planner is used to find a motion for the object such that the corresponding poses of the character satisfy geometric, kinematic, and posture constraints. The inverse kinematics computation of the character's pose resolves redundancy by biasing the solution toward natural-looking poses extracted from a database of captured motions. Having this database greatly helps to increase the quality of the output motion. The computed path is converted to a motion trajectory using a model of the velocity profile. We demonstrate the effectiveness of the algorithm by generating animations across a wide range of scenarios that cover variations in the geometric, kinematic, and dynamic models of the character, the manipulated object, and obstacles in the scene.