Publication


Featured research published by David B. Grimes.


Human Factors in Computing Systems | 2008

Feasibility and pragmatics of classifying working memory load with an electroencephalograph

David B. Grimes; Desney S. Tan; Scott E. Hudson; Pradeep Shenoy; Rajesh P. N. Rao

A reliable and unobtrusive measurement of working memory load could be used to evaluate the efficacy of interfaces and to provide real-time user-state information to adaptive systems. In this paper, we describe an experiment we conducted to explore some of the issues around using an electroencephalograph (EEG) for classifying working memory load. Within this experiment, we present our classification methodology, including a novel feature selection scheme that seems to alleviate the need for complex drift modeling and artifact rejection. We demonstrate classification accuracies of up to 99% for 2 memory load levels and up to 88% for 4 levels. We also present results suggesting that we can do this with shorter windows, much less training data, and a smaller number of EEG channels than reported previously. Finally, we show results suggesting that the models we construct transfer across variants of the task, implying some level of generality. We believe these findings extend prior work and bring us a step closer to the use of such technologies in HCI research.
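
The paper's exact pipeline is not reproduced here, but the general recipe it builds on can be sketched briefly: extract spectral band-power features from short EEG windows and feed them to a linear classifier. Everything below (sampling rate, band choices, array shapes) is an illustrative assumption, not the authors' configuration.

    # Minimal sketch of EEG workload classification (assumed details,
    # not the paper's pipeline): log band-power features per window and
    # channel, fed to a linear classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    FS = 256  # sampling rate in Hz (assumption)
    BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

    def band_power_features(windows):
        """windows: (n_windows, n_channels, n_samples) of raw EEG."""
        spectra = np.abs(np.fft.rfft(windows, axis=-1)) ** 2
        freqs = np.fft.rfftfreq(windows.shape[-1], d=1.0 / FS)
        feats = [np.log(spectra[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
                 for lo, hi in BANDS.values()]
        return np.concatenate(feats, axis=-1)  # (n_windows, n_channels * n_bands)

    # X: EEG windows, y: memory-load labels (e.g., 2 or 4 levels)
    # clf = LogisticRegression(max_iter=1000).fit(band_power_features(X), y)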


Neural Computation | 2005

Bilinear Sparse Coding for Invariant Vision

David B. Grimes; Rajesh P. N. Rao

Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. We describe an unsupervised algorithm for learning both localized features and their transformations directly from images using a sparse bilinear generative model. We show that from an arbitrary set of natural images, the algorithm produces oriented basis filters that can simultaneously represent features in an image and their transformations. The learned generative model can be used to translate features to different locations, thereby reducing the need to learn the same feature at multiple locations, a limitation of previous approaches to sparse coding and ICA. Our results suggest that by explicitly modeling the interaction between local image features and their transformations, the sparse bilinear approach can provide a basis for achieving transformation-invariant vision.
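
In notation of my own choosing (the paper's exact formulation may differ), a bilinear generative model of this kind represents an image I with two coefficient vectors over a learned basis tensor:

    I = \sum_{i=1}^{m} \sum_{j=1}^{n} w_i \, x_j \, \mathbf{b}_{ij}

where the w_i code which features are present, the x_j code how they are transformed (e.g., translated), and sparseness priors are placed on both w and x. Holding w fixed and varying x moves a feature without relearning it, which is the source of the translation invariance the abstract describes.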


Robotics: Science and Systems | 2006

Dynamic Imitation in a Humanoid Robot through Nonparametric Probabilistic Inference

David B. Grimes; Rawichote Chalodhorn; Rajesh P. N. Rao

We tackle the problem of learning imitative whole-body motions in a humanoid robot using probabilistic inference in Bayesian networks. Our inference-based approach affords a straightforward method to exploit rich yet uncertain prior information obtained from human motion capture data. Dynamic imitation implies that the robot must interact with its environment and account for forces such as gravity and inertia during imitation. Rather than explicitly modeling these forces and the body of the humanoid as in traditional approaches, we show that stable imitative motion can be achieved by learning a sensor-based representation of dynamic balance. Bayesian networks provide a sound theoretical framework for combining prior kinematic information (from observing a human demonstrator) with prior dynamic information (based on previous experience) to model and subsequently infer motions which, with high probability, will be dynamically stable. By posing the problem as one of inference in a Bayesian network, we show that methods developed for approximate inference can be leveraged to efficiently perform inference of actions. Additionally, by using nonparametric inference and a nonparametric (Gaussian process) forward model, our approach does not make any strong assumptions about the physical environment or the mass and inertial properties of the humanoid robot. We propose an iterative, probabilistically constrained algorithm for exploring the space of motor commands and show that the algorithm can quickly discover dynamically stable actions for whole-body imitation of human motion. Experimental results based on simulation and subsequent execution by a HOAP-2 humanoid robot demonstrate that our algorithm is able to imitate a human performing actions such as squatting and a one-legged balance.
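
As a rough illustration of the nonparametric forward model and the probabilistic constraint (my simplification, with assumed shapes and thresholds): fit a Gaussian process from motor commands to a sensed balance signal, then prefer candidate commands predicted to be stable while rejecting those the model is still too uncertain about.

    # Illustrative sketch, not the authors' implementation.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def fit_forward_model(commands, balance):
        """commands: (n, d) executed motor commands; balance: (n,) e.g. torso tilt."""
        return GaussianProcessRegressor(normalize_y=True).fit(commands, balance)

    def select_action(gp, candidates, max_std=0.05):
        """Pick the candidate with the most stable predicted outcome, capped
        by predictive uncertainty (the probabilistic constraint)."""
        mean, std = gp.predict(candidates, return_std=True)
        score = np.abs(mean)           # deviation from upright (assumed target 0)
        score[std > max_std] = np.inf  # rule out poorly modeled commands
        return candidates[int(np.argmin(score))]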


Neural Networks | 2006

A probabilistic model of gaze imitation and shared attention

Matthew W. Hoffman; David B. Grimes; Aaron P. Shon; Rajesh P. N. Rao

An important component of language acquisition and cognitive learning is gaze imitation. Infants as young as one year of age can follow the gaze of an adult to determine the object the adult is focusing on. The ability to follow gaze is a precursor to shared attention, wherein two or more agents simultaneously focus their attention on a single object in the environment. Shared attention is a necessary skill for many complex, natural forms of learning, including learning based on imitation. This paper presents a probabilistic model of gaze imitation and shared attention that is inspired by Meltzoff and Moore's AIM model for imitation in infants. Our model combines a probabilistic algorithm for estimating gaze vectors with bottom-up saliency maps of visual scenes to produce maximum a posteriori (MAP) estimates of objects being looked at by an observed instructor. We test our model using a robotic system involving a pan-tilt camera head and show that combining saliency maps with gaze estimates leads to greater accuracy than using gaze alone. We additionally show that the system can learn instructor-specific probability distributions over objects, leading to increasing gaze accuracy over successive interactions with the instructor. Our results provide further support for probabilistic models of imitation and suggest new ways of implementing robotic systems that can interact with humans over an extended period of time.
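
The MAP combination the abstract describes can be sketched in a few lines (the notation and the angular-likelihood form are my assumptions): weight each candidate object's saliency by how well its position agrees with the estimated gaze vector, and take the argmax.

    # Illustrative sketch of the MAP attended-object estimate.
    import numpy as np

    def map_attended_object(saliency, gaze_origin, gaze_dir, locations, sigma=0.2):
        """saliency: (n,) prior per candidate; locations: (n, 3) 3D positions."""
        rel = locations - gaze_origin
        rel = rel / np.linalg.norm(rel, axis=1, keepdims=True)
        # likelihood falls off with angular deviation from the gaze direction
        cos_dev = rel @ (gaze_dir / np.linalg.norm(gaze_dir))
        likelihood = np.exp((cos_dev - 1.0) / sigma ** 2)
        posterior = saliency * likelihood
        return int(np.argmax(posterior))  # MAP index of the attended object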


International Conference on Robotics and Automation | 2006

Learning humanoid motion dynamics through sensory-motor mapping in reduced dimensional spaces

Rawichote Chalodhorn; David B. Grimes; Gabriel Maganis; Rajesh P. N. Rao; Minoru Asada

Optimization of robot dynamics for a given human motion is an intuitive way to approach the problem of learning complex human behavior by imitation. In this paper, we propose a learning-based methodology that optimizes humanoid dynamics in a low-dimensional subspace. We compactly represent the kinematic information of humanoid motion in a low-dimensional subspace. Motor commands in the low-dimensional subspace are mapped to the expected sensory feedback. We select optimal motor commands, based on this sensory-motor mapping, that also satisfy our kinematic constraints. Finally, we obtain a set of novel postures that result in superior motion dynamics compared to the initial motion. We demonstrate results of the optimized motion on both a dynamics simulator and a real humanoid robot.
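
A minimal sketch of the subspace-plus-sensory-mapping idea, under assumed interfaces (PCA for the reduction and a nearest-neighbor regressor for the learned sensory map; the paper's actual components may differ):

    # Illustrative sketch, not the paper's code.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsRegressor

    # pca = PCA(n_components=3).fit(joint_angle_frames)   # posture subspace
    # Z = pca.transform(joint_angle_frames)               # (n_frames, 3)
    # sensor_model = KNeighborsRegressor().fit(Z, sensed_accel)

    def pick_command(sensor_model, candidates, reference, max_drift=0.5):
        """Among subspace commands within max_drift of the reference posture
        (the kinematic constraint), pick the one whose predicted sensory
        feedback is smallest in magnitude (a stand-in stability score)."""
        ok = np.linalg.norm(candidates - reference, axis=1) <= max_drift
        cost = np.linalg.norm(sensor_model.predict(candidates), axis=1)
        cost[~ok] = np.inf
        return candidates[int(np.argmin(cost))]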


Creating Brain-Like Intelligence | 2009

Learning Actions through Imitation and Exploration: Towards Humanoid Robots That Learn from Humans

David B. Grimes; Rajesh P. N. Rao

A prerequisite for achieving brain-like intelligence is the ability to rapidly learn new behaviors and actions. A fundamental mechanism for rapid learning in humans is imitation: children routinely learn new skills (e.g., opening a door or tying a shoelace) by imitating their parents; adults continue to learn by imitating skilled instructors (e.g., in tennis). In this chapter, we propose a probabilistic framework for imitation learning in robots that is inspired by how humans learn from imitation and exploration. Rather than relying on complex (and often brittle) physics-based models, the robot learns a dynamic Bayesian network that captures its dynamics directly in terms of sensor measurements and actions during an imitation-guided exploration phase. After learning, actions are selected based on probabilistic inference in the learned Bayesian network. We present results demonstrating that a 25-degree-of-freedom humanoid robot can learn dynamically stable, full-body imitative motions simply by observing a human demonstrator.


International Conference on Robotics and Automation | 2005

Probabilistic Gaze Imitation and Saliency Learning in a Robotic Head

Aaron P. Shon; David B. Grimes; Chris L. Baker; Matthew W. Hoffman; Shengli Zhou; Rajesh P. N. Rao

Imitation is a powerful mechanism for transferring knowledge from an instructor to a naïve observer, one that is deeply contingent on a state of shared attention between these two agents. In this paper we present Bayesian algorithms that implement the core of an imitation learning framework. We use gaze imitation, coupled with task-dependent saliency learning, to build a state of shared attention between the instructor and observer. We demonstrate the performance of our algorithms in a gaze following and saliency learning task implemented on an active vision robotic head. Our results suggest that the ability to follow gaze and learn instructor- and task-specific saliency models could play a crucial role in building systems capable of complex forms of human-robot interaction.
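
One simple way to realize the instructor- and task-specific saliency learning mentioned above (an assumption about mechanism, not the paper's algorithm) is an exponentially weighted update of per-feature saliency weights each time the instructor's gaze is resolved to an object:

    # Illustrative saliency-weight update.
    import numpy as np

    def update_saliency(weights, attended_feature, lr=0.1):
        """weights: (n_features,) nonnegative saliency weights; boost the
        feature of the attended object, decay the rest, renormalize."""
        weights = (1.0 - lr) * weights
        weights[attended_feature] += lr
        return weights / weights.sum()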


Intelligent Robots and Systems | 2007

Learning full-body motions from monocular vision: dynamic imitation in a humanoid robot

Jeffrey B. Cole; David B. Grimes; Rajesh P. N. Rao

In an effort to ease the burden of programming motor commands for humanoid robots, a computer vision technique is developed for converting a monocular video sequence of human poses into stabilized motor commands for a humanoid robot. The human teacher wears a multi-colored body suit while performing a desired set of actions. Leveraging the colors of the body suit, the system detects the most probable locations of the different body parts and joints in the image. Then, by exploiting the known dimensions of the body suit, a user-specified number of candidate 3D poses are generated for each frame. Using human-to-robot joint correspondences, the estimated 3D poses for each frame are then mapped to corresponding robot motor commands. An initial set of kinematically valid motor commands is generated using an approximate best path search through the pose candidates for each frame. Finally, a learning-based probabilistic dynamic balance model yields a dynamically stable imitative sequence of motor commands. We demonstrate the viability of the approach by presenting results showing full-body imitation of human actions by a Fujitsu HOAP-2 humanoid robot.
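
The "best path search through the pose candidates" is, in spirit, a shortest-path problem over per-frame candidates; a standard Viterbi-style dynamic program over a joint-space smoothness cost (my choice of cost; the paper's search is approximate) looks like this:

    # Illustrative best-path search over pose candidates.
    # candidates[t] is an (n_t, d) array of 3D pose hypotheses for frame t.
    import numpy as np

    def best_pose_path(candidates):
        cost = [np.zeros(len(candidates[0]))]
        back = []
        for t in range(1, len(candidates)):
            # transition cost: joint-space distance between consecutive poses
            d = np.linalg.norm(candidates[t][:, None, :]
                               - candidates[t - 1][None, :, :], axis=2)
            total = d + cost[-1][None, :]
            back.append(total.argmin(axis=1))
            cost.append(total.min(axis=1))
        path = [int(cost[-1].argmin())]
        for ptr in reversed(back):
            path.append(int(ptr[path[-1]]))
        return path[::-1]  # one candidate index per frame, smoothest sequence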


IEEE-RAS International Conference on Humanoid Robots | 2005

Learning dynamic humanoid motion using predictive control in low dimensional subspaces

Rawichote Chalodhorn; David B. Grimes; Gabriel Maganis; Rajesh P. N. Rao

Imitation of complex human motion by a humanoid robot has long been recognized as an important problem in robotics. The problem is particularly difficult when body dynamics such as balance and stability must be taken into account during imitation. In this paper, we present a framework applicable to the problem of imitating an input motion while simultaneously considering dynamic motion stability. Our framework leverages two main components. First, dimensionality reduction techniques allow for efficient and compact state and control signal representations. Second, a learning-based predictive control architecture generates novel motions by optimizing over expected sensory signals. We demonstrate results on modifying an input walking gait to achieve both faster and more stable walking.
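
A toy version of the predictive-control loop (heavily simplified; sampling-based perturbation is my stand-in for the paper's optimization): at each step, score one-step-ahead predictions for perturbed versions of the nominal gait command and execute the best.

    # Illustrative one-step predictive control in the subspace.
    import numpy as np

    def one_step_predictive_control(predict, nominal, rng, n_samples=32, scale=0.05):
        """predict: callable mapping a subspace command to expected sensory cost."""
        perturbed = nominal + scale * rng.standard_normal((n_samples, nominal.size))
        costs = np.array([predict(c) for c in perturbed])
        return perturbed[int(np.argmin(costs))]

    # rng = np.random.default_rng(0)
    # next_cmd = one_step_predictive_control(predict_fn, current_cmd, rng)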


Intelligent Robots and Systems | 2008

Learning nonparametric policies by imitation

David B. Grimes; Rajesh P. N. Rao

A long-cherished goal in artificial intelligence has been the ability to endow a robot with the capacity to learn and generalize skills from watching a human teacher. Such an ability to learn by imitation has remained hard to achieve due to a number of factors, including the problem of learning in high-dimensional spaces and the problem of uncertainty. In this paper, we propose a new probabilistic approach to the problem of teaching a high degree-of-freedom robot (in particular, a humanoid robot) flexible and generalizable skills via imitation of a human teacher. The robot uses inference in a graphical model to learn sensor-based dynamics and infer a stable plan from a teacher's demonstration of an action. The novel contribution of this work is a method for learning a nonparametric policy which generalizes a fixed action plan to operate over a continuous space of task variation. A notable feature of the approach is that it does not require any knowledge of the physics of the robot or the environment. By leveraging advances in probabilistic inference and Gaussian process regression, the method produces a nonparametric policy for sensor-based feedback control in continuous state and action spaces. We present experimental and simulation results using a Fujitsu HOAP-2 humanoid robot demonstrating imitation-based learning of a task involving lifting objects of different weights from a single human demonstration.
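
The nonparametric policy itself can be sketched as Gaussian process regression from (sensor state, task variable) pairs to actions, which is the general technique the abstract names; the training-data construction below is an assumption for illustration:

    # Illustrative nonparametric policy via GP regression.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    # X: rows of [sensor state, task variable e.g. object weight]; Y: actions
    # policy = GaussianProcessRegressor(normalize_y=True).fit(X, Y)

    def act(policy, state, task_value):
        """Query the learned policy at a possibly unseen task setting."""
        query = np.concatenate([state, [task_value]])[None, :]
        return policy.predict(query)[0]  # continuous action vector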

Collaboration


Dive into David B. Grimes's collaborations.

Top Co-Authors

Aaron P. Shon
University of Washington

Chris L. Baker
Massachusetts Institute of Technology

Keith Grochow
University of Washington