
Publication


Featured research published by Aaron P. Shon.


Archive | 2007

Imitation and Social Learning in Robots, Humans and Animals: A Bayesian model of imitation in infants and robots

Rajesh P. N. Rao; Aaron P. Shon; Andrew N. Meltzoff

Learning through imitation is a powerful and versatile method for acquiring new behaviors. In humans, a wide range of behaviors, from styles of social interaction to tool use, are passed from one generation to another through imitative learning. Although imitation evolved through Darwinian means, it achieves Lamarckian ends: it is a mechanism for the inheritance of acquired characteristics. Unlike trial-and-error-based learning methods such as reinforcement learning, imitation allows rapid learning. The potential for rapid behavior acquisition through demonstration has made imitation learning an increasingly attractive alternative to manually programming robots. In this chapter, we review recent results on how infants learn through imitation and discuss Meltzoff and Moore's four-stage progression of imitative abilities: (i) body babbling, (ii) imitation of body movements, (iii) imitation of actions on objects, and (iv) imitation based on inferring intentions of others. We formalize these four stages within a probabilistic framework for learning and inference. The framework acknowledges the role of internal models in sensorimotor control and draws on recent ideas from the field of machine learning regarding Bayesian inference in graphical models. We highlight two advantages of the probabilistic approach: (1) the development of new algorithms for imitation-based learning in robots acting in noisy and uncertain environments, and (2) the potential for using Bayesian methodologies (such as manipulation of prior probabilities) and robotic technologies to deepen our understanding of imitative learning in humans.
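The core of such a probabilistic imitation framework can be illustrated with a toy sketch: given an internal forward model P(s'|s, a) (of the kind body babbling would acquire), the imitator inverts an observed state transition to infer which action the demonstrator took. All states, actions, and probabilities below are invented for illustration.

```python
import numpy as np

# Hypothetical discrete world: 3 states, 2 actions.
# forward_model[a][s, s'] = P(s' | s, a) -- the imitator's internal model.
forward_model = {
    0: np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.9, 0.1],
                 [0.1, 0.0, 0.9]]),
    1: np.array([[0.1, 0.0, 0.9],
                 [0.9, 0.1, 0.0],
                 [0.0, 0.9, 0.1]]),
}

def infer_action(s, s_next, prior=None):
    """Bayesian action inference: P(a | s, s') is proportional to
    P(s' | s, a) P(a). Inverting the forward model lets the imitator
    recover the action behind an observed demonstration."""
    actions = sorted(forward_model)
    if prior is None:
        prior = np.ones(len(actions)) / len(actions)
    post = np.array([forward_model[a][s, s_next] for a in actions]) * prior
    return post / post.sum()

# The demonstrator moved the world from state 0 to state 2;
# only action 1 makes that transition likely.
posterior = infer_action(0, 2)
```

Manipulating the `prior` argument is where the "manipulation of prior probabilities" the chapter mentions would enter: a biased prior over actions shifts the inferred demonstration.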


Neural Networks | 2010

"Social" robots are psychological agents for infants: A test of gaze following

Andrew N. Meltzoff; Rechele Brooks; Aaron P. Shon; Rajesh P. N. Rao

Gaze following is a key component of human social cognition. Gaze following directs attention to areas of high information value and accelerates social, causal, and cultural learning. An issue for both robotic and infant learning is whose gaze to follow. The hypothesis tested in this study is that infants use information derived from an entity's interactions with other agents as evidence about whether that entity is a perceiver. A robot was programmed so that it could engage in communicative, imitative exchanges with an adult experimenter. Infants who saw the robot act in this social-communicative fashion were more likely to follow its line of regard than those without such experience. Infants use prior experience with the robot's interactions as evidence that the robot is a psychological agent that can see. Infants want to look at what the robot is seeing, and thus shift their visual attention to the external target.


Neural Networks | 2006

2006 Special issue: A probabilistic model of gaze imitation and shared attention

Matthew W. Hoffman; David B. Grimes; Aaron P. Shon; Rajesh P. N. Rao

An important component of language acquisition and cognitive learning is gaze imitation. Infants as young as one year of age can follow the gaze of an adult to determine the object the adult is focusing on. The ability to follow gaze is a precursor to shared attention, wherein two or more agents simultaneously focus their attention on a single object in the environment. Shared attention is a necessary skill for many complex, natural forms of learning, including learning based on imitation. This paper presents a probabilistic model of gaze imitation and shared attention that is inspired by Meltzoff and Moore's AIM model for imitation in infants. Our model combines a probabilistic algorithm for estimating gaze vectors with bottom-up saliency maps of visual scenes to produce maximum a posteriori (MAP) estimates of objects being looked at by an observed instructor. We test our model using a robotic system involving a pan-tilt camera head and show that combining saliency maps with gaze estimates leads to greater accuracy than using gaze alone. We additionally show that the system can learn instructor-specific probability distributions over objects, leading to increasing gaze accuracy over successive interactions with the instructor. Our results provide further support for probabilistic models of imitation and suggest new ways of implementing robotic systems that can interact with humans over an extended period of time.
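The MAP combination described above reduces to multiplying a gaze likelihood by a saliency prior over candidate objects. A minimal sketch, with invented saliency and likelihood values for a four-object scene:

```python
import numpy as np

# Hypothetical scene with 4 candidate objects.
saliency = np.array([0.1, 0.5, 0.2, 0.2])            # bottom-up prior P(o)
gaze_likelihood = np.array([0.05, 0.35, 0.40, 0.20])  # P(gaze vector | o)

def map_object(prior, likelihood):
    """MAP estimate of the attended object: argmax over P(gaze | o) P(o)."""
    post = likelihood * prior
    return post / post.sum()

posterior = map_object(saliency, gaze_likelihood)
target = int(np.argmax(posterior))
```

With these numbers the noisy gaze estimate alone would pick object 2, but weighting by saliency shifts the MAP estimate to the highly salient object 1, illustrating why combining the two cues can beat gaze alone.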


international conference on robotics and automation | 2007

Towards a Real-Time Bayesian Imitation System for a Humanoid Robot

Aaron P. Shon; Joshua J. Storz; Rajesh P. N. Rao

Imitation learning, or programming by demonstration (PbD), holds the promise of allowing robots to acquire skills from humans with domain-specific knowledge, who nonetheless are inexperienced at programming robots. We have prototyped a real-time, closed-loop system for teaching a humanoid robot to interact with objects in its environment. The system uses nonparametric Bayesian inference to determine an optimal action given a configuration of objects in the world and a desired future configuration. We describe our prototype implementation, show imitation of simple motor acts on a humanoid robot, and discuss extensions to the system.
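One common nonparametric scheme for "action given current and desired configuration" is kernel-weighted voting over stored demonstrations; the sketch below uses that scheme with invented 2-D configurations and action labels (the paper's actual representation is not specified here).

```python
import numpy as np

# Hypothetical demonstrations: (current configuration, desired configuration,
# action). Configurations are 2-D feature vectors; actions are labels.
demos = [
    (np.array([0.0, 0.0]), np.array([1.0, 0.0]), "push_right"),
    (np.array([0.0, 0.0]), np.array([0.0, 1.0]), "push_up"),
    (np.array([1.0, 1.0]), np.array([0.0, 1.0]), "push_left"),
]

def select_action(config, goal, bandwidth=0.5):
    """Nonparametric action scoring: each demonstration votes for its
    action, weighted by a Gaussian kernel on the distance between its
    (config, goal) pair and the query pair."""
    scores = {}
    for c, g, a in demos:
        d2 = np.sum((c - config) ** 2) + np.sum((g - goal) ** 2)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        scores[a] = scores.get(a, 0.0) + w
    return max(scores, key=scores.get)

# A query close to the first demonstration should recover its action.
action = select_action(np.array([0.1, 0.0]), np.array([0.9, 0.1]))
```

Because the model is just the stored demonstrations plus a kernel, adding a new demonstration requires no retraining, which is one reason nonparametric methods suit closed-loop, real-time teaching.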


International Journal of Humanoid Robotics | 2007

A Cognitive Model of Imitative Development in Humans and Machines

Aaron P. Shon; Joshua J. Storz; Andrew N. Meltzoff; Rajesh P. N. Rao

Several algorithms and models have recently been proposed for imitation learning in humans and robots. However, few proposals offer a framework for imitation learning in noisy stochastic environments where the imitator must learn and act under real-time performance constraints. We present a novel probabilistic framework for imitation learning in stochastic environments with unreliable sensors. Bayesian algorithms, based on Meltzoff and Moore's AIM hypothesis for action imitation, implement the core of an imitation learning framework. Our algorithms are computationally efficient, allowing real-time learning and imitation in an active stereo vision robotic head and on a humanoid robot. We present simulated and real-world robotics results demonstrating the viability of our approach. We conclude by advocating a research agenda that promotes interaction between cognitive and robotic studies of imitation.
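Acting under unreliable sensors typically means maintaining a belief over hidden state and updating it with each noisy reading, as in a discrete Bayes filter. A toy sketch (states, noise level, and readings are illustrative, not taken from the paper):

```python
import numpy as np

# Track which of 3 locations an object occupies when the sensor reports
# the true location only 80% of the time.
P_CORRECT = 0.8

def update(belief, observation):
    """One Bayesian measurement update: P(s | z) proportional to P(z | s) P(s).
    Wrong readings are assumed uniform over the other states."""
    likelihood = np.full(len(belief), (1 - P_CORRECT) / (len(belief) - 1))
    likelihood[observation] = P_CORRECT
    post = likelihood * belief
    return post / post.sum()

belief = np.ones(3) / 3          # start with no information
for z in [1, 1, 2, 1]:           # noisy sensor readings, one outlier
    belief = update(belief, z)
```

Each update is a single multiply-and-normalize, which is why this style of inference can run within real-time control loops; the single outlier reading (`2`) lowers confidence but does not flip the estimate.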


international conference on robotics and automation | 2005

Probabilistic Gaze Imitation and Saliency Learning in a Robotic Head

Aaron P. Shon; David B. Grimes; Chris L. Baker; Matthew W. Hoffman; Shengli Zhou; Rajesh P. N. Rao

Imitation is a powerful mechanism for transferring knowledge from an instructor to a naïve observer, one that is deeply contingent on a state of shared attention between these two agents. In this paper we present Bayesian algorithms that implement the core of an imitation learning framework. We use gaze imitation, coupled with task-dependent saliency learning, to build a state of shared attention between the instructor and observer. We demonstrate the performance of our algorithms in a gaze following and saliency learning task implemented on an active vision robotic head. Our results suggest that the ability to follow gaze and learn instructor- and task-specific saliency models could play a crucial role in building systems capable of complex forms of human-robot interaction.
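One simple way to learn an instructor-specific saliency model over time is to keep Dirichlet (add-one) counts of where that instructor has looked; the sketch below is a stand-in with invented data, not the paper's exact estimator.

```python
import numpy as np

# Learn an instructor-specific distribution over 4 objects from observed
# gaze targets, using Dirichlet (add-one smoothed) counts.
counts = np.ones(4)              # uniform Dirichlet prior: every object plausible

def observe_gaze(target):
    """Record one resolved gaze target for this instructor."""
    counts[target] += 1

def object_prior():
    """Posterior predictive distribution over objects for this instructor."""
    return counts / counts.sum()

for t in [2, 2, 1, 2]:           # this instructor repeatedly attends object 2
    observe_gaze(t)
prior = object_prior()
```

Plugged in as the prior of a MAP gaze estimate, this distribution sharpens with every interaction, which matches the reported effect of accuracy increasing over successive interactions with the same instructor.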


Neurocomputing | 2005

Implementing belief propagation in neural circuits

Aaron P. Shon; Rajesh P. N. Rao

There is growing evidence that neural circuits may employ statistical algorithms for inference and learning. Many such algorithms can be derived from independence diagrams (graphical models) showing causal relationships between random variables. A general algorithm for inference in graphical models is belief propagation, where nodes in a graphical model determine values for random variables by combining observed values with messages passed between neighboring nodes. We propose that small groups of synaptic connections between neurons in cortex correspond to causal dependencies in an underlying graphical model. Our results suggest a new probabilistic framework for computation in the neocortex.
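Belief propagation itself is concise enough to show directly. Below is sum-product message passing on a three-node chain A - B - C of binary variables; the potentials are invented for illustration, and each "message" is the kind of quantity the paper proposes small groups of synapses could carry.

```python
import numpy as np

# Unary potentials on the leaf variables (illustrative numbers).
phi_A = np.array([0.6, 0.4])
phi_C = np.array([0.3, 0.7])
# Pairwise potential on both edges: neighbors prefer to agree.
psi = np.array([[0.9, 0.1],
                [0.1, 0.9]])

# Message from a leaf X to its neighbor: m(x_nbr) = sum_x phi(x) psi(x, x_nbr)
m_A_to_B = phi_A @ psi
m_C_to_B = phi_C @ psi

# Belief at B combines incoming messages (B's own unary potential is uniform).
belief_B = m_A_to_B * m_C_to_B
belief_B /= belief_B.sum()
```

On a tree like this chain, one inward-outward sweep of such messages yields exact marginals; here the stronger evidence at C pulls B's belief toward state 1.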


Neurocomputing | 2003

Learning temporal patterns by redistribution of synaptic efficacy

Aaron P. Shon; Rajesh P. N. Rao

Recent experiments have shown that neocortical synapses exhibit both short-term plasticity and spike-timing dependent long-term plasticity. It has been suggested that changes in short-term plasticity are mediated by a redistribution of synaptic efficacy. Here we propose a simple model of the interaction between spike-timing dependent plasticity and short-term plasticity. We show that the model captures the synaptic behavior seen in experiments on redistribution of synaptic efficacy. Results from our simulations suggest that spike-timing dependent redistribution of synaptic efficacy offers neocortical neurons a potentially powerful mechanism for learning spatiotemporal patterns in the input stream.
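The two ingredients can be sketched with textbook-style components: an exponential STDP window, and a depressing synapse whose utilization parameter sets how efficacy is distributed across a spike train. All parameters below are illustrative, not the paper's fitted values.

```python
import math

A_PLUS, A_MINUS = 0.05, 0.055    # STDP amplitudes
TAU = 20.0                       # STDP time constant (ms)

def stdp(dt):
    """Weight change for pre-post spike interval dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)      # pre before post: potentiate
    return -A_MINUS * math.exp(dt / TAU)         # post before pre: depress

def epsp_train(u, tau_rec, dt, n_spikes):
    """Depressing synapse driven by a regular spike train.
    u: utilization (release probability), tau_rec: recovery time (ms).
    Returns the response amplitude to each spike."""
    r, responses = 1.0, []
    for _ in range(n_spikes):
        responses.append(r * u)                  # release a fraction u of resources
        r *= (1 - u)
        r = 1 - (1 - r) * math.exp(-dt / tau_rec)  # partial recovery toward 1
    return responses

# "Redistribution of efficacy": raising u (as pairing-induced plasticity is
# proposed to do) boosts early responses in a train while slightly depressing
# late, steady-state ones, rather than scaling all responses equally.
before = epsp_train(u=0.3, tau_rec=400.0, dt=50.0, n_spikes=5)
after = epsp_train(u=0.5, tau_rec=400.0, dt=50.0, n_spikes=5)
```

Because only the early part of a train is amplified, the synapse becomes selectively sensitive to the onset timing of its input, which is the property that makes this mechanism useful for learning temporal patterns.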


Neurocomputing | 2005

Learning temporal clusters with synaptic facilitation and lateral inhibition

Chris L. Baker; Aaron P. Shon; Rajesh P. N. Rao

Short-term synaptic plasticity has been proposed as a way for cortical neurons to process temporal information. We present a model network that uses short-term plasticity to implement a temporal clustering algorithm. The model's facilitatory synapses learn temporal signals drawn from mixtures of nonlinear processes. Units in the model correspond to populations of cortical pyramidal cells arranged in columns; each column consists of neurons with similar spatiotemporal receptive fields. Clustering is based on mutual inhibition similar to Kohonen's self-organizing maps (SOMs). A generalized expectation maximization (GEM) algorithm, guaranteed to increase model likelihood with each iteration, learns the synaptic parameters.
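The clustering step can be caricatured with a plain EM loop on a 1-D mixture: soft (E-step) responsibilities play the role of graded mutual inhibition between columns, and the M-step moves each column's preferred signal toward the data it wins. Everything below (data, initialization, fixed width) is an invented stand-in for the paper's synaptic parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hidden "processes" generate the signal values.
data = np.concatenate([rng.normal(0.0, 0.3, 50), rng.normal(3.0, 0.3, 50)])
means = np.array([0.5, 2.0])     # initial cluster centers, fixed width below
sigma = 0.5

for _ in range(20):              # EM iterations; each cannot decrease likelihood
    # E-step: responsibilities (normalized competition = mutual inhibition)
    d2 = (data[:, None] - means[None, :]) ** 2
    resp = np.exp(-d2 / (2 * sigma ** 2))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate each mean from its responsibility-weighted data
    means = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)
```

The GEM guarantee cited in the abstract is exactly the property used here: each E/M pair is a coordinate ascent step on a lower bound of the likelihood, so the centers converge toward the two generating processes.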



Collaboration


Dive into Aaron P. Shon's collaborations. Top co-authors:

David B. Grimes (University of Colorado Boulder)
Chris L. Baker (Massachusetts Institute of Technology)
Keith Grochow (University of Washington)
Rechele Brooks (University of Washington)