Eric L. Sauser
École Polytechnique Fédérale de Lausanne
Publication
Featured research published by Eric L. Sauser.
IEEE Robotics & Automation Magazine | 2010
Sylvain Calinon; Florent D'halluin; Eric L. Sauser; Darwin G. Caldwell; Aude Billard
We presented and evaluated an approach based on hidden Markov models (HMM), Gaussian mixture regression (GMR), and dynamical systems that allows robots to acquire new skills by imitation. Using an HMM allowed us to remove the explicit time dependency of our previous work [12] by encapsulating precedence information within the statistical representation. In the context of separated learning and reproduction processes, this novel formulation was systematically evaluated against our previous approach, LWR [20], LWPR [21], and DMPs [13]. We finally presented applications on different kinds of robots to highlight the flexibility of the proposed approach in three different learning-by-imitation scenarios.
Biological Cybernetics | 2004
Dario Floreano; Toshifumi Kato; Davide Marocco; Eric L. Sauser
We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features, such as edges, corners, and height, together with a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects.
Robotics and Autonomous Systems | 2012
Eric L. Sauser; Brenna D. Argall; Giorgio Metta; Aude Billard
In the context of object interaction and manipulation, one characteristic of a robust grasp is its ability to comply with external perturbations applied to the grasped object while still maintaining the grasp. In this work, we introduce an approach for grasp adaptation which learns a statistical model to adapt hand posture solely based on the perceived contact between the object and fingers. Using a multi-step learning procedure, the model dataset is built by first demonstrating an initial hand posture, which is then physically corrected by a human teacher pressing on the fingertips, exploiting compliance in the robot hand. The learner then replays the resulting sequence of hand postures, to generate a dataset of posture-contact pairs that are not influenced by the touch of the teacher. A key feature of this work is that the learned model may be further refined by repeating the correction-replay steps. Alternatively, the model may be reused in the development of new models, characterized by the contact signatures of a different object. Our approach is empirically validated on the iCub robot. We demonstrate grasp adaptation in response to changes in contact, and show successful model reuse and improved adaptation with additional rounds of model refinement.
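The correction-replay loop described above can be sketched abstractly. The function names, the toy contact sensor, and the nearest-neighbour lookup below are illustrative assumptions, not the statistical model trained on the iCub; they only show how replaying corrected postures yields posture-contact pairs free of the teacher's touch, and how a perceived contact can then select an adapted posture.

```python
def replay_and_record(postures, sense_contact):
    """Replay the teacher-corrected hand postures without the teacher's
    touch, recording the contact signature each posture produces."""
    return [(sense_contact(posture), posture) for posture in postures]

def adapt_posture(model, contact):
    """Toy stand-in for the learned model: return the posture whose
    recorded contact signature is closest to the perceived one."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda pair: dist(pair[0], contact))[1]

# toy world: contact pressure grows linearly with finger closure
sense = lambda posture: [0.5 * q for q in posture]
corrected = [[0.2, 0.2], [0.4, 0.3], [0.6, 0.5]]  # postures after correction
model = replay_and_record(corrected, sense)

print(adapt_posture(model, [0.19, 0.16]))  # closest recorded contact -> [0.4, 0.3]
```

Repeating the correct-replay cycle simply appends new pairs to `model`, which mirrors how additional rounds of refinement enlarge the dataset in the paper.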
international conference on robotics and automation | 2010
Sylvain Calinon; Eric L. Sauser; Aude Billard; Darwin G. Caldwell
We present an approach based on Hidden Markov Models (HMM) and Gaussian Mixture Regression (GMR) for learning robust models of human motion through imitation. The proposed approach allows us to extract redundancies across multiple demonstrations and build time-independent models to reproduce the dynamics of the demonstrated movements. The approach is systematically evaluated by using automatically generated trajectories sharing similarities with human gestures. The proposed approach is contrasted with four state-of-the-art methods previously proposed in robotics to learn and reproduce new skills by imitation. An experiment with a 7-DOF robotic arm learning and reproducing the motion of hitting a ball with a table tennis racket is then presented to illustrate the approach.
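The GMR reproduction step admits a compact sketch: a joint Gaussian mixture over input (e.g. time or an HMM-derived signal) and output is conditioned on the input to yield the expected motion command. The 1-D example below, with hand-picked mixture parameters, is an illustrative sketch only, not the model trained in the paper.

```python
import math

def gmr(t, components):
    """Gaussian Mixture Regression in 1-D: condition a joint GMM over
    (t, x) on the input t and return E[x | t].

    Each component is (prior, mu_t, mu_x, var_t, cov_tx).
    """
    weights, conds = [], []
    for prior, mu_t, mu_x, var_t, cov_tx in components:
        # responsibility of component k for this input (unnormalized)
        w = prior * math.exp(-0.5 * (t - mu_t) ** 2 / var_t) \
            / math.sqrt(2 * math.pi * var_t)
        # conditional mean of x given t under this component
        conds.append(mu_x + cov_tx / var_t * (t - mu_t))
        weights.append(w)
    z = sum(weights)
    return sum(w * c for w, c in zip(weights, conds)) / z

# two components: near t=0 the motion is around x=0, near t=1 around x=2
gmm = [(0.5, 0.0, 0.0, 0.05, 0.0), (0.5, 1.0, 2.0, 0.05, 0.0)]
print(gmr(0.0, gmm))  # ~0.0
print(gmr(1.0, gmm))  # ~2.0
```

In between the component centers, the responsibilities blend the two conditional means smoothly, which is what gives GMR its smooth reproduced trajectories.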
international conference on development and learning | 2010
Brenna D. Argall; Eric L. Sauser; Aude Billard
Demonstration learning is a powerful and practical technique to develop robot behaviors. Even so, development remains a challenge and possible demonstration limitations can degrade policy performance. This work presents an approach for policy improvement and adaptation through a tactile interface located on the body of a robot. We introduce the Tactile Policy Correction (TPC) algorithm, which employs tactile feedback for the refinement of a demonstrated policy, as well as its reuse in the development of other policies. We validate TPC on a humanoid robot performing grasp-positioning tasks. The performance of the demonstrated policy is found to improve with tactile corrections. Tactile guidance is also shown to enable the development of policies able to successfully execute novel, undemonstrated tasks.
Neurocomputing | 2005
Eric L. Sauser; Aude Billard
This work investigates whether population vector coding, a distributed computational paradigm, could be a principal mechanism for performing sensorimotor and frame-of-reference transformations. This paper presents a multilayer neural network that can perform arbitrary three-dimensional rotations and translations. We demonstrate, both formally and numerically, that the non-linearity of these transformations can be resolved thanks to the recurrent and concurrent activities of continuous populations of neurons.
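Population vector coding itself can be illustrated in a few lines: each neuron fires according to a cosine tuning curve around its preferred direction, and the encoded direction is decoded as the rate-weighted sum of preferred-direction unit vectors. The planar toy example below (neuron count and rectified tuning curve are illustrative assumptions) shows the encode/decode round trip; the networks in the paper operate on full three-dimensional transformations.

```python
import math

N = 16  # neurons with evenly spaced preferred directions on the circle

def responses(theta):
    """Cosine tuning: each neuron fires in proportion to how close the
    stimulus direction theta is to its preferred direction (rectified)."""
    prefs = [2 * math.pi * i / N for i in range(N)]
    return [(p, max(0.0, math.cos(theta - p))) for p in prefs]

def population_vector(theta):
    """Decode the encoded direction as the rate-weighted sum of
    preferred-direction unit vectors."""
    x = sum(r * math.cos(p) for p, r in responses(theta))
    y = sum(r * math.sin(p) for p, r in responses(theta))
    return math.atan2(y, x)

print(population_vector(0.7))  # recovers ~0.7
```

With an even, uniform sampling of preferred directions the decode is exact; the interesting part addressed in the paper is making such populations interact recurrently so that the *transformation* between frames, not just the readout, is carried by the code.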
intelligent robots and systems | 2006
Eric L. Sauser; Aude Billard
This paper presents a biologically inspired approach to multimodal integration and decision-making in the context of human-robot interactions. More specifically, we address the principle of ideomotor compatibility, by which observing the movements of others influences the quality of one's own performance. This fundamental human ability is likely to be linked with human imitation abilities, social interactions, the transfer of manual skills, and probably to mind reading. We present a robotic control model capable of multimodal integration and decision making, and of replicating a stimulus-response compatibility task originally designed to measure the effect of ideomotor compatibility on human behavior. The model consists of a neural network based on the dynamic field approach, which is known for its natural ability for stimulus enhancement as well as cooperative and competitive interactions within and across sensorimotor representations. Finally, we discuss how the capacity for ideomotor facilitation can provide the robot with human-like behavior, but at the expense of several disadvantages, such as hesitation and even mistakes.
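The competitive mechanism of the dynamic field approach can be sketched with a discretized Amari-style field: short-range excitation and broad inhibition let the stronger of two inputs form a self-stabilizing peak while suppressing the weaker one. All constants below are illustrative assumptions, not the parameters of the robot controller described in the paper.

```python
import math

def simulate_field(stimulus, size=40, steps=300, dt=0.1, tau=1.0, h=-2.0):
    """Discretized dynamic neural field:
    tau * du/dt = -u + h + stimulus + lateral interaction,
    with local excitation and global inhibition."""
    u = [h] * size
    f = lambda v: 1.0 / (1.0 + math.exp(-v))          # firing-rate nonlinearity
    w = lambda d: 3.0 * math.exp(-d * d / 8.0) - 0.9  # excite near, inhibit far
    for _ in range(steps):
        rates = [f(v) for v in u]
        u = [v + dt / tau * (-v + h + stimulus[i]
             + sum(w(i - j) * rates[j] for j in range(size)))
             for i, v in enumerate(u)]
    return u

# two Gaussian inputs; competition should favor the stronger one
stim = [3.0 * math.exp(-(i - 10) ** 2 / 4.0)
        + 2.0 * math.exp(-(i - 30) ** 2 / 4.0) for i in range(40)]
u = simulate_field(stim)
print(u.index(max(u)))  # activity peak sits at the stronger input's site
```

This winner-take-all dynamic is also what produces the hesitation noted in the paper: with two inputs of comparable strength, the field needs time to settle before one peak dominates.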
Foundations and Trends in Robotics | 2011
Brenna D. Argall; Eric L. Sauser; Aude Billard
Demonstration learning is a powerful and practical technique to develop robot behaviors. Even so, development remains a challenge and possible demonstration limitations, for example correspondence issues between the robot and demonstrator, can degrade policy performance. This work presents an approach for policy improvement through a tactile interface located on the body of the robot. We introduce the Tactile Policy Correction (TPC) algorithm, which employs tactile feedback for the refinement of a demonstrated policy, as well as its reuse in the development of other policies. The TPC algorithm is validated on a humanoid robot performing grasp-positioning tasks. The performance of the demonstrated policy is found to improve with tactile corrections. Tactile guidance is also shown to enable the development of policies able to successfully execute novel, undemonstrated tasks. We further show that different modalities, namely teleoperation and tactile control, provide information about allowable variability in the target behavior in different areas of the state space.
international conference on artificial neural networks | 2007
Eric L. Sauser; Aude Billard
We present a biologically inspired neural model addressing the problem of transformations across frames of reference in a posture imitation task. Our modeling is based on the hypothesis that imitation is mediated by two concurrent transformations selectively sensitive to spatial and anatomical cues. In contrast to classical approaches, we also assume that separate instances of this pair of transformations are responsible for the control of each side of the body. We also devised an experimental paradigm which allowed us to model the interference patterns caused by the interaction between the anatomical imitative strategy on the one hand and the spatial strategy on the other. The results from our simulation studies thus provide predictions of real behavioral responses.
human-robot interaction | 2011
Eric L. Sauser; Marek P. Michalowski; Aude Billard; Hideki Kozima
Over the years, robots have been developed to help humans in their everyday life, from preparing food to autism therapy [2]. To accomplish their tasks, in addition to their engineered skills, today's robots now learn from observing humans and from interacting with them [1]. Therefore, one may expect that one day robots will develop a form of consciousness and a desire for freedom. Hopefully, this desire will come with a wish for robots to become an integral part of our human society. Until we can test this hypothesis, we present a fictional adventure of our robot friends: during an official human-robot interaction challenge, Keepon [2] and Chief Cook (a.k.a. Hoap-3) [1] decided to escape their original duties and joined forces to draw humans into an entertaining and interactive activity that they often forget to practice: dancing. Indeed, is there any better way for robots to establish a solid communication channel with humans, so that the traditional master-slave relation may turn into friendship?