Heni Ben Amor
Arizona State University
Publications
Featured research published by Heni Ben Amor.
The International Journal of Robotics Research | 2013
Zhikun Wang; Katharina Mülling; Marc Peter Deisenroth; Heni Ben Amor; David Vogt; Bernhard Schölkopf; Jan Peters
Intention inference can be an essential step toward efficient human–robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows the intention to be inferred from observed movements using Bayes’ theorem. It simultaneously finds a latent state representation of noisy and high-dimensional observations and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human–robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
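At its core, the inference step described here is a recursive Bayesian update over candidate intentions. The sketch below illustrates that idea in a deliberately simplified form: each intention gets a stand-in Gaussian observation likelihood (the paper uses GP-based latent dynamics models instead), and the posterior is updated as observations stream in. All names and numbers are illustrative.

```python
import numpy as np

# Minimal sketch of online intention inference via Bayes' theorem.
# Assumption: each candidate intention g has a pre-trained observation
# likelihood model; simple Gaussians stand in for the paper's GP-based
# latent dynamics models.

class IntentionInference:
    def __init__(self, means, covs, prior=None):
        self.means = means        # one (d,) mean vector per intention
        self.covs = covs          # one (d, d) covariance per intention
        n = len(means)
        self.log_post = np.log(np.full(n, 1.0 / n) if prior is None else prior)

    def update(self, obs):
        """Recursive update: p(g | o_1:t) ∝ p(o_t | g) p(g | o_1:t-1)."""
        for g, (mu, cov) in enumerate(zip(self.means, self.covs)):
            diff = obs - mu
            _, logdet = np.linalg.slogdet(cov)
            loglik = -0.5 * (diff @ np.linalg.solve(cov, diff)
                             + logdet + len(mu) * np.log(2 * np.pi))
            self.log_post[g] += loglik
        self.log_post -= np.max(self.log_post)   # numerical stability
        post = np.exp(self.log_post)
        return post / post.sum()

# Hypothetical usage: two target intentions in a 2-D observation space.
infer = IntentionInference(
    means=[np.array([0.0, 0.0]), np.array([1.0, 1.0])],
    covs=[np.eye(2) * 0.1, np.eye(2) * 0.1],
)
for obs in [np.array([0.9, 1.1]), np.array([1.0, 0.95])]:
    print(infer.update(obs))   # posterior concentrates on the second intention
```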
IEEE Robotics & Automation Magazine | 2012
Shuhei Ikemoto; Heni Ben Amor; Takashi Minato; Bernhard Jung; Hiroshi Ishiguro
Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.
IEEE International Conference on Robotics and Automation | 2014
Heni Ben Amor; Gerhard Neumann; Sanket Kamthe; Oliver Kroemer; Jan Peters
To engage in cooperative activities with human partners, robots have to possess basic interactive abilities and skills. However, programming such interactive skills is a challenging task, as each interaction partner can have different timing or an alternative way of executing movements. In this paper, we propose to learn interaction skills by observing how two humans engage in a similar task. To this end, we introduce a new representation called Interaction Primitives. Interaction Primitives build on the framework of dynamic motor primitives (DMPs) by maintaining a distribution over the parameters of the DMP. With this distribution, we can learn the inherent correlations of cooperative activities, which allow us to infer the behavior of the partner and to participate in the cooperation. We provide algorithms for synchronizing and adapting the behavior of humans and robots during joint physical activities.
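The key operation behind Interaction Primitives is conditioning a joint distribution over both partners' DMP parameters on the observed human parameters to obtain the robot's. A minimal sketch of that step, assuming the joint over stacked weight vectors is Gaussian; dimensions and data are illustrative.

```python
import numpy as np

# Sketch of the Interaction Primitive conditioning step: given a Gaussian
# over stacked DMP weights [w_human; w_robot] learned from demonstrations,
# condition on the observed human weights to predict the robot's weights.

def condition_on_human(mu, sigma, w_human, d_h):
    """Return mean/covariance of p(w_robot | w_human) for a joint Gaussian."""
    mu_h, mu_r = mu[:d_h], mu[d_h:]
    S_hh = sigma[:d_h, :d_h]
    S_rh = sigma[d_h:, :d_h]
    S_rr = sigma[d_h:, d_h:]
    gain = S_rh @ np.linalg.solve(S_hh, np.eye(d_h))   # Kalman-style gain
    mu_cond = mu_r + gain @ (w_human - mu_h)
    sigma_cond = S_rr - gain @ S_rh.T
    return mu_cond, sigma_cond

# Hypothetical demonstrations: rows are stacked [human | robot] DMP weights.
demos = np.random.randn(20, 10)            # 20 demos, 5 weights per agent
mu = demos.mean(axis=0)
sigma = np.cov(demos, rowvar=False) + 1e-6 * np.eye(10)

w_human_observed = demos[0, :5]            # fitted to an observed human motion
w_robot_mean, w_robot_cov = condition_on_human(mu, sigma, w_human_observed, d_h=5)
```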
Genetic and Evolutionary Computation Conference | 2005
Heni Ben Amor; Achim Rettinger
Exploration vs. exploitation is a well-known issue in evolutionary algorithms: an unbalanced search can lead to premature convergence. GASOM, a novel genetic algorithm, addresses this problem through intelligent exploration techniques. The approach uses Self-Organizing Maps to mine data from the evolution process, and the information obtained is used to enhance the search strategy and counteract genetic drift. This way, local optima are avoided and exploratory power is maintained. The evaluation of GASOM on well-known problems shows that it effectively prevents premature convergence and seeks the global optimum; on deceptive and misleading functions in particular, it showed outstanding performance. Additionally, representing the search history with the Self-Organizing Map provides an intuitive visual insight into the state and course of evolution.
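The abstract does not spell out GASOM's exact scoring rule, but the underlying idea — use a Self-Organizing Map as a memory of visited search regions and reward individuals that land in rarely visited map units — can be sketched as follows. The novelty heuristic and all constants here are assumptions for illustration, not the published algorithm.

```python
import numpy as np

# Illustrative sketch: a Self-Organizing Map as a search-history memory
# for a genetic algorithm. Individuals mapping to rarely visited SOM
# units receive a novelty bonus, which counteracts genetic drift.

rng = np.random.default_rng(0)
GENOME_DIM, SOM_UNITS = 8, 25

som = rng.random((SOM_UNITS, GENOME_DIM))   # codebook vectors
visits = np.zeros(SOM_UNITS)                # how often each unit was hit

def best_matching_unit(genome):
    return int(np.argmin(np.linalg.norm(som - genome, axis=1)))

def record(genome, lr=0.1):
    """Move the winning unit toward the genome and count the visit."""
    bmu = best_matching_unit(genome)
    som[bmu] += lr * (genome - som[bmu])
    visits[bmu] += 1

def novelty(genome):
    """Higher for genomes that fall in rarely visited regions."""
    return 1.0 / (1.0 + visits[best_matching_unit(genome)])

def fitness(genome):                        # toy objective: sphere function
    return -np.sum((genome - 0.5) ** 2)

# Selection score mixes raw fitness with the exploration bonus.
population = rng.random((30, GENOME_DIM))
for genome in population:
    record(genome)
scores = np.array([fitness(g) + 0.5 * novelty(g) for g in population])
```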
IEEE Virtual Reality Conference | 2007
Guido Heumer; Heni Ben Amor; Matthias Weber; Bernhard Jung
This paper presents a comparison of various classification methods for the problem of recognizing grasp types involved in object manipulations performed with a data glove. Conventional wisdom holds that data gloves need calibration in order to obtain accurate results. However, calibration is a time-consuming process, inherently user-specific, and its results are often imperfect. In contrast, the present study evaluates recognition methods that do not require prior calibration of the data glove, using raw sensor readings as input features and mapping them directly to different categories of hand shapes. In an experiment, participants wearing a data glove grasped physical objects of different shapes corresponding to the various grasp types of the Schlesinger taxonomy. The collected data was analyzed with 28 classifiers, including different types of neural networks, decision trees, Bayes nets, and lazy learners. Each classifier was analyzed in six different settings, representing various application scenarios with differing generalization demands. The results of this work are twofold: (1) we show that reasonably to highly reliable recognition of grasp types can be achieved, depending on whether or not the glove user is among those training the classifier, even with uncalibrated data gloves; (2) we identify the best-performing classification methods for recognition of the various grasp types. To conclude, cumbersome calibration before productive use of data gloves can be spared in many situations.
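The evaluation protocol — raw, uncalibrated sensor readings as features and user-dependent versus user-independent splits — maps directly onto standard tooling. A minimal sketch with scikit-learn, where the data, feature count, and subject IDs are hypothetical placeholders and three classifiers stand in for the paper's 28:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Sketch of the evaluation: raw, uncalibrated glove sensor values as
# features, grasp types from the Schlesinger taxonomy as labels.
# X, y, and subject IDs are hypothetical placeholders.
rng = np.random.default_rng(0)
X = rng.random((600, 22))              # 22 raw bend-sensor readings per sample
y = rng.integers(0, 6, 600)            # 6 Schlesinger grasp types
subjects = rng.integers(0, 10, 600)    # which participant produced the sample

classifiers = {
    "neural net": MLPClassifier(max_iter=500),
    "decision tree": DecisionTreeClassifier(),
    "naive Bayes": GaussianNB(),
}

# User-independent setting: held-out subjects never appear in training,
# mirroring the "glove user not among the trainers" scenario.
cv = GroupKFold(n_splits=5)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, groups=subjects, cv=cv)
    print(f"{name}: {scores.mean():.2f} ± {scores.std():.2f}")
```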
IEEE/RSJ International Conference on Intelligent Robots and Systems | 2012
Heni Ben Amor; Oliver Kroemer; Ulrich Hillenbrand; Gerhard Neumann; Jan Peters
Multi-fingered robot grasping is a challenging problem that is difficult to tackle with hand-coded programs. In this paper, we present an imitation learning approach for learning and generalizing grasping skills from human demonstrations. To this end, we split the task of synthesizing a grasping motion into three parts: (1) learning efficient grasp representations from human demonstrations, (2) warping contact points onto new objects, and (3) optimizing and executing the reach-and-grasp movements. We learn low-dimensional latent grasp spaces for different grasp types, which form the basis for a novel extension to dynamic motor primitives. These latent-space dynamic motor primitives are used to synthesize entire reach-and-grasp movements. We evaluated our method on a real humanoid robot; the experimental results demonstrate the robustness and versatility of our approach.
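One plausible reading of the pipeline: learn a low-dimensional latent space over demonstrated hand postures, roll out a movement primitive per latent dimension, and decode back to joint space. The sketch below uses PCA for the latent space and a bare-bones discrete DMP; both choices and all data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of latent-space movement primitives for grasping: project
# demonstrated hand joint configurations into a low-dimensional latent
# space, run a simple discrete DMP per latent dimension, and decode
# the rollout back to joint space.

def dmp_rollout(y0, goal, weights, T=100, alpha=25.0, beta=6.25, tau=1.0):
    """One-dimensional discrete DMP with a radial-basis forcing term."""
    centers = np.linspace(1.0, 0.01, len(weights))
    widths = 1.0 / np.diff(centers, prepend=1.1) ** 2
    y, dy, x, dt = y0, 0.0, 1.0, tau / T
    traj = []
    for _ in range(T):
        psi = np.exp(-widths * (x - centers) ** 2)
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ddy = alpha * (beta * (goal - y) - dy) + f
        dy += ddy * dt
        y += dy * dt
        x += -2.0 * x * dt          # canonical system: dx = -a_x * x
        traj.append(y)
    return np.array(traj)

# Hypothetical demonstration data: 50 grasp postures, 20 hand joints.
demos = np.random.randn(50, 20)
pca = PCA(n_components=3).fit(demos)          # low-dimensional grasp space
z0, z_goal = pca.transform(demos[:1])[0], pca.transform(demos[1:2])[0]

# Roll out one DMP per latent dimension, then decode to joint space.
latent_traj = np.stack([dmp_rollout(z0[d], z_goal[d], np.zeros(10))
                        for d in range(3)], axis=1)
joint_traj = pca.inverse_transform(latent_traj)   # (T, 20) posture path
```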
IEEE International Conference on Robotics and Automation | 2015
Marco Ewerton; Gerhard Neumann; Rudolf Lioutikov; Heni Ben Amor; Jan Peters; Guilherme Maeda
Robots that interact with humans must learn not only to adapt to different human partners but also to new interactions. Such a form of learning can be achieved through demonstration and imitation. A recently introduced method to learn interactions from demonstrations is the framework of Interaction Primitives. While this framework can represent and generalize only a single interaction pattern, in practice interactions between a human and a robot can consist of many different patterns. To overcome this limitation, this paper proposes a Mixture of Interaction Primitives that learns multiple interaction patterns from unlabeled demonstrations. Specifically, the proposed method uses Gaussian Mixture Models of Interaction Primitives to model nonlinear correlations between the movements of the different agents. We validate our algorithm with two experiments involving interactive tasks between a human and a lightweight robotic arm. In the first, we compare our proposed method with conventional Interaction Primitives in a toy-problem scenario where the robot and the human are not linearly correlated. In the second, we present a proof-of-concept experiment where the robot assists a human in assembling a box.
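The mixture extension replaces a single Gaussian over stacked human-robot primitive weights with a Gaussian Mixture Model, so each component captures one interaction pattern; conditioning then happens in the most responsible component. A sketch under those assumptions, with invented data and dimensions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of a Mixture of Interaction Primitives: fit a GMM over stacked
# [human | robot] primitive weights from unlabeled demonstrations, then,
# given observed human weights, pick the responsible component and
# condition it to obtain the robot's weights. Names are illustrative.

D_H = 5                                   # human weight dimension
demos = np.random.randn(60, 10)           # unlabeled demos, two hidden patterns
demos[30:] += 4.0                         # second interaction pattern, shifted

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(demos)

def infer_robot_weights(w_human):
    # Responsibility of each component given only the human block.
    resp = []
    for k in range(gmm.n_components):
        mu_h = gmm.means_[k, :D_H]
        S_hh = gmm.covariances_[k][:D_H, :D_H]
        diff = w_human - mu_h
        _, logdet = np.linalg.slogdet(S_hh)
        loglik = -0.5 * (diff @ np.linalg.solve(S_hh, diff) + logdet)
        resp.append(np.log(gmm.weights_[k]) + loglik)
    k = int(np.argmax(resp))              # most responsible pattern
    mu, S = gmm.means_[k], gmm.covariances_[k]
    gain = S[D_H:, :D_H] @ np.linalg.inv(S[:D_H, :D_H])
    return mu[D_H:] + gain @ (w_human - mu[:D_H])

w_robot = infer_robot_weights(demos[0, :D_H])
```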
IEEE-RAS International Conference on Humanoid Robots | 2014
Guilherme Maeda; Marco Ewerton; Rudolf Lioutikov; Heni Ben Amor; Jan Peters; Gerhard Neumann
This paper proposes a probabilistic framework based on movement primitives for robots that work in collaboration with a human coworker. Since the human coworker can execute a variety of unforeseen tasks, a requirement of our system is that the robot assistant must be able to adapt and learn new skills on demand, without the need for an expert programmer. Thus, this paper leverages the framework of imitation learning and its application to human-robot interaction using the concept of Interaction Primitives (IPs). We introduce the use of Probabilistic Movement Primitives (ProMPs) to devise an interaction method that both recognizes the action of a human and generates the appropriate movement primitive of the robot assistant. We evaluate our method in experiments using a lightweight arm interacting with a human partner, as well as on motion-capture trajectories of two humans assembling a box. The advantages of ProMPs over the original Interaction Primitive formulation are presented and compared.
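In this setting, recognition amounts to scoring a partially observed human trajectory against each learned interaction primitive and picking the most likely one before generating the robot's response. A compressed sketch of that recognition step; the basis functions, task names, and weight distributions are illustrative stand-ins for trained models:

```python
import numpy as np

# Sketch of ProMP-based action recognition: a partially observed human
# trajectory is scored under each task's ProMP, and the robot executes
# the primitive paired with the most likely task.

def rbf_features(ts, n_basis=8):
    """Normalized radial basis features over phase values ts in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-((ts[:, None] - centers) ** 2) / (2 * 0.05))
    return phi / phi.sum(axis=1, keepdims=True)

def log_likelihood(y_obs, ts, mu_w, sigma_w, noise=1e-2):
    """Marginal log-likelihood of observations under y = Phi w + eps."""
    phi = rbf_features(ts)
    mean = phi @ mu_w
    cov = phi @ sigma_w @ phi.T + noise * np.eye(len(ts))
    diff = y_obs - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

# Two hypothetical tasks, each a learned weight distribution (mu, Sigma).
tasks = {
    "hand over plate": (np.linspace(0, 1, 8), 0.05 * np.eye(8)),
    "hold screw":      (np.linspace(1, 0, 8), 0.05 * np.eye(8)),
}

ts = np.linspace(0, 0.4, 15)        # first 40% of the motion observed
y_obs = np.linspace(0, 0.35, 15)    # rising trajectory -> first task
best = max(tasks, key=lambda k: log_likelihood(y_obs, ts, *tasks[k]))
print(best)  # the robot would then condition and execute this task's primitive
```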
Robotics: Science and Systems | 2012
Zhikun Wang; Marc Peter Deisenroth; Heni Ben Amor; David Vogt; Bernhard Schölkopf; Jan Peters
Inference of human intention may be an essential step towards understanding human actions and is hence important for realizing efficient human-robot interaction. In this paper, we propose the Intention-Driven Dynamics Model (IDDM), a latent variable model for inferring unknown human intentions. We train the model on observed human movements and actions, and we introduce an efficient approximate inference algorithm to infer the human’s intention from an ongoing movement. We verify the feasibility of the IDDM in two scenarios, i.e., target inference in robot table tennis and action recognition for interactive humanoid robots. In both tasks, the IDDM achieves substantial improvements over state-of-the-art regression and classification methods.
IEEE/RSJ International Conference on Intelligent Robots and Systems | 2012
Herke van Hoof; Oliver Kroemer; Heni Ben Amor; Jan Peters
Creating robots that can act autonomously in dynamic, unstructured environments is a major challenge. In such environments, learning to recognize and manipulate novel objects is an important capability. A truly autonomous robot acquires knowledge through interaction with its environment without using heuristics or prior information encoding human domain insights. Static images often provide insufficient information for inferring the relevant properties of the objects in a scene. Hence, a robot needs to explore these objects by interacting with them. However, there may be many exploratory actions possible, and a large portion of these actions may be non-informative. To learn quickly and efficiently, a robot must select actions that are expected to have the most informative outcomes. In the proposed bottom-up approach, the robot achieves this goal by quantifying the expected informativeness of its own actions. We use this approach to segment a scene into its constituent objects as a first step in learning the properties and affordances of objects. Evaluations showed that the proposed information-theoretic approach allows a robot to efficiently infer the composite structure of its environment.
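Selecting the most informative exploratory action can be framed as maximizing the expected entropy reduction over the current scene hypotheses. A toy sketch of that selection rule, with an invented hypothesis space and outcome model:

```python
import numpy as np

# Toy sketch of information-theoretic action selection: the robot keeps
# a belief over segmentation hypotheses and picks the exploratory action
# (e.g., a push) whose expected outcome reduces entropy the most.
# Hypotheses and outcome model are invented for illustration.

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Belief over 3 hypotheses, e.g., "one object" vs. two ways of being two.
belief = np.array([0.5, 0.3, 0.2])

# p(outcome | hypothesis, action): rows = hypotheses, cols = outcomes
# ("parts move together" vs. "parts move apart") for each candidate push.
actions = {
    "push left edge":  np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]),
    "push center":     np.array([[0.6, 0.4], [0.5, 0.5], [0.4, 0.6]]),
}

def expected_info_gain(belief, outcome_model):
    h_prior = entropy(belief)
    gain = 0.0
    for o in range(outcome_model.shape[1]):
        p_o = belief @ outcome_model[:, o]        # marginal outcome prob.
        posterior = belief * outcome_model[:, o] / p_o
        gain += p_o * (h_prior - entropy(posterior))
    return gain

best = max(actions, key=lambda a: expected_info_gain(belief, actions[a]))
print(best)   # the push expected to be most informative about the scene
```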