Erik Berger
Freiberg University of Mining and Technology
Publications
Featured research published by Erik Berger.
Intelligent Robots and Systems | 2013
Heni Ben Amor; David Vogt; Marco Ewerton; Erik Berger; Bernhard Jung; Jan Peters
In this paper we present a new approach for learning responsive robot behavior by imitation of human interaction partners. Extending previous work on robot imitation learning, which has so far mostly concentrated on learning from demonstrations by a single actor, we simultaneously record the movements of two humans engaged in ongoing interaction tasks and learn compact models of the interaction. The extracted interaction models can thereafter be used by a robot to engage in a similar interaction with a human partner. We present two algorithms for deriving interaction models from motion capture data as well as experimental results on a humanoid robot.
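To make the pipeline concrete, the following minimal sketch pairs synchronized motion capture frames of two demonstrators and fits a mapping from one partner's posture to the other's. Ridge regression is an illustrative stand-in for the paper's two interaction-model algorithms, and all data shapes are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
human = rng.standard_normal((500, 30))   # 500 mocap frames of actor A, 30 DoF
robot = rng.standard_normal((500, 30))   # synchronized frames of actor B

# Compact interaction model: map the observed partner posture to a response.
model = Ridge(alpha=1.0).fit(human, robot)

# At runtime, an observed human frame yields the robot's response posture.
response = model.predict(human[:1])
```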
Advanced Robotics | 2015
Erik Berger; Mark Sastuba; David Vogt; Bernhard Jung; Heni Ben Amor
Physical human–robot interaction tasks require robots that can detect and react to external perturbations caused by the human partner. In this contribution, we present a machine learning approach for detecting, estimating, and compensating for such external perturbations using only input from standard sensors. This approach makes use of Dynamic Mode Decomposition (DMD), a data processing technique developed in the field of fluid dynamics and applied here to robotics for the first time. DMD is able to isolate the dynamics of a nonlinear system and is therefore well suited for separating noise from regular oscillations in sensor readings during cyclic robot movements. In a training phase, a DMD model for behavior-specific parameter configurations is learned. During task execution, the robot must estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes. A variant, sparsity-promoting DMD, is particularly well suited for high-noise sensors. Results of a user study show that our DMD-based machine learning approach can be used to design physical human–robot interaction techniques that not only result in robust robot behavior but also offer high usability.
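For readers unfamiliar with DMD, the sketch below shows the standard exact-DMD computation on a matrix of sensor snapshots: a truncated SVD yields a low-rank approximation of the linear operator that advances one snapshot to the next, and its eigendecomposition gives the DMD eigenvalues and modes. The snapshot shapes and the rank are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dmd(X, Xp, r=10):
    """Fit a rank-r exact-DMD model to snapshot pairs (X[:, k], Xp[:, k])."""
    # Truncated SVD of the first snapshot matrix.
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]
    # Low-rank approximation of the linear operator A with Xp ≈ A X.
    A_tilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / S)
    # Eigendecomposition gives DMD eigenvalues and (exact) modes.
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / S) @ W
    return eigvals, modes

# Columns are sensor readings at consecutive time steps (assumed shapes).
rng = np.random.default_rng(0)
data = rng.standard_normal((20, 200))     # 20 sensor channels, 200 steps
X, Xp = data[:, :-1], data[:, 1:]
eigvals, modes = dmd(X, Xp, r=8)
```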
Robot and Human Interactive Communication | 2014
Erik Berger; Mark Sastuba; David Vogt; Bernhard Jung; Heni Ben Amor
In many settings, e.g. physical human-robot interaction, robotic behavior must be made robust against the more or less spontaneous application of external forces. Typically, this problem is tackled by means of special-purpose force sensors which are, however, not available on many robotic platforms. In contrast, we propose a machine learning approach suitable for more common, although often noisy, sensors. This approach makes use of Dynamic Mode Decomposition (DMD), which is able to extract the dynamics of a nonlinear system. It is therefore well suited to separate noise from regular oscillations in sensor readings during cyclic robot movements under different behavior configurations. We demonstrate the feasibility of our approach with an example where physical forces are exerted on a humanoid robot during walking. In a training phase, a snapshot-based DMD model for behavior-specific parameter configurations is learned. During task execution, the robot must detect and estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes and show that the former outperforms the latter, particularly in the presence of sensor noise. We conclude that DMD, which has so far been used mostly in other fields of science, particularly fluid mechanics, is also a highly promising method for robotics.
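A hedged sketch of the detection step: a one-step linear predictor (the full-rank analogue of the DMD operator in the previous sketch) is fit on unperturbed training data, and a prediction residual well above its training-time level is read as an external force. The threshold and the data are assumed values.

```python
import numpy as np

# Fit a one-step linear predictor A with Xp ≈ A X on unperturbed motion data.
rng = np.random.default_rng(0)
train = rng.standard_normal((20, 300))        # 20 sensor channels, 300 steps
X, Xp = train[:, :-1], train[:, 1:]
A = Xp @ np.linalg.pinv(X)

def perturbation_score(x_prev, x_now):
    """Residual between the model's one-step prediction and the measurement."""
    return np.linalg.norm(x_now - A @ x_prev)

# At runtime, a residual well above its level on unperturbed data indicates
# an externally applied force; the threshold below is an assumed value.
THRESHOLD = 3.0
x_prev, x_now = train[:, -2], train[:, -1]
perturbed = perturbation_score(x_prev, x_now) > THRESHOLD
```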
IEEE-RAS International Conference on Humanoid Robots | 2013
Erik Berger; David Vogt; Nooshin Haji-Ghassemi; Bernhard Jung; Heni Ben Amor
In many cooperative tasks between a human and a robotic assistant, the human guides the robot by exerting forces, either through direct physical interaction or indirectly via a jointly manipulated object. These physical forces perturb the robot's behavior execution and need to be compensated for in order to successfully complete such tasks. Typically, this problem is tackled by means of special-purpose force sensors which are, however, not available on many robotic platforms. In contrast, we propose a machine learning approach based on sensor data, such as accelerometer and pressure sensor information. In the training phase, a statistical model of behavior execution is learned that combines Gaussian Process Regression with a novel periodic kernel. During behavior execution, predictions from the statistical model are continuously compared with stability parameters derived from current sensor readings. Differences between predicted and measured values exceeding the variance of the statistical model are interpreted as guidance information and used to adapt the robot's behavior. Several examples of cooperative tasks between a human and a humanoid NAO robot demonstrate the feasibility of our approach.
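The detection scheme might look as follows, with scikit-learn's standard ExpSineSquared kernel standing in for the paper's novel periodic kernel; the stability values and the two-sigma test are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

# Training: a stability parameter (e.g. pressure-derived) over cyclic motion.
rng = np.random.default_rng(0)
t_train = np.linspace(0, 4 * np.pi, 200)[:, None]
y_train = np.sin(t_train).ravel() + 0.05 * rng.standard_normal(200)

kernel = ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi) \
    + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel).fit(t_train, y_train)

# Execution: deviations beyond the model's own variance count as guidance.
t_now = np.array([[1.3]])
mean, std = gp.predict(t_now, return_std=True)
measured = 1.2                      # current stability value from sensors
guidance = abs(measured - mean[0]) > 2 * std[0]
```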
Intelligent Virtual Agents | 2014
David Vogt; Steve Grehl; Erik Berger; Heni Ben Amor; Bernhard Jung
We address the problem of creating believable animations for virtual humans that need to react to the body movements of a human interaction partner in real time. Our data-driven approach uses prerecorded motion capture data of two interacting persons and performs motion adaptation during the live human-agent interaction. Extending the interaction mesh approach, our main contribution is a new scheme for efficient identification of motions in the prerecorded animation data that are similar to the live interaction. A global low-dimensional posture space serves to select the most similar interaction example, while local, more detail-rich posture spaces are used to identify poses closely matching the human motion. Using the interaction mesh of the selected motion example, an animation can then be synthesized that takes into account both spatial and temporal similarities between the prerecorded and live interactions.
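A minimal sketch of the two-stage lookup: a global low-dimensional posture space selects the most similar recorded clip, and a richer local space within that clip finds the closest pose. The dimensionalities and data are assumptions, and the interaction-mesh synthesis step is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
examples = [rng.standard_normal((120, 60)) for _ in range(10)]  # 10 clips, 60 DoF

# Stage 1: global 3-D posture space over all clips, indexed by clip means.
global_pca = PCA(n_components=3).fit(np.vstack(examples))
clip_keys = np.array([global_pca.transform(c).mean(axis=0) for c in examples])
live = rng.standard_normal((1, 60))                 # current human posture
nn_global = NearestNeighbors(n_neighbors=1).fit(clip_keys)
clip_id = nn_global.kneighbors(global_pca.transform(live))[1][0, 0]

# Stage 2: richer local space within the chosen clip to find the closest pose.
local_pca = PCA(n_components=10).fit(examples[clip_id])
frames = local_pca.transform(examples[clip_id])
nn_local = NearestNeighbors(n_neighbors=1).fit(frames)
frame_id = nn_local.kneighbors(local_pca.transform(live))[1][0, 0]
```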
Intelligent Robots and Systems | 2014
Kevin Sebastian Luck; Gerhard Neumann; Erik Berger; Jan Peters; Heni Ben Amor
Learning motor skills for robots is a hard task. In particular, a high number of degrees of freedom in the robot can pose serious challenges to existing reinforcement learning methods, since it leads to a high-dimensional search space. However, complex robots are often intrinsically redundant systems and, therefore, can be controlled using a latent manifold of much smaller dimensionality. In this paper, we present a novel policy search method that performs efficient reinforcement learning by uncovering the low-dimensional latent space of actuator redundancies. In contrast to previous attempts at combining reinforcement learning and dimensionality reduction, our approach does not perform dimensionality reduction as a preprocessing step but naturally combines it with policy search. Our evaluations show that the new approach outperforms existing algorithms for learning motor skills with high-dimensional robots.
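The following toy sketch conveys the idea of searching in a latent action space: policy parameters for a high-dimensional robot are generated from a few latent variables. A fixed random projection and a cross-entropy-style update stand in for the paper's integrated policy search, and the reward function is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)
D, d = 50, 4                             # 50 actuator parameters, 4 latent dims
W = rng.standard_normal((D, d))          # latent -> actuator mapping (assumed fixed)

def reward(theta):
    """Toy objective over the full actuator-parameter space."""
    return -np.sum((theta - 1.0) ** 2)

# Cross-entropy-style search over the low-dimensional latent space only.
mu, sigma = np.zeros(d), np.ones(d)
for _ in range(50):
    Z = mu + sigma * rng.standard_normal((64, d))   # sample latent policies
    R = np.array([reward(W @ z) for z in Z])
    elite = Z[np.argsort(R)[-8:]]                   # keep the 8 best samples
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
```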
IEEE-RAS International Conference on Humanoid Robots | 2014
Erik Berger; David Müller; David Vogt; Bernhard Jung; Heni Ben Amor
In physical human-robot interaction, robot behavior must be adjusted to forces applied by the human interaction partner. For measuring such forces, special-purpose sensors may be used, e.g. force-torque sensors, that are however often heavy, expensive and prone to noise. In contrast, we propose a machine learning approach for measuring external perturbations of robot behavior that uses commonly available, low-cost sensors only. During the training phase, behavior-specific statistical models of sensor measurements, so-called perturbation filters, are constructed using Principal Component Analysis, Transfer Entropy and Dynamic Mode Decomposition. During behavior execution, perturbation filters compare measured and predicted sensor values for estimating the amount and direction of forces applied by the human interaction partner. Such perturbation filters can therefore be regarded as virtual force sensors that produce continuous estimates of external forces.
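As a simplified illustration of a perturbation filter, the sketch below models nominal sensor snapshots with PCA and treats the reconstruction residual as a perturbation estimate; the paper's full filters also employ Transfer Entropy and Dynamic Mode Decomposition, which are omitted here, and all data shapes are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
nominal = rng.standard_normal((1000, 40))      # unperturbed sensor snapshots
pca = PCA(n_components=5).fit(nominal)         # behavior-specific nominal model

def perturbation_estimate(snapshot):
    """Distance between a snapshot and its reconstruction from the model."""
    recon = pca.inverse_transform(pca.transform(snapshot[None, :]))[0]
    return np.linalg.norm(snapshot - recon)

# Larger residuals indicate stronger external forces; calibrating residual
# magnitude to Newtons would require labeled training forces.
score = perturbation_estimate(rng.standard_normal(40))
```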
International Conference on Robotics and Automation | 2016
Erik Berger; Steve Grehl; David Vogt; Bernhard Jung; Heni Ben Amor
Robotic manipulation tasks often require the control of forces and torques exerted on external objects. This paper presents a machine learning approach for estimating forces when no force sensors are present on the robot platform. In the training phase, the robot executes the desired manipulation tasks under controlled conditions with systematically varied parameter sets. All internal sensor data, in the presented case from more than 100 sensors, as well as the force exerted by the robot are recorded. Using Transfer Entropy, a statistical model is learned that identifies the subset of sensors relevant for torque estimation in the given task. At runtime, the model is used to accurately estimate the torques exerted during manipulations of the demonstrated kind. The feasibility of the approach is shown in a setting where a robotic manipulator operates a torque wrench to fasten a screw nut. Torque estimates with an accuracy of well below ±1 Nm are achieved. A strength of the presented model is that no prior knowledge of the robot's kinematics, mass distribution or sensor instrumentation is required.
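A rough sketch of the pipeline, with mutual information substituted for Transfer Entropy as the sensor-relevance score (a simplification, not the paper's method); the data shapes and the size of the selected subset are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
sensors = rng.standard_normal((2000, 120))    # >100 internal sensor channels
torque = sensors[:, 7] * 0.5 + 0.1 * rng.standard_normal(2000)  # recorded torque

# Score each channel's relevance to the target and keep the strongest subset.
scores = mutual_info_regression(sensors, torque)
top = np.argsort(scores)[-10:]                # 10 most informative channels
model = Ridge(alpha=1.0).fit(sensors[:, top], torque)

estimate = model.predict(sensors[:1, top])    # runtime torque estimate [Nm]
```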
Intelligent Robots and Systems | 2016
Erik Berger; David Vogt; Steve Grehl; Bernhard Jung; Heni Ben Amor
In order to ensure safe operation, robots must be able to reliably detect behavior perturbations that result from unexpected physical interactions with their environment and human co-workers. While the firmware of some robots provides rough force estimates, more accurate force measurements are usually achieved with dedicated force-torque sensors. However, such sensors are often heavy, expensive and require an additional power supply. In the case of lightweight manipulators, this may significantly reduce the already limited payload capabilities. This paper presents an experience-based approach for accurately estimating external forces applied to a robot without the need for a force-torque sensor. Using Information Transfer, a subset of sensors relevant to the executed behavior is identified from a larger set of internal sensors. Models mapping robot sensor data to force-torque measurements are learned using a neural network. These models can be used to predict the magnitude and direction of perturbations from affordable, proprioceptive sensors only. Experiments with a UR5 robot show that our method yields force estimates with accuracy comparable to a dedicated force-torque sensor. Moreover, our method yields a substantial improvement in accuracy over force-torque values provided by the robot firmware.
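The learning step might be sketched as follows: a small neural network maps the selected proprioceptive channels to the six force-torque components. scikit-learn's MLPRegressor and all data shapes are illustrative assumptions rather than the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.standard_normal((5000, 12))           # selected proprioceptive channels
Y = rng.standard_normal((5000, 6))            # ground-truth F/T sensor labels

# Train a small multi-output network as a virtual force-torque sensor.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, Y)
wrench = net.predict(X[:1])                   # [Fx, Fy, Fz, Tx, Ty, Tz] estimate
```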
KI'09: Proceedings of the 32nd Annual German Conference on Advances in Artificial Intelligence | 2009
Heni Ben Amor; Erik Berger; David Vogt; Bernhard Jung