Markus Ehrenmann
Karlsruhe Institute of Technology
Publications
Featured research published by Markus Ehrenmann.
Archive | 2000
Ruediger Dillmann; Oliver Rogalla; Markus Ehrenmann; R. Zöllner; M. Bordegoni
Service robots require easy programming methods that allow an inexperienced human user to integrate motion and perception skills or complex problem-solving strategies. To achieve this goal, robots should learn from operators how and what to do, considering hardware and software constraints. Various approaches modelling man-machine skill transfer have been proposed, and systems following the Programming by Demonstration (PbD) paradigm developed within the last decade are getting closer to this goal. However, most of these systems do not let the user supervise and influence the process of program generation after the initial demonstration has been performed. In this paper, a principled learning methodology is discussed that allows human skills to be transferred and the learning process to be supervised, covering both subsymbolic and symbolic task knowledge. Several existing approaches are discussed and compared. Moreover, a system approach is presented that integrates the overall process of skill transfer from a human to a robotic manipulation system. One major goal is to modify the information gained from the demonstration in such a way that different target systems are supported. The resulting PbD system tends towards a hybrid learning approach in robotics, supporting natural programming based on human demonstrations and user advice.
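The skill-transfer pipeline described in this abstract can be pictured as demonstration segmentation, symbolic task representation, user advice, and mapping to a target system. Below is a minimal Python sketch of that flow; all names (TaskOperator, apply_user_advice, the event format) are hypothetical illustrations, not the paper's actual system.

```python
# A minimal sketch of a PbD pipeline with a user-advice hook, under the
# assumption that a demonstration arrives as (action, object) events.
from dataclasses import dataclass, field

@dataclass
class TaskOperator:
    """Symbolic step derived from a demonstration (e.g. 'grasp cup')."""
    action: str
    obj: str
    params: dict = field(default_factory=dict)

def segment_demonstration(sensor_events):
    """Map raw (action, object) events from the demonstration to symbolic operators."""
    return [TaskOperator(action, obj) for action, obj in sensor_events]

def apply_user_advice(plan, advice):
    """Let the user supervise the generated program: advice maps a step
    index to parameter overrides, modelling post-demonstration correction."""
    for idx, overrides in advice.items():
        plan[idx].params.update(overrides)
    return plan

def map_to_target(plan, emit):
    """Mapping stage: the same symbolic plan is rendered for different
    target systems via an injected code generator `emit`."""
    return [emit(op) for op in plan]

# Usage: one demonstration, user advice on step 1, one target system.
events = [("grasp", "cup"), ("move", "table"), ("release", "cup")]
plan = apply_user_advice(segment_demonstration(events), {1: {"speed": "slow"}})
print(map_to_target(plan, lambda op: f"robotA.{op.action}({op.obj!r}, {op.params})"))
```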
robot and human interactive communication | 2002
Oliver Rogalla; Markus Ehrenmann; Raoul Zöllner; R. Becher; Ruediger Dillmann
Giving advice to a mobile robot assistant still requires classical user interfaces. A more intuitive way of commanding is achieved by verbal or gesture commands. In this article, we present new approaches and enhancements of established methods in use in our laboratory. Our aim is to interact with a robot using natural and direct communication techniques to facilitate robust performance of simple tasks. In this paper, we describe the robot's vision and speech recognition systems. We then describe the robot control scheme that selects the appropriate robot reaction for solving basic manipulation tasks.
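As a rough illustration of resolving verbal and gesture commands into one robot reaction, here is a hedged sketch; the command vocabulary and the fusion rule are assumptions for illustration only, not the paper's control scheme.

```python
# A minimal sketch of multimodal command fusion: speech names the task,
# a pointing gesture supplies the missing target argument.
def fuse_commands(speech, gesture):
    """Resolve the two modalities into one action string."""
    if speech == "pick up" and gesture and gesture.startswith("point:"):
        return f"grasp object at {gesture.split(':', 1)[1]}"
    if speech == "stop":
        return "halt all motion"          # speech alone is sufficient
    if gesture == "wave":
        return "approach user"            # gesture alone is sufficient
    return "ask user to repeat"           # no confident interpretation

print(fuse_commands("pick up", "point:table"))  # -> grasp object at table
print(fuse_commands(None, "wave"))              # -> approach user
```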
robot and human interactive communication | 2002
Markus Ehrenmann; Raoul Zöllner; Oliver Rogalla; Ruediger Dillmann
Robot assistants will only reach a mass consumer market when they are easy to use. This applies especially to the way a user programs his robot system. The only approach that enables a non-expert robot user to teach a system complex tasks is programming by demonstration. This paper explains the basic concepts for mapping typical human actions performed in a household to a robot system: the recognition of the particular user actions, the task representation, and the mapping strategy itself. The execution of a mapped program can then be performed on a real robot. An experiment on a table-laying task is presented, proving the feasibility of this approach.
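The mapping concept named here (recognized actions, task representation, mapping strategy) could look roughly like the following sketch; the table-laying vocabulary and the action-to-skill table are illustrative assumptions, not the paper's representation.

```python
# A minimal sketch: recognized user actions form an ordered task
# representation, and a mapping strategy turns it into robot commands.
RECOGNIZED_ACTIONS = [           # output of the (omitted) action recognition
    ("pick", "plate"), ("place", "plate", "table"),
    ("pick", "fork"),  ("place", "fork", "left_of:plate"),
]

# Mapping strategy: each human action type maps to one robot skill.
ACTION_TO_SKILL = {"pick": "robot.grasp", "place": "robot.put"}

def map_task(actions):
    """Translate the task representation into executable robot calls."""
    program = []
    for action, *args in actions:
        skill = ACTION_TO_SKILL[action]
        program.append(f"{skill}({', '.join(args)})")
    return program

for line in map_task(RECOGNIZED_ACTIONS):
    print(line)
```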
intelligent robots and systems | 1998
Oliver Rogalla; Markus Ehrenmann; Rüdiger Dillmann
Since programming by demonstration (PbD) approaches have reached prime importance in interactive robot programming, sensor technology for tracking user actions and user behaviour has become more and more important. However, traditional methods based on single-sensor input are already at their limits, since PbD is an application area where real-time requirements play an important role. Thus, a sensor fusion approach is proposed that supplies input data to a finite state automaton. The input sources are, on the one hand, a data glove that classifies different grips and, on the other hand, a movable camera head that tracks the movements of the data glove and estimates object positions. Both sensor sources use time-efficient algorithms, since sensory data must be processed in real time. The efficiency of this approach is proven in a PbD environment where flexible infusion bags are handled and the respective actions and object positions are recognized.
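The core idea, fused glove and camera events driving a finite state automaton, can be sketched as follows; the states and event names are assumptions for illustration, not the automaton from the paper.

```python
# A minimal sketch of a finite state automaton fed by fused sensor events:
# glove grip classifications and camera tracking both emit events into
# one transition table that follows the manipulation.
TRANSITIONS = {
    ("idle",      "hand_near_object"): "approach",   # camera event
    ("approach",  "grip_closed"):      "holding",    # glove event
    ("holding",   "hand_moving"):      "transport",  # camera event
    ("transport", "grip_open"):        "idle",       # glove event: release
}

def run_automaton(events, state="idle"):
    """Advance the automaton with fused sensor events; unknown
    combinations leave the state unchanged."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
        print(f"{ev:17s} -> {state}")
    return state

run_automaton(["hand_near_object", "grip_closed", "hand_moving", "grip_open"])
```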
international conference on multisensor fusion and integration for intelligent systems | 2001
Markus Ehrenmann; Raoul Zöllner; Steffen Knoop; Rüdiger Dillmann
Good observation of a manipulation demonstration performed by a human teacher is crucial to the further processing steps in programming by demonstration, which is of prime importance in interactive robot programming. This paper outlines a sensor fusion concept for hand action tracking that observes hand posture, position, and applied forces. The input sources include a data glove that classifies several gestures and grasps, a stereo camera head, and several force sensors fitted to the finger tips. The hardware used is presented together with a first implementation of the measurement and fusion approaches. Accuracies from the experiments are also given.
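A fused hand-state record combining the three named sources (glove posture, stereo-tracked position, fingertip forces) might look like this sketch; the field names, units, and contact threshold are assumptions, not the paper's data model.

```python
# A minimal sketch of fusing glove, stereo camera, and force sensor
# readings into one time-stamped hand-state record.
from dataclasses import dataclass

@dataclass
class HandState:
    t: float                 # time stamp shared by all sensors, in s
    grasp: str               # glove classifier output, e.g. "power_grasp"
    position: tuple          # (x, y, z) from the stereo camera head, in m
    forces: tuple            # one value per instrumented finger tip, in N

    def in_contact(self, threshold=0.5):
        """Heuristic: the hand touches an object if any finger tip
        force exceeds the threshold."""
        return any(f > threshold for f in self.forces)

s = HandState(t=0.04, grasp="power_grasp", position=(0.31, 0.12, 0.85),
              forces=(1.2, 0.9, 0.1, 0.0, 0.0))
print(s.in_contact())  # -> True: grasp plus contact forces observed
```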
international conference on multisensor fusion and integration for intelligent systems | 1999
Markus Ehrenmann; Peter Steinhaus; Rüdiger Dillmann
Good observation of a manipulation demonstration performed by a human teacher is crucial to the further processing steps in programming by demonstration in interactive robot programming. This paper outlines a concept of how this can be done using visual and finger-measuring sensors. The input sources include a data glove that classifies several gestures and grasps, an active stereo vision head, and a fixed ceiling camera. The hardware used is presented together with the technical concepts for processing and fusing the acquired sensor information, the so-called elementary cognitive operators. All sensor sources use time-efficient algorithms, since the sensor data must be processed in real time.
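The notion of elementary cognitive operators, small composable units that each turn raw sensor readings into a symbolic observation, might be sketched as below; the operator set shown is a hypothetical illustration, not the operators defined in the paper.

```python
# A minimal sketch of composable processing operators: each consumes one
# raw sensor reading and emits a symbolic observation, so fusion becomes
# a merge over operator outputs.
def glove_operator(reading):
    """Classify the glove reading into a coarse grasp symbol."""
    flexed = sum(reading["finger_flexion"]) / len(reading["finger_flexion"])
    return {"grasp": "closed" if flexed > 0.6 else "open"}

def vision_operator(reading):
    """Reduce the camera reading to the nearest object label."""
    return {"nearest_object": min(reading["objects"], key=lambda o: o[1])[0]}

def fuse(*observations):
    """Merge the symbolic outputs of all operators into one world view."""
    merged = {}
    for obs in observations:
        merged.update(obs)
    return merged

glove = {"finger_flexion": [0.8, 0.9, 0.7, 0.8, 0.6]}
camera = {"objects": [("cup", 0.12), ("table", 0.60)]}
print(fuse(glove_operator(glove), vision_operator(camera)))
# -> {'grasp': 'closed', 'nearest_object': 'cup'}
```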
autonome mobile systeme fachgespräch | 1988
Peter Steinhaus; Markus Ehrenmann; Rüdiger Dillmann
Autonomous mobile systems still lack the ability to cover long distances in dynamic environments quickly and efficiently, since their mobile sensors capture only a very limited part of the environment and can include only that part in their planning. In this work, the authors present the design of a scalable architecture that, through the combined use of fixed global and mobile sensors, enables sufficiently fast and comprehensive environment modelling and path planning built on top of it, without the disadvantages of a monolithic structure such as overly large search spaces or transmission bottlenecks. The architecture is based on a distributed approach and permits multi-sensor fusion.
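One way to picture the combined use of fixed global and mobile sensors is a shared occupancy grid that fuses all sensor reports before path planning; the following sketch assumes a toy grid and a simple union rule, not the architecture's actual fusion method.

```python
# A minimal sketch of distributed occupancy fusion: a fixed ceiling
# sensor and the robot's on-board sensor each report occupied cells of
# a shared grid, and planning runs on the fused result.
GRID = 8  # 8x8 world, cells are (row, col)

def fuse_observations(*sensor_reports):
    """Union of occupied cells reported by all sensors; a real system
    would weight by sensor confidence, which is omitted here."""
    occupied = set()
    for report in sensor_reports:
        occupied |= report
    return occupied

def free_neighbors(cell, occupied):
    """Cells reachable in one step, usable by any grid path planner."""
    r, c = cell
    steps = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in steps
            if 0 <= nr < GRID and 0 <= nc < GRID and (nr, nc) not in occupied]

ceiling_cam = {(2, 2), (2, 3), (2, 4)}      # global, fixed sensor
onboard = {(5, 5)}                          # mobile, local sensor
world = fuse_observations(ceiling_cam, onboard)
print(free_neighbors((2, 1), world))        # plan around the fused obstacles
```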
Archive | 2000
Markus Ehrenmann; Tobias Lütticke; Rüdiger Dillmann
Mobile robot systems today are mostly commanded via graphical user interfaces on standard PCs, PDAs, or teach panels. Voice- and gesture-based commanding have been highly active research topics for a few years, with the goal of making the interfaces between humans and machines more direct and intuitive. Modelled on the movements humans make when guiding vehicles, the Institut für Prozessrechentechnik, Automation und Robotik is investigating the use of dynamic gestures for directing a mobile platform. "Dynamic" here means that, when interpreting the user's actions, only the trajectory of one of the two hands carries meaning; the joint positions of the fingers and wrists are not included in the gesture classification.
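Trajectory-only gesture classification of this kind might be sketched as nearest-template matching on the resampled hand path; the gesture templates and the distance measure below are illustrative assumptions, not the institute's actual classifier.

```python
# A minimal sketch of dynamic gesture classification: only the tracked
# hand path carries meaning, finger and wrist joints are ignored.
import math

TEMPLATES = {
    "come_here": [(0, 0), (0, 1), (0, 2), (0, 1), (0, 0)],  # beckoning
    "turn_left": [(0, 0), (-1, 0), (-2, 0), (-2, 1)],       # sweep left
    "stop":      [(0, 0), (0, 2)],                          # raise hand
}

def resample(path, n):
    """Pick n evenly spaced points so paths of different length compare."""
    idx = [round(i * (len(path) - 1) / (n - 1)) for i in range(n)]
    return [path[i] for i in idx]

def classify(trajectory, n=8):
    """Nearest-template matching on the resampled hand path."""
    def dist(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b))
    t = resample(trajectory, n)
    return min(TEMPLATES, key=lambda g: dist(t, resample(TEMPLATES[g], n)))

observed = [(0, 0), (0, 0.9), (0, 2.1), (0, 1.1), (0, 0.1)]  # tracked hand path
print(classify(observed))  # -> come_here
```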
Archive | 2003
Markus Ehrenmann; Shoji Tajima; Oliver Rogalla; Stefan Vacek; Rüdiger Dillmann
Archive | 1999
Holger Friedrich; Vera Grossmann; Markus Ehrenmann; Oliver Rogalla; Raoul D. Zoellner; Rüdiger Dillmann