
Publication


Featured research published by Rainer Jäkel.


International Conference on Robotics and Automation (ICRA) | 2010

Representation and constrained planning of manipulation strategies in the context of Programming by Demonstration

Rainer Jäkel; Sven R. Schmidt-Rohr; Martin Lösch; Rüdiger Dillmann

In Programming by Demonstration, a flexible representation of manipulation motions is necessary to learn and generalize from human demonstrations. In contrast to subsymbolic representations of trajectories, e.g. based on a Gaussian Mixture Model, a partially symbolic representation of manipulation strategies based on a temporal satisfaction problem with domain constraints is developed. By using constrained motion planning and a geometric constraint representation, generalization to different robot systems and new environments is achieved. In order to plan with learned manipulation strategies, the RRT-based algorithm by Stilman et al. is extended to account for the fact that multiple sets of constraints may apply during the extension of the search tree.
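The extension described above can be illustrated with a minimal sketch. This is not the authors' implementation: the 2D configuration space, the predicate-based constraint sets, and the `extend` helper are simplifying assumptions; the key idea shown is that a new tree node is accepted if *any* of several alternative constraint sets admits it.

```python
import math
import random

def satisfies(q, constraint_set):
    """True if configuration q meets every constraint in the set."""
    return all(c(q) for c in constraint_set)

def extend(tree, q_rand, constraint_sets, step=0.1):
    """One RRT extension step: grow toward q_rand from the nearest node,
    keeping the new node only if SOME constraint set admits it."""
    q_near = min(tree, key=lambda q: math.dist(q, q_rand))
    d = math.dist(q_near, q_rand)
    if d == 0:
        return None
    q_new = tuple(a + step * (b - a) / d for a, b in zip(q_near, q_rand))
    # Unlike a single-constraint RRT, any one of the alternative
    # constraint sets may validate the extension.
    if any(satisfies(q_new, cs) for cs in constraint_sets):
        tree.append(q_new)
        return q_new
    return None

# Two alternative constraint sets: stay in the left half OR the top half.
left = [lambda q: q[0] <= 0.5]
top = [lambda q: q[1] >= 0.5]
tree = [(0.2, 0.8)]
random.seed(0)
for _ in range(200):
    extend(tree, (random.random(), random.random()), [left, top])
```

Every node in the resulting tree satisfies at least one of the two constraint sets, so the planner can exploit whichever region of the search space is reachable.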


Künstliche Intelligenz | 2010

Advances in Robot Programming by Demonstration

Rüdiger Dillmann; Tamim Asfour; Martin Do; Rainer Jäkel; Alexander Kasper; Pedram Azad; Ales Ude; Sven R. Schmidt-Rohr; Martin Lösch

Robot Programming by Demonstration (PbD) has been treated in the literature as a promising way to teach robots new skills intuitively. In this paper we describe our current work in the field toward the implementation of a PbD system that allows robots to learn continuously from human observation, to build generalized representations of human demonstrations, and to apply such representations to new situations.


International Conference on Advanced Robotics (ICAR) | 2011

Using spatial relations of objects in real world scenes for scene structuring and scene understanding

Alexander Kasper; Rainer Jäkel; Rüdiger Dillmann

Given a room full of individual objects in a generic household scene, one can observe that the objects are mostly not placed randomly but in a certain order. Because of this, each object can be described by the surrounding objects and its spatial relations to them. This paper presents several types of spatial relationships that can be deduced from object positions in 3D, as well as an approach to retrieve these relations from real-world scenes via annotation of colored 3D point clouds gathered with a sensor. Finally, a way to use this data to make predictions about an unknown object based on its surrounding objects is presented.
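How symbolic relations can be deduced from 3D positions can be sketched as follows. The function, its relation names, and the thresholds are illustrative assumptions, not the paper's actual relation taxonomy:

```python
def spatial_relations(pos_a, pos_b, near_dist=1.0, z_tol=0.05):
    """Deduce simple symbolic relations of object A relative to B
    from 3D positions (x, y, z with z pointing up)."""
    dx, dy, dz = (a - b for a, b in zip(pos_a, pos_b))
    rels = []
    if dz > z_tol:
        rels.append("above")
    elif dz < -z_tol:
        rels.append("below")
    if (dx * dx + dy * dy) ** 0.5 < near_dist:
        rels.append("near")
    return rels

# A cup 0.3 m above and horizontally close to a table surface point.
relations = spatial_relations((0.1, 0.0, 1.05), (0.0, 0.0, 0.75))
```

A scene description is then the set of such relations over all object pairs, and an unknown object can be characterized by the relations it participates in.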


IEEE-RAS International Conference on Humanoid Robots | 2010

Learning of generalized manipulation strategies in the context of Programming by Demonstration

Rainer Jäkel; Sven R. Schmidt-Rohr; Martin Lösch; Alexander Kasper; Rüdiger Dillmann

In Programming by Demonstration, abstract manipulation knowledge has to be learned that can be used by an autonomous robot system in different environments with arbitrary obstacles. In this work, manipulation strategies are learned by observation of a human teacher and represented as a flexible, constraint-based representation of the search space for motion planning. The learned manipulation strategy contains a large set of automatically generated features, which are generalized using additional demonstrations by the teacher. The generalized manipulation strategy is executed on a real bimanual anthropomorphic robot system in different environments with arbitrary obstacles using constrained motion planning.
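One common way to generalize features across additional demonstrations is to turn each feature into an interval constraint spanning the observed values; more demonstrations widen the intervals. This is a minimal sketch of that idea under assumed feature names, not the paper's actual feature space:

```python
def learn_interval_constraints(demonstrations, margin=0.0):
    """From several demonstrations (each a dict of feature -> value),
    derive one interval constraint per feature spanning the observed
    values; additional demonstrations can only widen the intervals,
    i.e. generalize the strategy."""
    features = demonstrations[0].keys()
    constraints = {}
    for f in features:
        values = [d[f] for d in demonstrations]
        constraints[f] = (min(values) - margin, max(values) + margin)
    return constraints

# Hypothetical features extracted from three demonstrations.
demos = [
    {"hand_height": 0.82, "grasp_angle": 0.10},
    {"hand_height": 0.90, "grasp_angle": 0.25},
    {"hand_height": 0.86, "grasp_angle": 0.18},
]
constraints = learn_interval_constraints(demos)
```

The resulting intervals define the constrained search space handed to the motion planner.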


International Conference on Robotics and Automation (ICRA) | 2010

Learning of probabilistic grasping strategies using Programming by Demonstration

Rainer Jäkel; Sven R. Schmidt-Rohr; Zhixing Xue; Martin Lösch; Rüdiger Dillmann

The planning of grasping motions is demanding due to the complexity of modern robot systems. In Programming by Demonstration, the observation of a human teacher provides additional information about grasping strategies. Rosell showed that the motion planning problem can be simplified by globally restricting the set of valid configurations to a learned subspace. In this work, the transformation of a human grasping strategy to an anthropomorphic robot system is described by a probabilistic model, called the variation model, in order to account for modeling and transformation errors. The variation model expresses a soft preference for grasping motions similar to the demonstration and therefore induces a non-uniform sampling distribution on the configuration space. The sampling distribution is used in a standard probabilistic motion planner to plan grasping motions efficiently for new objects in new environments.
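The induced non-uniform sampling distribution can be sketched as a mixture: most samples are Gaussian perturbations of demonstrated waypoints (standing in for the variation model), while a uniform component preserves coverage of the whole space. The mixture weight, the Gaussian form, and the 2D space are simplifying assumptions:

```python
import random

def biased_sample(demo_path, sigma, p_demo=0.8, bounds=(0.0, 1.0)):
    """Sampling distribution for a probabilistic planner: with
    probability p_demo, draw a configuration near a random waypoint of
    the demonstration (a Gaussian stand-in for the variation model);
    otherwise sample uniformly to retain global coverage."""
    if random.random() < p_demo:
        q = random.choice(demo_path)
        return tuple(random.gauss(x, sigma) for x in q)
    return tuple(random.uniform(*bounds) for _ in demo_path[0])

random.seed(1)
# Demonstrated path along the diagonal of a unit configuration space.
demo = [(0.2, 0.2), (0.5, 0.5), (0.8, 0.8)]
samples = [biased_sample(demo, sigma=0.05) for _ in range(1000)]
```

A sampling-based planner that draws from `biased_sample` instead of a uniform distribution concentrates its search near the demonstrated motion while remaining probabilistically complete.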


International Journal of Social Robotics | 2012

Learning of Planning Models for Dexterous Manipulation Based on Human Demonstrations

Rainer Jäkel; Sven R. Schmidt-Rohr; Steffen W. Rühl; Alexander Kasper; Zhixing Xue; Rüdiger Dillmann

In the human environment, service robots have to be able to autonomously manipulate a large variety of objects in a workspace restricted by collisions with obstacles, self-collisions and task constraints. Planning enables the robot system to generalize predefined or learned manipulation knowledge to new environments. For dexterous manipulation tasks, the manual definition of planning models is time-consuming and error-prone. In this work, planning models for dexterous tasks are learned from multiple human demonstrations using a general feature space including automatically generated contact constraints, which are automatically relaxed to account for the correspondence problem. In order to execute the learned planning model with different objects, the contact location is transformed to the given object geometry using morphing. The initial, overspecialized planning model is generalized using a previously described, parallelized optimization algorithm with the goal of finding a maximal subset of task constraints that admits a solution to a set of test problems. Experiments on two different dexterous tasks show the applicability of the learning approach.
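The transfer of a contact location to a new object geometry can be illustrated with a deliberately crude stand-in for mesh morphing: expressing the contact point in normalized coordinates of the source object's bounding box and mapping it into the target object's box. The paper morphs actual geometry; this axis-aligned version is purely an assumption for illustration:

```python
def transfer_contact(contact, src_bbox, dst_bbox):
    """Hypothetical stand-in for mesh morphing: express a contact point
    in normalized coordinates of the source object's bounding box
    (min corner, max corner) and map it into the target object's box."""
    (smin, smax), (dmin, dmax) = src_bbox, dst_bbox
    out = []
    for c, lo_s, hi_s, lo_d, hi_d in zip(contact, smin, smax, dmin, dmax):
        t = (c - lo_s) / (hi_s - lo_s)   # normalized position in source
        out.append(lo_d + t * (hi_d - lo_d))
    return tuple(out)

# A rim contact on a small cup mapped onto a larger cup.
p = transfer_contact((0.04, 0.0, 0.08),
                     ((0.0, -0.04, 0.0), (0.04, 0.04, 0.08)),
                     ((0.0, -0.06, 0.0), (0.06, 0.06, 0.12)))
```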


Intelligent Robots and Systems (IROS) | 2011

Distributed generalization of learned planning models in robot programming by demonstration

Rainer Jäkel; Pascal Meissner; Sven R. Schmidt-Rohr; Rüdiger Dillmann

In Programming by Demonstration (PbD), one of the key problems for autonomous learning is to automatically extract the relevant features of a manipulation task, which has a significant impact on the generalization capabilities. In this paper, task features are encoded as constraints of a learned planning model. In order to extract the relevant constraints, the human teacher demonstrates a set of tests, e.g. a scene with different objects, and the robot tries to execute the planning model on each test using constrained motion planning. Based on statistics about which constraints failed during the planning process, multiple hypotheses about a maximal subset of constraints that still allows a solution to be found in all tests are refined in parallel using an evolutionary algorithm. The algorithm was tested in seven experiments on two robot systems.
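The search for a maximal constraint subset can be sketched as a small genetic algorithm over bit masks, with a callback standing in for the constrained motion planner. The fitness function, population sizes, and the toy planner are assumptions; the paper additionally exploits per-constraint failure statistics, which are omitted here:

```python
import random

def evolve_constraint_subset(n_constraints, solvable, pop=20, gens=40, seed=0):
    """Evolutionary search for a maximal subset of constraints (a bit
    mask) under which every test problem is still solvable; 'solvable'
    is a callback standing in for the constrained motion planner."""
    rng = random.Random(seed)

    def fitness(mask):
        # Infeasible subsets are penalized; among feasible ones,
        # prefer keeping more constraints (i.e. generalize minimally).
        return sum(mask) if solvable(mask) else -1

    population = [[rng.random() < 0.5 for _ in range(n_constraints)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # elitist selection
        children = []
        for parent in parents:
            child = parent[:]
            i = rng.randrange(n_constraints)
            child[i] = not child[i]              # point mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy planner: constraints 0 and 1 together make every test infeasible.
best = evolve_constraint_subset(5, lambda m: not (m[0] and m[1]))
```

The returned mask drops just enough constraints to keep all tests solvable, mirroring the paper's goal of a maximal feasible subset.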


Intelligent Robots and Systems (IROS) | 2010

Programming by demonstration of probabilistic decision making on a multi-modal service robot

Sven R. Schmidt-Rohr; Martin Lösch; Rainer Jäkel; Rüdiger Dillmann

In this paper we propose a process that generates abstract service robot mission representations, used during execution for autonomous, probabilistic decision making, by observing human demonstrations. The observation process is based on the same perceptive components used by the robot during execution, recording dialog between humans, human motion, as well as object poses. This leads to a natural, practical learning process, avoiding dedicated demonstration centers or kinesthetic teaching. By generating mission models for probabilistic decision making as Partially Observable Markov Decision Processes (POMDPs), the robot is able to deal with the uncertain and dynamic environments encountered in real-world settings during execution. Service robot missions in a cafeteria setting, including the modalities of mobility, natural human-robot interaction and object grasping, have been learned and executed by this system.
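At the core of POMDP-based decision making is the Bayes-filter belief update, sketched below with dictionary-based transition and observation models. The toy "cup full/empty" mission state is an invented example, not taken from the paper:

```python
def belief_update(belief, action, observation, T, O):
    """Bayes filter underlying POMDP decision making: fold the
    transition model T[s][a] -> {s': p} and the observation model
    O[s'] -> {o: p} into the new belief over states."""
    new_belief = {}
    for s2 in belief:
        predicted = sum(belief[s] * T[s][action].get(s2, 0.0)
                        for s in belief)
        new_belief[s2] = O[s2].get(observation, 0.0) * predicted
    z = sum(new_belief.values())                 # normalization
    return {s: p / z for s, p in new_belief.items()}

# Toy mission state: is the guest's cup full or empty?
T = {"full":  {"wait": {"full": 0.7, "empty": 0.3}},
     "empty": {"wait": {"empty": 1.0}}}
O = {"full":  {"see_full": 0.9, "see_empty": 0.1},
     "empty": {"see_full": 0.2, "see_empty": 0.8}}
b = belief_update({"full": 0.5, "empty": 0.5}, "wait", "see_empty", T, O)
```

After waiting and observing an empty-looking cup, the belief shifts strongly toward "empty", and the robot's policy can act on that belief rather than on a single assumed state.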


Intelligent Robots and Systems (IROS) | 2013

Context aware shared autonomy for robotic manipulation tasks

Thomas Witzig; J. Marius Zöllner; Dejan Pangercic; Sarah Osentoski; Rainer Jäkel; Rüdiger Dillmann

This paper describes a collaborative human-robot system that provides context information to enable more effective robotic manipulation. We take advantage of the semantic knowledge of a human co-worker who provides additional context information and interacts with the robot through a user interface. A Bayesian Network encodes the dependencies among the information provided by the user. The output of this model is a ranked list of the grasp poses best suited for a given task, which is then passed to the motion planner. Our system was implemented in ROS and tested on a PR2 robot. We compared the system to state-of-the-art implementations using quantitative (e.g. success rate, execution times) as well as qualitative (e.g. user convenience, cognitive load) metrics. We conducted a user study in which eight subjects were asked to perform a generic manipulation task, for instance pouring from a bottle or moving a cereal box, with a set of state-of-the-art shared autonomy interfaces. Our results indicate that a context-aware interface provides benefits not currently offered by other state-of-the-art implementations.
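The ranking step can be illustrated with a drastically simplified model: a lookup table playing the role of the Bayesian Network's posterior P(suitable | approach direction, task). The table values, grasp names, and task labels are invented for illustration and do not reflect the paper's network structure:

```python
def rank_grasps(grasps, task, cpt):
    """Score candidate grasp poses by how suitable the user-provided
    task context says they are, then sort best-first. cpt is a toy
    stand-in for the Bayesian Network: P(suitable | approach, task)."""
    scored = [(cpt.get((g["approach"], task), 0.1), g["name"])
              for g in grasps]
    return sorted(scored, reverse=True)

# Hypothetical conditional probabilities: side grasps suit pouring,
# top grasps suit pick-and-place.
cpt = {("top", "pour"): 0.2, ("side", "pour"): 0.9,
       ("top", "place"): 0.8, ("side", "place"): 0.5}
grasps = [{"name": "g_top", "approach": "top"},
          {"name": "g_side", "approach": "side"}]
ranking = rank_grasps(grasps, "pour", cpt)
```

The top-ranked pose is what gets handed to the motion planner, so the user's context directly shapes which grasp is attempted first.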


International Conference on Advanced Robotics (ICAR) | 2013

Recognizing scenes with hierarchical Implicit Shape Models based on spatial object relations for Programming by Demonstration

Pascal Meissner; Reno Reckling; Rainer Jäkel; Sven R. Schmidt-Rohr; Rüdiger Dillmann

We present an approach for recognizing scenes, consisting of spatial relations between objects, in unstructured indoor environments that change over time. Object relations are represented by full six Degree-of-Freedom (DoF) coordinate transformations between objects. They are acquired from object poses that are visually perceived while people demonstrate actions typically performed in a given scene. We recognize scenes using an Implicit Shape Model (ISM) that is similar to the Generalized Hough Transform, extended to take orientations between objects into account. This includes a verification step that allows us to infer not only the existence of scenes, but also the objects they are composed of. ISMs are restricted to representing scenes as star topologies of relations, which insufficiently approximate object relations in complex dynamic settings, so false positive detections may occur. Our solution is a set of exchangeable heuristics for deciding which object relations have to be represented explicitly in separate ISMs; the object relations are thus modeled by ISMs themselves. We use hierarchical agglomerative clustering, employing these heuristics, to construct a tree of ISMs. Learning and recognition of scenes with a single ISM is thereby naturally extended to multiple ISMs.
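The ISM voting scheme can be sketched in a few lines: each observed object casts a vote for the scene reference position using its stored relative offset, and clustered votes indicate a scene instance. This sketch is 2D, position-only, and uses an invented "place setting" model; the paper's ISMs additionally vote over full 6-DoF transformations:

```python
def recognize_scene(observed, model, tol=0.1):
    """Simplified Implicit Shape Model vote: each observed object casts
    a vote for the scene reference position via its stored offset;
    agreement among votes is the recognition confidence."""
    votes = []
    for name, pos in observed.items():
        if name in model:
            offset = model[name]
            votes.append(tuple(p - o for p, o in zip(pos, offset)))
    # Count votes agreeing with the first one within tolerance.
    ref = votes[0]
    support = sum(1 for v in votes
                  if all(abs(a - b) <= tol for a, b in zip(v, ref)))
    return ref, support / len(model)

# Hypothetical model: object offsets relative to the scene reference.
model = {"plate": (0.0, 0.0), "fork": (-0.15, 0.0), "knife": (0.15, 0.0)}
observed = {"plate": (1.0, 0.5), "fork": (0.85, 0.5), "knife": (1.15, 0.52)}
ref, confidence = recognize_scene(observed, model)
```

All three votes cluster at the same reference point, so the scene is recognized with full confidence; a missing or misplaced object would lower the support fraction.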

Collaboration


Dive into Rainer Jäkel's collaborations.

Top Co-Authors

Rüdiger Dillmann, Karlsruhe Institute of Technology
Sven R. Schmidt-Rohr, Karlsruhe Institute of Technology
Martin Lösch, Karlsruhe Institute of Technology
Pascal Meissner, Karlsruhe Institute of Technology
Alexander Kasper, Karlsruhe Institute of Technology
Zhixing Xue, Forschungszentrum Informatik
Fabian Romahn, Karlsruhe Institute of Technology
Steffen W. Rühl, Forschungszentrum Informatik
Ales Ude, Karlsruhe Institute of Technology
Dirk Mayer, Karlsruhe Institute of Technology