Christof Elbrechter
Bielefeld University
Publication
Featured research published by Christof Elbrechter.
International Conference on Robotics and Automation | 2009
Ingo Lütkebohle; Julia Peltason; Lars Schillingmann; Britta Wrede; Sven Wachsmuth; Christof Elbrechter; Robert Haschke
If robots are to succeed in novel tasks, they must be able to learn from humans. To improve such human-robot interaction, a system is presented that provides dialog structure and engages the human in an exploratory teaching scenario. We specifically target untrained users, who are supported by mixed-initiative interaction using verbal and non-verbal modalities. We present the principles of dialog structuring based on an object learning and manipulation scenario. System development follows an interactive evaluation approach, and we present both an extensible, event-based interaction architecture that realizes mixed initiative and evaluation results based on a video study of the system. We show that users benefit from the provided dialog structure, which leads to predictable and successful human-robot interaction.
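The event-based, mixed-initiative design lends itself to a publish/subscribe structure. The following Python sketch illustrates the idea under invented component and event names (none are taken from the paper): both human input and robot perception publish events onto a shared bus, and a dialog manager reacts to whichever side takes the initiative.

    from collections import defaultdict

    class EventBus:
        # Routes named events from publishers to subscribed handlers.
        def __init__(self):
            self.handlers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self.handlers[event_type].append(handler)

        def publish(self, event_type, payload=None):
            for handler in self.handlers[event_type]:
                handler(payload)

    class DialogManager:
        # Provides dialog structure: reacts to human and robot initiative.
        def __init__(self, bus):
            self.bus = bus
            bus.subscribe("human.utterance", self.on_human_utterance)
            bus.subscribe("robot.object_detected", self.on_object_detected)

        def on_human_utterance(self, text):
            # The human takes the initiative, e.g. by naming an object.
            self.bus.publish("robot.say", "Okay, I will learn '%s'." % text)

        def on_object_detected(self, label):
            # The robot takes the initiative and asks an exploratory question.
            self.bus.publish("robot.say", "What is this %s called?" % label)

    bus = EventBus()
    bus.subscribe("robot.say", print)
    DialogManager(bus)
    bus.publish("robot.object_detected", "object")  # robot initiative
    bus.publish("human.utterance", "a cup")         # human initiative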
Künstliche Intelligenz | 2010
Jonathan Maycock; Daniel Dornbusch; Christof Elbrechter; Robert Haschke; Thomas Schack; Helge Ritter
Grasping and manual interaction for robots has so far largely been approached with an emphasis on physics and control aspects. Given the richness of human manual interaction, we argue for the consideration of the wider field of “manual intelligence” as a perspective for manual action research that brings the cognitive nature of human manual skills to the foreground. We briefly sketch part of a research agenda along these lines, argue for the creation of a manual interaction database as an important cornerstone of such an agenda, and describe the manual interaction lab recently set up at CITEC to realize this goal and to connect the efforts of robotics and cognitive science researchers working towards a more integrated understanding of manual intelligence.
IEEE-RAS International Conference on Humanoid Robots | 2010
Jan Frederik Steffen; Christof Elbrechter; Robert Haschke; Helge Ritter
We consider the complex task of coordinating two five-fingered anthropomorphic robot hands for taking a jar passed from a human user and unscrewing its cap. Using a pair of 7-DOF redundant arms for operating the hands, we study how the incorporation of human movement strategies at the finger and arm levels can aid in the solution of the overall bimanual task. At the finger level, we employ a finger control manifold for the unscrewing motion that has been synthesized with a kernel approach applied to human motion data captured with a data glove. At the arm level, we use a heuristic motivated by the observation of human arm movements to enhance the space of pass-over configurations that the system can successfully handle. In addition, we provide a brief description of the architecture of the overall system, which comprises 54 motor degrees of freedom and integrates camera vision, arm and finger control, as well as a speech output component for interaction with the human user.
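To make the finger-level idea concrete, here is a minimal sketch of a kernel-synthesized motion manifold: an RBF regressor is fitted to (mock) data-glove recordings so that a single phase parameter drives a full 20-DOF posture. The kernel width, regularization, and data are illustrative assumptions, not the paper's actual formulation.

    import numpy as np

    def rbf_kernel(a, b, width=0.1):
        # Gaussian kernel between two sets of scalar phase values.
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

    # Mock "data glove" recording: 20 joint angles along one unscrewing cycle.
    phases = np.linspace(0.0, 1.0, 50)      # manifold parameter in [0, 1]
    joints = np.sin(2 * np.pi * phases)[:, None] * np.ones((50, 20))

    # Kernel ridge regression: solve (K + reg*I) w = joints.
    K = rbf_kernel(phases, phases)
    weights = np.linalg.solve(K + 1e-6 * np.eye(len(phases)), joints)

    def manifold(phase):
        # Map a scalar phase to a full 20-DOF finger posture.
        return rbf_kernel(np.atleast_1d(phase), phases) @ weights

    # Control along the manifold reduces unscrewing to a single parameter.
    print(manifold(0.25).shape)  # (1, 20)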
IEEE-RAS International Conference on Humanoid Robots | 2012
Christof Elbrechter; Robert Haschke; Helge Ritter
The ability to manipulate deformable objects, such as textiles or paper, is a major prerequisite to bringing the capabilities of articulated robot hands closer to the level of manual intelligence exhibited by humans. We concentrate on the manipulation of paper, which affords us a rich interaction domain that has not yet been solved for anthropomorphic robot hands. Robust tracking and physically plausible modeling of the paper, as well as feedback-based robot control, are crucial components for this task. This paper makes two novel contributions to this area. The first concerns real-time modeling and visual tracking. Our technique models not only the bending of a sheet of paper but also paper crease lines, which allows us to monitor deformations. The second contribution concerns enabling an anthropomorphic robot to fold paper, and is accomplished by introducing a set of tactile- and vision-based closed-loop controllers.
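As a rough illustration of the closed-loop idea, the sketch below blends a vision error (tracked crease position) with a tactile error (contact force) into fingertip corrections. Gains, set-points, and sensor interfaces are placeholders; the paper's actual controllers are not reproduced here.

    def fold_step(crease_error, contact_force, target_force=1.0,
                  kp_vision=0.5, kp_force=0.2):
        # Vision loop: move the fingertip to reduce the tracked crease error.
        position_correction = -kp_vision * crease_error
        # Tactile loop: regulate contact force so the paper neither slips
        # away nor tears under the fingertip.
        force_correction = kp_force * (target_force - contact_force)
        return position_correction, force_correction

    # Example step: the crease is 4 mm off target, contact force is too low.
    dp, df = fold_step(crease_error=0.004, contact_force=0.6)
    print(dp, df)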
Intelligent Robots and Systems | 2012
André Ückermann; Christof Elbrechter; Robert Haschke; Helge Ritter
We present an algorithm to segment an unstructured tabletop scene. Operating on the depth image of a Kinect camera, the algorithm robustly separates objects of previously unknown shape in cluttered scenes of stacked and partially occluded objects. The model-free algorithm finds smooth surface patches, which are subsequently combined to form object hypotheses. We evaluate the algorithm regarding its robustness and real-time capabilities and discuss its advantages over existing approaches as well as its weak spots to be addressed in future work. We also report on an autonomous grasping experiment with the Shadow Robot Hand that employs the estimated shape and pose of the segmented objects.
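A minimal sketch of the model-free idea: estimate per-pixel surface normals from the depth image and flood-fill neighbouring pixels whose normals agree into smooth patches. The normal estimation and threshold are simplified assumptions; the published algorithm additionally merges patches into object hypotheses.

    import numpy as np

    def normals_from_depth(depth):
        # Approximate per-pixel surface normals from depth gradients.
        dzdy, dzdx = np.gradient(depth)
        n = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
        return n / np.linalg.norm(n, axis=2, keepdims=True)

    def segment_smooth_patches(depth, dot_thresh=0.98):
        # Flood-fill 4-connected pixels whose normals agree into patch labels.
        normals = normals_from_depth(depth)
        labels = np.zeros(depth.shape, dtype=int)
        next_label = 0
        for seed in zip(*np.nonzero(labels == 0)):
            if labels[seed]:
                continue
            next_label += 1
            stack = [seed]
            while stack:
                y, x = stack.pop()
                if labels[y, x]:
                    continue
                labels[y, x] = next_label
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]
                            and not labels[ny, nx]
                            and normals[y, x] @ normals[ny, nx] > dot_thresh):
                        stack.append((ny, nx))
        # In the full pipeline, patches would next be merged into object
        # hypotheses and handed to grasp planning.
        return labels

    labels = segment_smooth_patches(np.random.rand(48, 64))
    print(labels.max(), "patches found")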
IEEE-RAS International Conference on Humanoid Robots | 2012
Matthias Schröder; Christof Elbrechter; Jonathan Maycock; Robert Haschke; Mario Botsch; Helge Ritter
We extend a recent low-cost, real-time method for hand tracking and pose estimation to control an anthropomorphic robot hand. The approach is data-driven and based on matching the current image of a color-gloved hand with the best-fitting image in a database to retrieve the posture. Then, using depth information from a Kinect camera and a color-sensitive iterative closest point-to-triangle algorithm, we can very accurately estimate the absolute position and orientation of the hand. The effectiveness of the approach is demonstrated in an application in which we actively control a 20 DOF anthropomorphic robot hand in a manual interaction grasping task.
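The retrieval step can be sketched as a nearest-neighbour lookup over precomputed image descriptors. The descriptors and database below are mock stand-ins, and the subsequent ICP refinement against the Kinect depth data is only indicated in a comment.

    import numpy as np

    rng = np.random.default_rng(0)
    descriptors = rng.random((10000, 64))  # precomputed glove-image features
    postures = rng.random((10000, 20))     # corresponding 20-DOF joint angles

    def retrieve_posture(query):
        # Nearest-neighbour lookup: best-matching image -> stored posture.
        distances = np.linalg.norm(descriptors - query, axis=1)
        return postures[np.argmin(distances)]

    # The retrieved posture would then be refined in absolute position and
    # orientation by the colour-sensitive point-to-triangle ICP against the
    # Kinect depth data (not shown here).
    print(retrieve_posture(rng.random(64)).shape)  # (20,)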
Intelligent Robots and Systems | 2011
Christof Elbrechter; Robert Haschke; Helge Ritter
The ability to manipulate deformable objects, such as textiles or paper, is a major prerequisite to bringing the capabilities of articulated robot hands closer to the level of manual intelligence exhibited by humans. We concentrate on the manipulation of paper, which affords us a rich interaction domain that has not yet been solved for anthropomorphic robot hands. A key ability needed here is the robust tracking and modelling of paper under conditions of occlusion and strong deformation. We present a marker-based framework that realizes these properties robustly and in real time. We compare a purely mathematical representation of the paper manifold with a soft-body physics model and demonstrate the use of our visual tracking method to facilitate the coordination of two anthropomorphic 20 DOF Shadow Dexterous Hands while they grasp a flat-lying piece of paper, using a combination of visually guided bulging and pinching.
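As an illustration of the purely mathematical alternative, the sketch below fits a smooth parametric surface to tracked marker positions by least squares. The biquadratic basis and mock marker grid are assumptions for illustration; neither the paper's manifold representation nor the soft-body model is shown.

    import numpy as np

    # Mock marker grid: material coordinates (u, v) on the sheet and the
    # observed 3-D marker positions of a gently bent page.
    u, v = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 4))
    uv = np.column_stack((u.ravel(), v.ravel()))
    points = np.column_stack((uv, 0.1 * np.sin(np.pi * uv[:, 0])))

    # Least-squares fit of biquadratic monomials in (u, v) to the markers.
    U, V = uv[:, 0], uv[:, 1]
    A = np.column_stack((np.ones_like(U), U, V, U * V, U ** 2, V ** 2))
    coeffs, *_ = np.linalg.lstsq(A, points, rcond=None)

    def surface(u, v):
        # Evaluate the fitted sheet model at material coordinates (u, v).
        return np.array([1, u, v, u * v, u ** 2, v ** 2]) @ coeffs

    print(surface(0.5, 0.5))  # interpolated 3-D point at the sheet centre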
Intelligent Robots and Systems | 2013
Qiang Li; Christof Elbrechter; Robert Haschke; Helge Ritter
We propose a feedback-based solution for the accurate in-hand manipulation of an unknown object. The method does not explicitly model friction or surface geometry details, but employs a fast feedback loop based on visual and tactile sensing to perform robust manipulation even in the presence of unexpected slippage or rolling. At every control step, fingertip motions are computed to realize the intended object relocation, employing a composite position/force controller. Subsequently, inverse hand kinematics is employed to retrieve joint-level motions, which are executed on the robot with a position servo loop. We evaluate our method on a setup of two KUKA robot arms, each equipped with a tactile sensor array as end-effector, performing the object manipulation task. The experimental results show the feasibility of the proposed method, even in the presence of slippage or external disturbances.
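One iteration of such a feedback loop might look as follows: a composite position/force law computes a fingertip motion, and a damped least-squares inverse kinematics maps it to joint increments for the position servo. All gains, the Jacobian, and the set-points are illustrative assumptions.

    import numpy as np

    def composite_control(pose_error, force_error, kp=1.0, kf=0.05):
        # Blend position tracking with force regulation at the fingertip.
        return kp * pose_error + kf * force_error

    def inverse_kinematics(fingertip_delta, jacobian):
        # Damped least-squares: joint motion realizing a fingertip motion.
        J = jacobian
        return J.T @ np.linalg.solve(J @ J.T + 1e-4 * np.eye(3),
                                     fingertip_delta)

    # One control step with mock sensor readings and a random 3x7 Jacobian.
    rng = np.random.default_rng(1)
    jacobian = rng.random((3, 7))
    delta = composite_control(pose_error=np.array([0.002, 0.0, 0.0]),
                              force_error=np.array([0.0, 0.0, -0.1]))
    dq = inverse_kinematics(delta, jacobian)
    print(dq)  # joint increments handed to the position servo loop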
Towards Service Robots for Everyday Environments | 2012
Ingo Lütkebohle; Julia Peltason; Lars Schillingmann; Christof Elbrechter; Sven Wachsmuth; Britta Wrede; Robert Haschke
Integrating the components described in the previous articles of this chapter, we introduce the Bielefeld “Curious Robot”, which is able to acquire new knowledge and skills in direct human-robot interaction. This paper focuses on the cognitive architecture of the overall system. We propose to combine (i) a communication layer based on a generic, human-accessible XML data format, (ii) multiple low-level sensor and control processes publishing their sensor information into the system and receiving commands or parameterizations from higher-level deliberative processes, and (iii) high-level coordination processes based on hierarchical state machines. The efficiency of the proposed approach is shown in an interactive tutoring scenario, in which the Bielefeld “Curious Robot”, a bimanual robot system, is to learn to identify, grasp, and clean various everyday objects from a table. The capability of the system to interact with laypersons is demonstrated in a user study.
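A toy sketch of how two of the proposed layers could interact: XML-encoded events on the communication layer drive a small hierarchical state machine on the coordination layer. The event names, states, and XML schema are invented for illustration.

    import xml.etree.ElementTree as ET

    class HSM:
        # Tiny hierarchical state machine: dotted names encode the hierarchy,
        # and (state, event) pairs define the transitions.
        def __init__(self):
            self.transitions = {
                ("tutoring.identify", "label_given"): "tutoring.grasp",
                ("tutoring.grasp", "grasp_done"): "tutoring.clean",
                ("tutoring.clean", "clean_done"): "tutoring.identify",
            }
            self.state = "tutoring.identify"

        def handle(self, xml_event):
            # Events arrive as XML messages on the communication layer.
            event = ET.fromstring(xml_event).get("name")
            self.state = self.transitions.get((self.state, event), self.state)

    hsm = HSM()
    hsm.handle('<event name="label_given" source="dialog"/>')
    print(hsm.state)  # tutoring.grasp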
Künstliche Intelligenz | 2017
Alexander Neumann; Christof Elbrechter; Nadine Pfeiffer-Leßmann; Risto Kõiva; Birte Carlmeyer; Stefan Rüther; Michael Schade; André Ückermann; Sven Wachsmuth; Helge Ritter
Cooking is a complex activity of daily living that requires intuition, coordination, multitasking, and time-critical planning abilities. We introduce KogniChef, a cognitive cooking assistive system that provides users with interactive, multi-modal, and intuitive assistance while preparing a meal. Our system augments common kitchen appliances with a wide variety of sensors and user interfaces, interconnected internally to infer the current state of the cooking process and to provide smart guidance. Our vision is to endow the system with the processing and reasoning skills needed to guide a cook through recipes, similar to the assistance an expert chef would be able to provide on-site.
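A minimal sketch of the guidance idea: infer the first incomplete recipe step from simple sensor predicates and emit the corresponding instruction. The sensors, predicates, and recipe steps are invented; KogniChef's actual inference is not reproduced here.

    # Each step pairs a completion predicate over sensor readings with the
    # instruction to present while the step is still open.
    RECIPE = [
        ("fill_pot", lambda s: s.get("pot_weight", 0) > 1.0,
         "Fill the pot with water."),
        ("heat_water", lambda s: s.get("hob_temp", 0) > 90,
         "Turn on the hob."),
        ("add_pasta", lambda s: s.get("pasta_added", False),
         "Add the pasta."),
    ]

    def next_instruction(sensors):
        # Guidance for the first step whose completion predicate fails.
        for name, done, instruction in RECIPE:
            if not done(sensors):
                return name, instruction
        return "done", "Enjoy your meal!"

    print(next_instruction({"pot_weight": 1.4, "hob_temp": 40}))
    # -> ('heat_water', 'Turn on the hob.')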