Publications


Featured research published by David Schiebener.


IEEE International Conference on Robotics and Automation (ICRA) | 2010

Autonomous acquisition of visual multi-view object representations for object recognition on a humanoid robot

Kai Welke; Jan Issac; David Schiebener; Tamim Asfour; Rüdiger Dillmann

The autonomous acquisition of object representations that allow recognition, localization and grasping of objects in the environment is a challenging task. In this paper, we present a system for the autonomous acquisition of visual object representations, which endows a humanoid robot with the ability to enrich its internal object representation and enables the realization of complex visual tasks. More precisely, we present techniques for the segmentation and modeling of objects held in the five-fingered robot hand. Multiple object views are generated by rotating the held object in the robot's field of view. The acquired object representations are evaluated in the context of visual search and object recognition tasks in cluttered environments. Experimental results show successful implementation of the complete cycle from object exploration to object recognition on a humanoid robot.
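The acquisition-and-recognition cycle described above can be pictured with a short sketch. The following Python snippet is not the authors' implementation; it is a minimal illustration, assuming a hypothetical extract_features descriptor function and toy random images, of how per-view descriptor sets can be accumulated into a multi-view model and later matched for recognition.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real descriptor extractor applied to the
    segmented object region; returns an (N, D) array of descriptors.
    Seeding from the image content makes the toy demo deterministic per image."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.random((50, 64))

def build_multiview_model(view_images):
    """Accumulate one descriptor set per view of the object rotated in the hand."""
    return [extract_features(img) for img in view_images]

def recognize(query_image, model, dist_threshold=0.25):
    """Match query descriptors against every stored view by nearest neighbour;
    return the index of the best-matching view and its match count."""
    query = extract_features(query_image)
    best_view, best_count = -1, 0
    for idx, descriptors in enumerate(model):
        dists = np.linalg.norm(query[:, None, :] - descriptors[None, :, :], axis=2)
        count = int((dists.min(axis=1) < dist_threshold).sum())
        if count > best_count:
            best_view, best_count = idx, count
    return best_view, best_count

# Toy usage: twelve "views" generated while rotating the object in the hand
views = [np.random.rand(64, 64) for _ in range(12)]
model = build_multiview_model(views)
print(recognize(views[3], model))   # reports view 3 with 50 matches
```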


IEEE-RAS International Conference on Humanoid Robots | 2011

Segmentation and learning of unknown objects through physical interaction

David Schiebener; Ales Ude; Jun Morimoto; Tamim Asfour; Rüdiger Dillmann

This paper reports on a new approach for segmentation and learning of new, unknown objects with a humanoid robot. No prior knowledge about the objects or the environment is needed. The only necessary assumptions are firstly, that the object has a (partly) smooth surface that contains some distinctive visual features and secondly, that the object moves as a rigid body. The robot uses both its visual and manipulative capabilities to segment and learn unknown objects in unknown environments. The segmentation algorithm is based on the robot pushing hypothetical objects, which provides a sufficient amount of information to distinguish the object from the background. In the case of a successful segmentation, additional features are associated with the object over several pushing-and-verification iterations. The accumulated features are used to learn the appearance of the object from multiple viewing directions. We show that the learned model, in combination with the proposed segmentation process, allows robust object recognition in cluttered scenes.
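As a rough illustration of this push-and-verify cycle, the toy simulation below (not the authors' code; the simulated physics stands in for real stereo observation and for the robot's actual push) accumulates exactly those scene features that move when the hypothetical object is pushed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene of 3D feature points: the first 30 belong to the (unknown) rigid
# object, the remaining 70 to the static background. The "robot" does not know this.
scene = np.vstack([rng.normal([0.5, 0.0, 0.1], 0.05, size=(30, 3)),
                   rng.uniform(-1.0, 1.0, size=(70, 3))])
is_object = np.arange(len(scene)) < 30      # ground truth used only by the simulator

def simulate_push(points, displacement):
    """Simulated physics: only the points of the rigid object actually move."""
    moved = points.copy()
    moved[is_object] += displacement
    return moved

def segment_by_pushing(points, n_pushes=3, motion_threshold=0.01):
    """Push-and-verify cycle: after every push, the features that moved are
    confirmed as belonging to the object and accumulated over iterations."""
    confirmed = np.zeros(len(points), dtype=bool)
    for _ in range(n_pushes):
        pushed = simulate_push(points, rng.normal(0.0, 0.05, size=3))
        moved = np.linalg.norm(pushed - points, axis=1) > motion_threshold
        if not moved.any():
            return None                     # nothing moved: hypothesis rejected
        confirmed |= moved
        points = pushed
    return confirmed

mask = segment_by_pushing(scene)
print("confirmed object features:", int(mask.sum()), "of", len(scene))
```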


Adaptive Behavior | 2013

Integrating visual perception and manipulation for autonomous learning of object representations

David Schiebener; Jun Morimoto; Tamim Asfour; Ales Ude

Humans can effortlessly perceive an object they encounter for the first time in a possibly cluttered scene and memorize its appearance for later recognition. Such performance is still difficult to achieve with artificial vision systems because it is not clear how to define the concept of objectness in its full generality. In this paper we propose a paradigm that integrates the robot’s manipulation and sensing capabilities to detect a new, previously unknown object and learn its visual appearance. By making use of the robot’s manipulation capabilities and force sensing, we introduce additional information that can be utilized to reliably separate unknown objects from the background. Once an object has been identified, the robot can continuously manipulate it to accumulate more information about it and learn its complete visual appearance. We demonstrate the feasibility of the proposed approach by applying it to the problem of autonomous learning of visual representations for viewpoint-independent object recognition on a humanoid robot.


IEEE International Conference on Robotics and Automation (ICRA) | 2014

Physical interaction for segmentation of unknown textured and non-textured rigid objects

David Schiebener; Ales Ude; Tamim Asfour

We present an approach for autonomous interactive object segmentation by a humanoid robot. The visual segmentation of unknown objects in a complex scene is an important prerequisite for, e.g., object learning or grasping, but extremely difficult to achieve through passive observation alone. Our approach uses the manipulative capabilities of humanoid robots to induce motion on the object and thus integrates the robot's manipulation and sensing capabilities to segment previously unknown objects. We show that this is possible without any human guidance or pre-programmed knowledge, and that the resulting motion allows for reliable and complete segmentation of new objects in an unknown and cluttered environment. We extend our previous work, which was restricted to textured objects, by devising new methods for the generation of object hypotheses and the estimation of their motion after being pushed by the robot. These methods are mainly based on the analysis of the motion of color-annotated 3D points obtained from stereo vision, and allow the segmentation of textured as well as non-textured rigid objects. In order to evaluate the quality of the obtained segmentations, they are used to train a simple object recognizer. The approach has been implemented and tested on the humanoid robot ARMAR-III, and the experimental results confirm its applicability to a wide variety of objects even in highly cluttered scenes.
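A central ingredient here is deciding which of the color-annotated 3D points moved consistently with a single rigid-body motion after the push. The sketch below is only an illustration, not the paper's algorithm: it estimates a rigid transform from sampled correspondences with the standard Kabsch/SVD method and keeps, RANSAC-style, the largest set of points consistent with one motion.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ P @ R.T + t (Kabsch/SVD).
    P, Q: (N, 3) arrays of corresponding 3D points before and after the push."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def segment_by_consistent_motion(P, Q, n_trials=100, inlier_dist=0.005, seed=0):
    """RANSAC-style search for the rigid motion supported by most correspondences;
    the supporting points are taken as the pushed object, the rest as background."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(n_trials):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = estimate_rigid_transform(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < inlier_dist
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Toy data: 40 object points undergo a common rotation and translation,
# 10 background points move incoherently.
rng = np.random.default_rng(1)
P = rng.uniform(-0.1, 0.1, size=(50, 3))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
Q = P @ R_true.T + np.array([0.05, 0.02, 0.0])
Q[40:] += rng.uniform(-0.05, 0.05, size=(10, 3))
mask = segment_by_consistent_motion(P, Q)
print("points assigned to the object:", int(mask.sum()), "(40 expected)")
```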


IEEE-RAS International Conference on Humanoid Robots | 2012

Discovery, segmentation and reactive grasping of unknown objects

David Schiebener; Julian Schill; Tamim Asfour

Learning the visual appearance and physical properties of unknown objects is an important capability for humanoid robots that are expected to work in open environments. We present an approach that enables a robot to discover new, unknown objects, segment them from the background, and grasp them. This gives the robot full control over the object and allows its further multimodal exploration.


IEEE International Conference on Robotics and Automation (ICRA) | 2013

Gaze selection during manipulation tasks

Kai Welke; David Schiebener; Tamim Asfour; Rüdiger Dillmann

A major strength of humanoid robotic platforms lies in their potential to perform a wide range of manipulation tasks in human-centered environments thanks to their anthropomorphic design. Furthermore, they offer active head-eye systems that make it possible to extend the observable workspace by employing active gaze control. In this work, we address the question of where to look during manipulation tasks while exploiting these two key capabilities of humanoid robots. We present a solution to the gaze selection problem that takes into account constraints derived from manipulation tasks. Three subproblems are addressed: the representation of the acquired visual input, the calculation of saliency based on this representation, and the selection of the most suitable gaze direction. As a representation of the visual input, a probabilistic environmental model is discussed, which makes it possible to take into account the dynamic nature of manipulation tasks. At the core of the gaze selection mechanism, a novel saliency measure is proposed that includes accuracy requirements from the manipulation task in the saliency calculation. Finally, an iterative procedure based on spherical graphs is developed to decide on the best gaze direction. The feasibility of the approach is experimentally evaluated in the context of bimanual manipulation tasks on the humanoid robot ARMAR-III.
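To make the idea of task-driven gaze selection concrete, here is a much simplified sketch built on assumptions of mine rather than the paper's formulation: candidate view directions are sampled on a sphere in place of the spherical graph, and each manipulation target carries a weight standing in for its accuracy requirement; the gaze maximizing the accumulated, centering-weighted coverage is selected.

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform candidate gaze directions on the unit sphere
    (a simple stand-in for the spherical graph used in the paper)."""
    k = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * k / n)
    theta = np.pi * (1.0 + 5 ** 0.5) * k
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def select_gaze(targets, weights, fov_deg=30.0, n_candidates=500):
    """Choose the gaze direction whose field of view covers the task targets with
    the highest accumulated weight; the weights mimic accuracy requirements, and
    the cosine term prefers directions that center the important targets."""
    directions = fibonacci_sphere(n_candidates)
    target_dirs = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    cos_angles = directions @ target_dirs.T              # (candidates, targets)
    in_fov = cos_angles > np.cos(np.deg2rad(fov_deg))
    saliency = (in_fov * cos_angles) @ weights
    best = int(np.argmax(saliency))
    return directions[best], float(saliency[best])

# Two manipulation targets in head coordinates; the right-hand target is more
# accuracy-critical and therefore weighted higher.
targets = np.array([[0.4, -0.20, 0.30],
                    [0.4,  0.25, 0.25]])
weights = np.array([2.0, 1.0])
gaze, score = select_gaze(targets, weights)
print("selected gaze direction:", np.round(gaze, 3), "saliency:", round(score, 3))
```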


IEEE International Conference on Robotics and Automation (ICRA) | 2012

Integrating surface-based hypotheses and manipulation for autonomous segmentation and learning of object representations

Ales Ude; David Schiebener; Norikazu Sugimoto; Jun Morimoto

Learning about new objects that a robot sees for the first time is a difficult problem because it is not clear how to define the concept of an object in general terms. In this paper we consider as objects those physical entities that are composed of features which move consistently when the robot acts upon them. Among the possible actions that a robot could apply to a hypothetical object, pushing seems to be the most suitable one due to its relative simplicity and general applicability. We propose a methodology to generate and apply pushing actions to hypothetical objects. A probing push causes visual features to move, which enables the robot to either confirm or reject the initial hypothesis about the existence of the object. Furthermore, the robot can discriminate the object from the background and accumulate visual features that are useful for training state-of-the-art statistical classifiers such as bag-of-features models.
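The accumulated features of a confirmed object can then be turned into a fixed-length signature for a standard classifier. The snippet below is a generic bag-of-features sketch under simplifying assumptions of my own (a tiny k-means codebook over toy descriptors), not the classifier setup used in the paper.

```python
import numpy as np

def kmeans_codebook(descriptors, k=8, n_iters=20, seed=0):
    """Tiny k-means to build a visual codebook from the accumulated descriptors."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bag_of_features(descriptors, codebook):
    """Normalized histogram of codeword occurrences: the fixed-length object
    signature that can be fed to a statistical classifier."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy usage: descriptors accumulated from the segmented object over several pushes
rng = np.random.default_rng(1)
object_descriptors = rng.random((200, 32))
codebook = kmeans_codebook(object_descriptors)
print(np.round(bag_of_features(object_descriptors, codebook), 3))
```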


International Conference on Advanced Robotics (ICAR) | 2015

Transferring object grasping knowledge and skill across different robotic platforms

Ali Paikan; David Schiebener; Mirko Wächter; Tamim Asfour; Giorgio Metta; Lorenzo Natale

This study describes the transfer of object grasping skills between two humanoid robots built on different software frameworks. We realize such a knowledge and skill transfer between the humanoid robots iCub and ARMAR-III, which have different kinematics and are programmed using different middlewares, YARP and ArmarX. We developed a bridge system that allows grasping skills of ARMAR-III to be executed on iCub. As the embodiments differ, grasps known to be feasible for one robot are not always feasible for the other. We propose a reactive correction behavior that detects the failure of a grasp during execution and corrects it until it succeeds, thereby adapting the known grasp definition to the new embodiment.
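The reactive correction behavior can be pictured as a simple retry loop. The toy below is only a sketch under strong simplifications of mine (the simulated execution conveniently reports the residual position error, which a real system would have to infer from tactile or visual feedback); it illustrates how a transferred grasp can be nudged until it succeeds and then kept for the new embodiment.

```python
import numpy as np

def execute_grasp(grasp_pos, true_object_pos, tolerance=0.01):
    """Simulated execution: the grasp succeeds only if the hand closes near the
    object; the returned error stands in for failure detection plus estimation."""
    error = true_object_pos - grasp_pos
    return bool(np.linalg.norm(error) < tolerance), error

def reactive_grasp(transferred_grasp, true_object_pos, max_attempts=10, step=0.5):
    """Correct a transferred grasp after every failed attempt; the final
    successful pose becomes the grasp definition adapted to the new robot."""
    grasp = np.array(transferred_grasp, dtype=float)
    for attempt in range(1, max_attempts + 1):
        success, error = execute_grasp(grasp, true_object_pos)
        if success:
            return grasp, attempt
        grasp += step * error           # move the grasp toward the detected error
    return None, max_attempts

# A grasp that was feasible on the source robot but is slightly off on the target robot
adapted, attempts = reactive_grasp([0.42, 0.05, 0.11], np.array([0.40, 0.02, 0.10]))
print("adapted grasp:", np.round(adapted, 3), "after", attempts, "attempts")
```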


IEEE-RAS International Conference on Humanoid Robots | 2016

Workspace analysis for planning human-robot interaction tasks

Nikolaus Vahrenkamp; Harry Arnst; Mirko Wächter; David Schiebener; Panagiotis Sotiropoulos; Michal Kowalik; Tamim Asfour

We present an approach for determining suitable locations for human-robot interaction tasks. To this end, we introduce the task-specific Interaction Workspace as a representation of the workspace that can be accessed by both agents, i.e. the robot and the human. We show how the Interaction Workspace can be efficiently determined for a specific situation by making use of precomputed workspace representations of the robot and the human. By considering several quality measures related to dexterity and comfort, the Interaction Workspace provides valuable information about potential targets for human-robot interaction (e.g. for object handover tasks). We evaluate the online performance of building the appropriate data structures and show how the approach can be applied in a realistic handover use case with the humanoid robot ARMAR-III.
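A minimal sketch of how such an Interaction Workspace could be assembled from precomputed data, under assumptions of my own (toy distance-based reachability maps on a shared voxel grid; the real maps come from the agents' kinematic models, and other quality measures are possible):

```python
import numpy as np

def toy_reachability(shape, center, radius):
    """Toy reachability map: quality in [0, 1] that falls off with distance from a
    'shoulder' position; real maps would be precomputed from the kinematic model."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
    dist = np.linalg.norm(grid - np.asarray(center), axis=-1)
    return np.clip(1.0 - dist / radius, 0.0, 1.0)

def interaction_workspace(robot_map, human_map):
    """Voxel-wise combination on a shared grid: a voxel belongs to the Interaction
    Workspace if both agents can reach it, and its quality is the weaker of the
    two reachability values (one possible quality measure)."""
    return np.minimum(robot_map, human_map)

shape = (20, 20, 20)
robot_map = toy_reachability(shape, center=(5, 10, 10), radius=12)
human_map = toy_reachability(shape, center=(15, 10, 10), radius=12)
quality = interaction_workspace(robot_map, human_map)
best_voxel = np.unravel_index(np.argmax(quality), shape)
print("best handover voxel:", best_voxel, "quality:", round(float(quality[best_voxel]), 2))
```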


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2016

Heuristic 3D object shape completion based on symmetry and scene context

David Schiebener; Andreas Schmidt; Nikolaus Vahrenkamp; Tamim Asfour

Object shape information is essential for robot manipulation tasks, in particular for grasp planning and collision-free motion planning. In general, however, a complete object model is not available, especially when dealing with unknown objects. We propose a method for completing shapes that are only partially known, which is a common situation when a robot perceives a new object from only one direction. Our approach is based on the assumption that most objects used in service robotics setups have symmetries. We determine and rate symmetry plane candidates to estimate the hidden parts of the object. By finding possible supporting planes in the object's immediate neighborhood, we restrict the search space for symmetry planes and add the bottom part of the object. Gaps along the sides in the direction of the view axis are closed by linear interpolation. We evaluate our approach in real-world experiments using the YCB object and model set [1].
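To illustrate the flavor of such a completion heuristic, here is a toy sketch built on assumptions of my own rather than the paper's exact criteria: the partial cloud of a cylinder seen from one side is mirrored across candidate planes perpendicular to the view axis, and each candidate is rated by requiring the hallucinated points to stay hidden behind the observed surface while keeping the completed shape compact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Partial observation: the camera looks along -x, so only the front half
# (x >= 0) of a cylinder (radius 0.05, height 0.1) is visible.
theta = rng.uniform(-np.pi / 2, np.pi / 2, 400)
z = rng.uniform(0.0, 0.1, 400)
observed = np.stack([0.05 * np.cos(theta), 0.05 * np.sin(theta), z], axis=1)

def mirror_x(points, d):
    """Mirror points across the candidate symmetry plane x = d."""
    mirrored = points.copy()
    mirrored[:, 0] = 2.0 * d - mirrored[:, 0]
    return mirrored

def plane_score(observed, d, lateral_radius=0.004, compactness_weight=5.0):
    """Heuristic rating: mirrored points must be hidden behind the observed
    surface (not in visible free space), and the completed shape should stay
    close to the observation (compactness)."""
    mirrored = mirror_x(observed, d)
    occluded = np.empty(len(mirrored), dtype=bool)
    for i, p in enumerate(mirrored):
        near = np.linalg.norm(observed[:, 1:] - p[1:], axis=1) < lateral_radius
        occluded[i] = near.any() and observed[near, 0].max() >= p[0] - 1e-6
    nearest = np.array([np.linalg.norm(observed - p, axis=1).min() for p in mirrored])
    return occluded.mean() - compactness_weight * nearest.mean()

# Evaluate symmetry plane candidates perpendicular to the view axis.
candidates = np.linspace(-0.05, 0.05, 21)
best = candidates[int(np.argmax([plane_score(observed, d) for d in candidates]))]
completed = np.vstack([observed, mirror_x(observed, best)])
print("selected symmetry plane: x =", round(float(best), 3),
      "| completed cloud:", len(completed), "points")
```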

Collaboration


Dive into David Schiebener's collaborations.

Top Co-Authors

Tamim Asfour, Karlsruhe Institute of Technology
Nikolaus Vahrenkamp, Karlsruhe Institute of Technology
Ales Ude, Karlsruhe Institute of Technology
Kai Welke, Karlsruhe Institute of Technology
Rüdiger Dillmann, Center for Information Technology
Julian Schill, Karlsruhe Institute of Technology
Jun Morimoto, Nara Institute of Science and Technology
Markus Przybylski, Karlsruhe Institute of Technology
Mirko Wächter, Karlsruhe Institute of Technology
Andreas Schmidt, Karlsruhe University of Applied Sciences