Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Droeschel is active.

Publication


Featured research published by David Droeschel.


Intelligent Robots and Systems | 2009

Robust 3D-mapping with time-of-flight cameras

Stefan May; David Droeschel; Stefan Fuchs; Dirk Holz; Andreas Nüchter

Time-of-flight cameras constitute a smart and fast technology for 3D perception but lack measurement precision and robustness. The authors present a comprehensive approach for 3D environment mapping based on this technology. Imprecision in the depth measurements is handled by calibration and the application of several filters. Robust registration is performed by a novel extension of the Iterative Closest Point algorithm. Remaining registration errors are reduced by global relaxation after loop closure and by surface smoothing. A laboratory ground-truth evaluation is provided, as well as 3D mapping experiments in a larger indoor environment.
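
To make the registration step concrete, here is a minimal point-to-point ICP sketch in Python (NumPy/SciPy). It shows only the classic baseline that the paper extends; the paper's novel ICP extension, the calibration filters, and the loop-closure relaxation are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Align source (Nx3) onto target (Mx3); returns rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        # 1. Correspondences: nearest target point for each source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid transform via SVD (Kabsch/Horn).
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```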


Human-Robot Interaction | 2011

Learning to interpret pointing gestures with a time-of-flight camera

David Droeschel; Jörg Stückler; Sven Behnke

Pointing gestures are a common and intuitive way to draw somebody's attention to a certain object. While humans can easily interpret robot gestures, the perception of human behavior using robot sensors is more difficult. In this work, we propose a method for perceiving pointing gestures using a Time-of-Flight (ToF) camera. To determine the intended pointing target, the line between a person's eyes and hand is frequently assumed to be the pointing direction. However, since people tend to keep the line of sight free while they are pointing, this simple approximation is inadequate. Moreover, depending on the distance and angle to the pointing target, the line between shoulder and hand or between elbow and hand may yield better interpretations of the pointing direction. In order to achieve a better estimate, we extract a set of body features from the depth and amplitude images of a ToF camera and train a model of pointing directions using Gaussian Process Regression. We evaluate the accuracy of the estimated pointing direction in a quantitative study. The results show that our learned model achieves far better accuracy than simple criteria like the head-hand, shoulder-hand, or elbow-hand line.
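
A minimal sketch of the learning step described above, using scikit-learn's Gaussian Process Regression. The feature layout (four joints with three coordinates each) and the random placeholder data are assumptions for illustration; the paper's actual body features and annotated training set are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X: body features per frame (e.g. head, shoulder, elbow, hand positions
# flattened into one vector); y: ground-truth pointing direction as
# (azimuth, elevation) in radians. Random placeholders stand in for data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))   # 4 joints x 3 coordinates (assumed)
y_train = rng.normal(size=(200, 2))    # (azimuth, elevation) labels

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

x_query = rng.normal(size=(1, 12))
direction, sigma = gpr.predict(x_query, return_std=True)
print(direction, sigma)  # predicted pointing angles with uncertainty
```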


International Conference on Robotics and Automation | 2013

Mobile bin picking with an anthropomorphic service robot

Matthias Nieuwenhuisen; David Droeschel; Dirk Holz; Jörg Stückler; Alexander Berner; Jun Li; Reinhard Klein; Sven Behnke

Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.
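
As a rough, hypothetical illustration of grasp selection on a height map (a strong simplification of the paper's shape-primitive detection and local multiresolution height map), the following sketch rasterizes box contents into a grid of maximum heights and picks the highest cell as a top-grasp candidate. Grid size, resolution, and the box-frame convention are assumptions.

```python
import numpy as np

def height_map(points, res=0.01, size=0.4):
    """Max-height grid over a size x size box footprint; points (Nx3) are
    assumed to be in the box frame with x, y in [0, size)."""
    n = int(size / res)
    hm = np.full((n, n), -np.inf)
    ij = np.clip((points[:, :2] / res).astype(int), 0, n - 1)
    for (i, j), z in zip(ij, points[:, 2]):
        hm[i, j] = max(hm[i, j], z)
    return hm

def top_grasp(hm, res=0.01):
    """Pick the highest occupied cell as a naive top-grasp candidate."""
    i, j = np.unravel_index(np.argmax(hm), hm.shape)
    return (i + 0.5) * res, (j + 0.5) * res, hm[i, j]

pts = np.random.default_rng(1).uniform([0, 0, 0], [0.4, 0.4, 0.2], (500, 3))
print(top_grasp(height_map(pts)))  # (x, y, z) of the tallest pile point
```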


International Conference on Robotics and Automation | 2014

Local Multi-Resolution Representation for 6D Motion Estimation and Mapping with a Continuously Rotating 3D Laser Scanner

David Droeschel; Jörg Stückler; Sven Behnke

Micro aerial vehicles (MAVs) pose a challenge in designing sensory systems and algorithms due to their size and weight constraints and limited computing power. We present an efficient 3D multi-resolution map that we use to aggregate measurements from a lightweight continuously rotating laser scanner. We estimate the robot's motion by means of visual odometry and scan registration, aligning consecutive 3D scans with an incrementally built map. By using local multi-resolution, we gain computational efficiency by having a high resolution in the near vicinity of the robot and a lower resolution with increasing distance from the robot, which correlates with the sensor's characteristics in relative distance accuracy and measurement density. Compared to uniform grids, local multi-resolution leads to the use of fewer grid cells without losing information and consequently results in lower computational costs. We efficiently and accurately register new 3D scans with the map in order to estimate the motion of the MAV and update the map in-flight. In experiments, we demonstrate superior accuracy and efficiency of our registration approach compared to state-of-the-art methods such as GICP. Our approach builds an accurate 3D obstacle map and estimates the vehicle's trajectory in real time.
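
The core idea of local multi-resolution can be sketched in a few lines: cell size doubles with each ring of distance from the robot, mirroring the sensor's decreasing accuracy and measurement density at range. The concrete parameters below are illustrative, not the paper's.

```python
import math

def cell_size(point, base=0.25, inner=1.0, levels=6):
    """Edge length of the grid cell containing a point (x, y, z) relative
    to the robot: `base` within `inner` meters, doubling per octave."""
    d = math.sqrt(point[0]**2 + point[1]**2 + point[2]**2)
    level = min(levels - 1, max(0, int(math.log2(max(d, inner) / inner))))
    return base * (2 ** level)

print(cell_size((0.5, 0.0, 0.0)))  # 0.25 m near the robot
print(cell_size((8.0, 0.0, 0.0)))  # 2.0 m far away -> fewer, coarser cells
```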


Intelligent Robots and Systems | 2010

Multi-frequency Phase Unwrapping for Time-of-Flight cameras

David Droeschel; Dirk Holz; Sven Behnke

Time-of-Flight (ToF) cameras gain depth information by emitting amplitude-modulated near-infrared light and measuring the phase shift between the emitted and the reflected signal. The phase shift is proportional to the object's distance modulo the wavelength of the modulation frequency. This results in a distance ambiguity: distances larger than the wavelength are wrapped into the sensor's non-ambiguity range and cause spurious distance measurements. We apply phase unwrapping to reconstruct these wrapped measurements. Our approach is based on a probabilistic graphical model, and we use loopy belief propagation to detect and infer the position of wrapped measurements. Besides depth discontinuities, our method utilizes multiple modulation frequencies to identify wrapped measurements. In experiments, we show that wrapped measurements are identified and corrected, even in situations where the scene shows steep slopes in the depth measurements.
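
A small worked sketch of the multi-frequency idea: each modulation frequency measures the true distance modulo its non-ambiguity range c/(2f), so searching for the wrap counts on which both frequencies agree recovers distances beyond a single frequency's range. The paper's graphical model and loopy belief propagation are not reproduced here; the frequencies below are examples.

```python
C = 299_792_458.0                      # speed of light, m/s
f1, f2 = 30e6, 20e6                    # example modulation frequencies
r1, r2 = C / (2 * f1), C / (2 * f2)    # non-ambiguity ranges: ~5.0 m and ~7.5 m

def unwrap(d1, d2, max_wraps=4):
    """Given wrapped distances d1, d2 (one per frequency), return the
    unwrapped distance on which the two frequencies agree best."""
    best, best_err = None, float("inf")
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            c1, c2 = d1 + n1 * r1, d2 + n2 * r2
            if abs(c1 - c2) < best_err:
                best, best_err = (c1 + c2) / 2, abs(c1 - c2)
    return best

true_d = 11.2                            # beyond both non-ambiguity ranges
print(unwrap(true_d % r1, true_d % r2))  # ~11.2, although both readings wrap
```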


European Conference on Mobile Robots | 2013

Multimodal obstacle detection and collision avoidance for micro aerial vehicles

Matthias Nieuwenhuisen; David Droeschel; Johannes Schneider; Dirk Holz; Thomas Läbe; Sven Behnke

Reliably perceiving obstacles and avoiding collisions is key for the fully autonomous application of micro aerial vehicles (MAVs). Limiting factors for increasing the autonomy and complexity of MAVs are their limited onboard sensing and processing power. In this paper, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception. We developed a lightweight 3D laser scanner and visual obstacle detection using wide-angle stereo cameras. Detected obstacles are aggregated in egocentric grid maps. We implemented a fast reactive collision avoidance approach for safe operation in the vicinity of structures like buildings or vegetation.
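
To illustrate what a fast reactive layer can look like, here is a simple artificial-potential-field step in the spirit of the abstract (not the authors' exact method): obstacles within an influence radius push the commanded velocity away from them.

```python
import numpy as np

def avoid(v_cmd, obstacles, radius=2.0, gain=1.0):
    """v_cmd: desired 3D velocity; obstacles: Nx3 egocentric positions.
    Returns the velocity adjusted by repulsion from nearby obstacles."""
    v = np.asarray(v_cmd, float).copy()
    for p in np.atleast_2d(obstacles):
        d = np.linalg.norm(p)
        if 1e-6 < d < radius:
            # Repulsion grows as the obstacle gets closer; its direction is
            # away from the obstacle (egocentric frame, robot at origin).
            v -= gain * (1.0 / d - 1.0 / radius) * (p / d)
    return v

print(avoid([1.0, 0.0, 0.0], [[1.0, 0.2, 0.0]]))  # steered away from obstacle
```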


Journal of Field Robotics | 2016

Multilayered Mapping and Navigation for Autonomous Micro Aerial Vehicles

David Droeschel; Matthias Nieuwenhuisen; Marius Beul; Dirk Holz; Jörg Stückler; Sven Behnke

Micro aerial vehicles, such as multirotors, are particularly well suited for the autonomous monitoring, inspection, and surveillance of buildings, e.g., for maintenance or disaster management. Key prerequisites for the fully autonomous operation of micro aerial vehicles are real-time obstacle detection and the planning of collision-free trajectories. In this article, we propose a complete system with a multimodal sensor setup for omnidirectional obstacle perception, consisting of a three-dimensional (3D) laser scanner, two stereo camera pairs, and ultrasonic distance sensors. Detected obstacles are aggregated in egocentric local multiresolution grid maps. Local maps are efficiently merged in order to simultaneously build global maps of the environment and localize in them. For autonomous navigation, we generate trajectories in a multilayered approach: from mission planning through global and local trajectory planning to reactive obstacle avoidance. We evaluate our approach and the involved components in simulation and on the real autonomous micro aerial vehicle. Finally, we present the results of a complete mission for autonomously mapping a building and its surroundings.
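
A toy sketch of the multilayered structure: slower, more deliberative layers feed goals to faster, more reactive ones. The update rates, interfaces, and trivial stub planners are assumptions for illustration, not the paper's implementation.

```python
class LayeredNavigator:
    def __init__(self, waypoints):
        self.waypoints = list(waypoints)  # mission layer: coarse goals
        self.global_path = []             # global plan toward next waypoint
        self.local_traj = []              # short-horizon local trajectory

    def tick(self, step, pose):
        if step % 100 == 0 and self.waypoints:    # slowest: global planning
            self.global_path = [pose, self.waypoints[0]]  # stub: straight line
        if step % 10 == 0 and self.global_path:   # medium: local trajectory
            self.local_traj = self.global_path[:2]        # stub: first segment
        # fastest: reactive layer runs every step (stub: follow the trajectory)
        return self.local_traj[-1] if self.local_traj else pose

nav = LayeredNavigator(waypoints=[(10.0, 0.0, 2.0)])
print(nav.tick(0, pose=(0.0, 0.0, 2.0)))  # goal handed down through the layers
```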


International Conference on Robotics and Automation | 2011

Towards joint attention for a domestic service robot - person awareness and gesture recognition using Time-of-Flight cameras

David Droeschel; Jörg Stückler; Dirk Holz; Sven Behnke

Joint attention between a human user and a robot is essential for effective human-robot interaction. In this work, we propose an approach to person awareness and to the perception of showing and pointing gestures for a domestic service robot. In contrast to previous work, we do not require the person to be at a predefined position, but instead actively approach and orient towards the communication partner. For perceiving showing and pointing gestures and for estimating the pointing direction, a Time-of-Flight camera is used. Estimated pointing directions and shown objects are matched to objects in the robot's environment. Both the perception of showing and pointing gestures and the accuracy of the estimated pointing directions have been evaluated in a set of experiments. The results show that both gestures are adequately perceived by the robot. Furthermore, our system achieves a higher accuracy in estimating the pointing direction than is reported in the literature for a stereo-based system. In addition, the overall system has been successfully tested in two international RoboCup@Home competitions and the 2010 ICRA Mobile Manipulation Challenge.
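
One step the abstract mentions, matching an estimated pointing direction to objects in the environment, can be sketched as a nearest-ray test: the object with the smallest perpendicular distance to the pointing ray (and in front of the hand) is selected. The object layout and distance threshold below are illustrative assumptions.

```python
import numpy as np

def match_object(origin, direction, objects, max_dist=0.3):
    """Return the index of the object closest to the pointing ray, or None."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    best, best_err = None, max_dist
    for i, obj in enumerate(np.atleast_2d(objects)):
        v = obj - np.asarray(origin, float)
        t = v @ d
        if t <= 0:                       # object behind the pointing hand
            continue
        err = np.linalg.norm(v - t * d)  # perpendicular distance to the ray
        if err < best_err:
            best, best_err = i, err
    return best

objs = [(1.0, 0.0, 0.8), (1.0, 1.0, 0.8)]
print(match_object((0.0, 0.0, 1.0), (1.0, 0.05, -0.2), objs))  # -> 0
```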


International Conference on Robotics and Automation | 2010

Using Time-of-Flight cameras with active gaze control for 3D collision avoidance

David Droeschel; Dirk Holz; Jörg Stückler; Sven Behnke

We propose a 3D obstacle avoidance method for mobile robots. Besides the robot's 2D laser range finder, a Time-of-Flight camera is used to perceive obstacles that are not in the scan plane of the laser range finder. Existing approaches that employ Time-of-Flight cameras suffer from the limited field of view of the sensor. To overcome this issue, we mount the camera on the head of our anthropomorphic robot Dynamaid, which allows the gaze direction to be changed through the robot's pan-tilt neck and its torso yaw joint.
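
The gaze-control geometry reduces to two angles. Assuming a head frame with x forward, y left, and z up (a convention chosen here for illustration, not taken from the paper), the pan and tilt needed to center a 3D point of interest are:

```python
import math

def gaze_angles(target):
    """Pan (yaw) and tilt (pitch), in radians, toward target (x, y, z)
    expressed in the head frame: x forward, y left, z up (assumed)."""
    x, y, z = target
    pan = math.atan2(y, x)                  # rotate left/right
    tilt = math.atan2(z, math.hypot(x, y))  # rotate up/down
    return pan, tilt

print(gaze_angles((2.0, 1.0, -0.5)))  # look slightly left and down
```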


Robot Soccer World Cup | 2011

Towards semantic scene analysis with time-of-flight cameras

Dirk Holz; Ruwen Schnabel; David Droeschel; Jörg Stückler; Sven Behnke

For planning grasps and other object manipulation actions in complex environments, 3D semantic information becomes crucial. This paper focuses on the application of recent 3D Time-of-Flight (ToF) cameras in the context of semantic scene analysis. To acquire semantic information from ToF camera data, we (a) pre-process the data, including outlier removal, filtering, and phase unwrapping to correct erroneous distance measurements, and (b) apply a randomized algorithm for detecting shapes such as planes, spheres, and cylinders. We present experimental results showing that the robustness against noise and outliers of the underlying RANSAC paradigm allows for segmenting and classifying objects in 3D ToF camera data captured in natural mobile manipulation setups.
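
A minimal RANSAC plane-detection sketch in the spirit of the randomized shape detection the abstract refers to (the full method also handles spheres and cylinders; only the plane case is shown, with illustrative parameters):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, rng=np.random.default_rng(0)):
    """Fit a plane n.p + d = 0 to Nx3 points; returns (n, d) and inlier mask."""
    best_inliers = np.zeros(len(points), bool)
    best_model = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

rng = np.random.default_rng(1)
pts = np.vstack([np.c_[rng.random((300, 2)), np.zeros(300)],  # points on z = 0
                 rng.random((60, 3)) * 2.0])                  # clutter/outliers
(n, d), inliers = ransac_plane(pts)
print(n, d, inliers.sum())  # normal ~ (0, 0, +-1), roughly 300 inliers
```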

Collaboration


Dive into David Droeschel's collaborations.
