Felix Endres
University of Freiburg
Publications
Featured research published by Felix Endres.
Intelligent Robots and Systems | 2012
Jürgen Sturm; Nikolas Engelhard; Felix Endres; Wolfram Burgard; Daniel Cremers
In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.
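The drift and global pose-error metrics mentioned above boil down to comparing an estimated trajectory against the motion-capture ground truth after a rigid alignment. A minimal sketch of such an absolute-trajectory-error style metric is shown below; it assumes the two trajectories are already time-associated Nx3 position arrays and is only an illustration, not the benchmark's official evaluation tool.

```python
import numpy as np

def align_rigid(est, gt):
    """Rotation R and translation t minimizing ||R @ est_i + t - gt_i|| (Horn/Kabsch, no scale)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, mu_g - R @ mu_e

def ate_rmse(est, gt):
    """Root-mean-square translational error after rigid alignment."""
    R, t = align_rigid(est, gt)
    err = (R @ est.T).T + t - gt
    return np.sqrt((err ** 2).sum(axis=1).mean())

# Usage with synthetic trajectories (placeholders for estimated and ground-truth positions)
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(scale=0.1, size=(200, 3)), axis=0)
est = gt + rng.normal(scale=0.02, size=gt.shape)
print(ate_rmse(est, gt))
```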
International Conference on Robotics and Automation | 2012
Felix Endres; Jürgen Hess; Nikolas Engelhard; Jürgen Sturm; Daniel Cremers; Wolfram Burgard
We present an approach to simultaneous localization and mapping (SLAM) for RGB-D cameras like the Microsoft Kinect. Our system concurrently estimates the trajectory of a hand-held Kinect and generates a dense 3D model of the environment. We present the key features of our approach and evaluate its performance thoroughly on a recently published dataset, including a large set of sequences of different scenes with varying camera speeds and illumination conditions. In particular, we evaluate the accuracy, robustness, and processing time for three different feature descriptors (SIFT, SURF, and ORB). The experiments demonstrate that our system can robustly deal with difficult data in common indoor scenarios while being fast enough for online operation. Our system is fully available as open-source.
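As an illustration of the visual front end such a system relies on, the sketch below matches ORB features between two RGB-D frames and estimates the relative camera pose with RANSAC using OpenCV. It assumes depth images in meters, the intrinsics are typical Kinect values rather than calibrated ones, and it is a simplified stand-in, not the system's actual pipeline.

```python
import cv2
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5            # typical Kinect intrinsics (assumed)
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

def relative_pose(gray_a, depth_a, gray_b):
    """Estimate the pose of frame B relative to frame A from ORB matches and the depth of A."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)

    obj_pts, img_pts = [], []
    for m in matches:
        u, v = map(int, kp_a[m.queryIdx].pt)
        z = float(depth_a[v, u])                        # depth in meters
        if z <= 0.0:                                    # skip invalid depth readings
            continue
        obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        img_pts.append(kp_b[m.trainIdx].pt)
    if len(obj_pts) < 4:                                # PnP needs at least four correspondences
        return None

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.array(obj_pts), np.array(img_pts), K, None, reprojectionError=3.0)
    return (cv2.Rodrigues(rvec)[0], tvec, inliers) if ok else None
```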
IEEE Transactions on Robotics | 2014
Felix Endres; Jürgen Hess; Jürgen Sturm; Daniel Cremers; Wolfram Burgard
In this paper, we present a novel mapping system that robustly generates highly accurate 3-D maps using an RGB-D camera. Our approach requires no further sensors or odometry. With the availability of low-cost and light-weight RGB-D sensors such as the Microsoft Kinect, our approach applies to small domestic robots such as vacuum cleaners, as well as flying robots such as quadrocopters. Furthermore, our system can also be used for free-hand reconstruction of detailed 3-D models. In addition to the system itself, we present a thorough experimental evaluation on a publicly available benchmark dataset. We analyze and discuss the influence of several parameters such as the choice of the feature descriptor, the number of visual features, and validation methods. The results of the experiments demonstrate that our system can robustly deal with challenging scenarios such as fast camera motions and feature-poor environments while being fast enough for online operation. Our system is fully available as open source and has already been widely adopted by the robotics community.
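The back end of such mapping systems typically optimizes a graph of camera poses connected by visual odometry and loop-closure constraints. The toy example below illustrates the idea on SE(2) with scipy rather than a dedicated graph-optimization library; the edge measurements are made up for the example and do not come from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def relative(pa, pb):
    """Pose of b expressed in the frame of a (x, y, theta on SE(2))."""
    c, s = np.cos(pa[2]), np.sin(pa[2])
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    dth = np.arctan2(np.sin(pb[2] - pa[2]), np.cos(pb[2] - pa[2]))
    return np.array([c * dx + s * dy, -s * dx + c * dy, dth])

def residuals(flat, edges):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                                    # anchor the first pose at the origin
    for i, j, meas in edges:
        res.append(relative(poses[i], poses[j]) - meas)
    return np.concatenate(res)

# Made-up constraints: three odometry edges plus one inconsistent loop closure (simulated drift)
edges = [(0, 1, np.array([1.0, 0.0, 0.0])),
         (1, 2, np.array([1.0, 0.0, np.pi / 2])),
         (2, 3, np.array([1.0, 0.0, np.pi / 2])),
         (3, 0, np.array([1.0, 0.0, np.pi / 2]))]
sol = least_squares(residuals, np.zeros(4 * 3), args=(edges,))
print(sol.x.reshape(-1, 3))                             # optimized poses
```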
Robotics: Science and Systems | 2009
Felix Endres; Christian Plagemann; Cyrill Stachniss; Wolfram Burgard
Truly versatile robots operating in the real world have to be able to learn about objects and their properties autonomously, that is, without being provided with carefully engineered training data. This paper presents an approach that allows a robot to discover object classes in three-dimensional range data in an unsupervised fashion and without a priori knowledge about the observed objects. Our approach builds on Latent Dirichlet Allocation (LDA), a recently proposed probabilistic method for discovering topics in text documents. We discuss feature extraction, hypothesis generation, and statistical modeling of objects in 3D range data as well as the novel application of LDA to this domain. Our approach has been implemented and evaluated on real data of complex objects. Practical experiments demonstrate that our approach is able to learn object class models autonomously that are consistent with the true classifications provided by a human. It furthermore outperforms unsupervised methods such as hierarchical clustering that operate on a distance metric.
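A rough sketch of the bag-of-features pipeline that LDA operates on is given below: local shape descriptors per object hypothesis are quantized into a visual vocabulary, each segment becomes a word-count histogram, and LDA assigns latent topics. The descriptor extraction itself is not shown, and features_per_segment is an illustrative placeholder rather than the paper's exact feature representation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

def discover_classes(features_per_segment, n_words=50, n_topics=5):
    # 1. Quantize all local descriptors into a visual vocabulary ("words").
    all_feats = np.vstack(features_per_segment)
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(all_feats)
    # 2. Represent each segment as a word-count histogram ("document").
    docs = np.zeros((len(features_per_segment), n_words), dtype=int)
    for i, feats in enumerate(features_per_segment):
        words, counts = np.unique(codebook.predict(feats), return_counts=True)
        docs[i, words] = counts
    # 3. LDA discovers latent topics; the dominant topic serves as the class label.
    lda = LatentDirichletAllocation(n_components=n_topics).fit(docs)
    return lda.transform(docs).argmax(axis=1)

# Usage with random placeholder descriptors for three segments
rng = np.random.default_rng(0)
segments = [rng.normal(size=(80, 16)) for _ in range(3)]
print(discover_classes(segments, n_words=10, n_topics=2))
```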
International Conference on Robotics and Automation | 2008
Christian Plagemann; Felix Endres; Juergen Michael Hess; Cyrill Stachniss; Wolfram Burgard
Mobile robots rely on the ability to sense the geometry of their local environment in order to avoid obstacles or to explore the surroundings. For this task, dedicated proximity sensors such as laser range finders or sonars are typically employed. Cameras are a cheap and lightweight alternative to such sensors, but do not directly offer proximity information. In this paper, we present a novel approach to learning the relationship between range measurements and visual features extracted from a single monocular camera image. As the learning engine, we apply Gaussian processes, a non-parametric learning technique that not only yields the most likely range prediction corresponding to a certain visual input but also the predictive uncertainty. This information, in turn, can be utilized in an extended grid-based mapping scheme to more accurately update the map. In practical experiments carried out in different environments with a mobile robot equipped with an omnidirectional camera system, we demonstrate that our system is able to produce proximity estimates with an accuracy comparable to that of dedicated sensors such as sonars or infrared range finders.
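A minimal sketch of the learning step is shown below, using scikit-learn's Gaussian process regressor in place of the paper's implementation: each training example pairs a visual feature vector for one viewing direction with a measured range, and the trained model returns both a range prediction and its uncertainty. The training data here is synthetic and only stands in for real features.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder training data: visual feature vectors and the ranges measured alongside them
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))                     # one feature vector per viewing direction
y_train = rng.uniform(0.3, 5.0, size=200)               # corresponding ranges in meters

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# The GP yields both the most likely range and its predictive uncertainty,
# which the grid-mapping scheme can use to weight the measurement.
mean_range, std_range = gp.predict(rng.normal(size=(5, 8)), return_std=True)
print(mean_range, std_range)
```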
International Conference on Robotics and Automation | 2014
Seigo Ito; Felix Endres; Markus Kuderer; Gian Diego Tipaldi; Cyrill Stachniss; Wolfram Burgard
Localization approaches typically rely on an already available map to identify the position of the sensor in the environment. Such maps are usually built beforehand and often require the user to record data from the same sensor used for localization. In this paper, we relax this assumption and present a localization approach based on architectural floor plans. In general, floor plans are readily available for most man-made buildings but only represent basic architectural structures. The incomplete knowledge leads to ambiguous pose estimates. To solve this problem, we present W-RGB-D, a new method for indoor global localization based on WiFi and an RGB-D camera. We introduce a sensor model for RGB-D cameras that is suitable to be used with abstract floor plans. To resolve ambiguities during global localization, we estimate a coarse initial distribution about the sensor position using the WiFi signal strength. We evaluate our W-RGB-D localization method in indoor environments and compare its performance with RGB-D-based Monte Carlo localization. Our results demonstrate that the use of WiFi information as proposed with our approach improves the localization in terms of convergence speed and quality of the solution.
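One way to picture the WiFi-informed initialization is sketched below: instead of spreading particles uniformly over the floor plan, they are sampled from a coarse distribution derived from the observed signal strengths. The fingerprint database and the Gaussian likelihood used here are illustrative placeholders, not the sensor model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fingerprint database (placeholder): known (x, y) locations and their mean RSSI per access point
fingerprint_xy = np.array([[1.0, 2.0], [5.0, 2.0], [5.0, 8.0], [1.0, 8.0]])
fingerprint_rssi = np.array([[-40, -70, -80], [-70, -45, -75],
                             [-80, -70, -42], [-70, -80, -60]], dtype=float)

def wifi_weights(observed_rssi, sigma=6.0):
    """Likelihood of the observed signal strengths at each fingerprint location (Gaussian per AP)."""
    d2 = ((fingerprint_rssi - observed_rssi) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return w / w.sum()

def init_particles(observed_rssi, n=500, pos_noise=1.0):
    """Sample particle positions near likely fingerprint locations; orientation stays uninformed."""
    w = wifi_weights(observed_rssi)
    idx = rng.choice(len(fingerprint_xy), size=n, p=w)
    xy = fingerprint_xy[idx] + rng.normal(scale=pos_noise, size=(n, 2))
    theta = rng.uniform(-np.pi, np.pi, size=n)
    return np.column_stack([xy, theta])

particles = init_particles(observed_rssi=np.array([-45.0, -68.0, -78.0]))
```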
Robotics and Autonomous Systems | 2010
Christian Plagemann; Cyrill Stachniss; Jürgen Hess; Felix Endres; Nathan Franklin
We present a novel approach to estimating depth from single omnidirectional camera images by learning the relationship between visual features and range measurements available during a training phase. Our model not only yields the most likely distance to obstacles in all directions, but also the predictive uncertainties for these estimates. This information can be utilized by a mobile robot to build an occupancy grid map of the environment or to avoid obstacles during exploration, tasks that typically require dedicated proximity sensors such as laser range finders or sonars. We show in this paper how an omnidirectional camera can be used as an alternative to such range sensors. As the learning engine, we apply Gaussian processes, a nonparametric approach to function regression, as well as a recently developed extension for dealing with input-dependent noise. In practical experiments carried out in different indoor environments with a mobile robot equipped with an omnidirectional camera system, we demonstrate that our system is able to estimate range with an accuracy comparable to that of dedicated sensors based on sonar or infrared light.
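To illustrate how the predictive uncertainty can enter the mapping step, the sketch below applies a simple log-odds update along one beam of a grid map and scales the update down when the predicted range is uncertain. The inverse sensor model is deliberately simplified and not the one used in the paper.

```python
import numpy as np

def update_beam(logodds, origin, angle, pred_range, pred_std,
                resolution=0.1, l_occ=0.85, l_free=-0.4):
    """Update grid cells along one beam; less certain range predictions change the map less."""
    confidence = 1.0 / (1.0 + pred_std)                 # shrink updates for uncertain ranges
    n_steps = int(pred_range / resolution)
    for k in range(n_steps + 1):
        r = k * resolution
        cx = int((origin[0] + r * np.cos(angle)) / resolution)
        cy = int((origin[1] + r * np.sin(angle)) / resolution)
        if not (0 <= cx < logodds.shape[0] and 0 <= cy < logodds.shape[1]):
            break
        hit = abs(r - pred_range) < resolution          # cell near the predicted endpoint
        logodds[cx, cy] += confidence * (l_occ if hit else l_free)
    return logodds

grid = np.zeros((100, 100))
grid = update_beam(grid, origin=(5.0, 5.0), angle=0.3, pred_range=2.4, pred_std=0.2)
```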
Intelligent Robots and Systems | 2013
Felix Endres; Jeffrey C. Trinkle; Wolfram Burgard
Opening doors is a fundamental skill for mobile robots operating in human environments. In this paper we present an approach to learn a dynamic model of a door from sensor observations and utilize it for effectively swinging the door open to a desired angle. The learned model enables the realization of dynamic door-opening strategies and reduces the complexity of the door opening task. For example, the robot does not need to maintain a grasp of the handle, which would form a closed kinematic chain. Accordingly, it reduces the degrees of freedom required of the manipulator and facilitates motion planning. Additionally, execution is faster, because the robot merely needs to push the door long enough to achieve the right combination of position and speed such that the door stops at the desired state. Our approach applies Gaussian process regression to learn the deceleration of the door with respect to position and velocity of the door. This model of the dynamics can be easily learned from observing a human teacher or by interactive experimentation.
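The sketch below illustrates the idea with scikit-learn: a Gaussian process maps door angle and angular velocity to deceleration, and integrating the learned model forward predicts where a pushed door comes to rest. The training data is synthetic and merely stands in for recorded door trajectories.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.5, 300)                      # observed door angles [rad]
omega = rng.uniform(0.1, 2.0, 300)                      # observed angular velocities [rad/s]
decel = 0.3 + 0.5 * omega + 0.05 * rng.normal(size=300)  # placeholder friction-like deceleration

gp = GaussianProcessRegressor(kernel=1.0 * RBF([0.5, 0.5]) + WhiteKernel(0.01),
                              normalize_y=True).fit(np.column_stack([theta, omega]), decel)

def predict_rest_angle(theta0, omega0, dt=0.01):
    """Integrate the learned dynamics forward until the door stops."""
    th, om = theta0, omega0
    while om > 1e-3:
        a = max(gp.predict([[th, om]])[0], 1e-2)        # learned deceleration, floored for safety
        om = max(om - a * dt, 0.0)
        th += om * dt
    return th

print(predict_rest_angle(theta0=0.1, omega0=1.2))
```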
Intelligent Robots and Systems | 2014
Felix Endres; Christoph Sprunk; Rainer Kümmerle; Wolfram Burgard
The typically restricted field of view of visual sensors often imposes limitations on the performance of localization and simultaneous localization and mapping (SLAM) approaches. In this paper, we propose and analyze the combination of an RGB-D camera with two planar mirrors to split the field of view such that it covers both front and rear view of a mobile robot. We describe how to estimate the extrinsic calibration parameters of the modified sensor using a standard parametrization and a reduced one that exploits the properties of the setup. Our experimental evaluation on real-world data demonstrates the robustness of the calibration procedure. Additionally, we show that our proposed sensor modification substantially improves the accuracy and the robustness in a simultaneous localization and mapping task.
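The key geometric ingredient is the reflection of measurements across the mirror planes. The sketch below shows the basic operation for a calibrated planar mirror with unit normal n and offset d; the numbers are illustrative, not calibration results from the paper.

```python
import numpy as np

def reflect(points, n, d):
    """Reflect Nx3 points across the plane {x : n·x = d}, with n a unit normal."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return points - 2.0 * ((points @ n) - d)[:, None] * n

# Example: a mirror tilted 45 degrees redirects part of the camera's view to the rear.
mirror_normal = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
mirror_offset = 0.05                                    # distance of the plane from the origin [m]
pts_seen_via_mirror = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.2]])
print(reflect(pts_seen_via_mirror, mirror_normal, mirror_offset))
```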
Automatisierungstechnik | 2012
Felix Endres; Jürgen Hess; Nikolas Engelhard; Jürgen Sturm; Wolfram Burgard
Summary: To automate complex manipulation tasks in dynamic or unknown environments, the control software of an autonomous robot requires a representation of the workspace with which collision-free execution of the task can be guaranteed. This article describes a new system for building 3D environment representations from the RGB-D data of novel cameras such as the Microsoft Kinect. Because it does not depend on additional sensors, the approach is particularly well suited to complement purely image-based control systems.
Abstract: This paper presents an approach to 6-DOF simultaneous localization and mapping (SLAM) particularly suited for collision avoidance in visually guided robotic manipulation tasks in dynamic or unknown environments. We exploit the properties of novel RGB-D sensors such as the Microsoft Kinect to build highly accurate voxel maps.
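As an illustration of how depth data can be turned into a workspace representation for collision checking, the sketch below back-projects a depth image and marks occupied voxels in a sparse grid. A plain dictionary of voxel keys stands in for the probabilistic voxel map used by the system, and the intrinsics are assumed typical Kinect values.

```python
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5             # typical Kinect intrinsics (assumed)

def insert_depth_image(voxels, depth, pose_R, pose_t, resolution=0.05):
    """Back-project a depth image (meters), transform by the camera pose, mark voxels occupied."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    pts_cam = np.column_stack([(u - cx) * z / fx, (v - cy) * z / fy, z])
    pts_world = pts_cam @ pose_R.T + pose_t
    for key in map(tuple, np.floor(pts_world / resolution).astype(int)):
        voxels[key] = True                               # mark the containing voxel as occupied
    return voxels

def is_colliding(voxels, point, resolution=0.05):
    """Check whether a query point falls into an occupied voxel."""
    return tuple(np.floor(np.asarray(point) / resolution).astype(int)) in voxels

# Usage with a synthetic depth image and an identity camera pose
depth = np.zeros((480, 640)); depth[200:280, 300:340] = 1.5
voxels = insert_depth_image({}, depth, np.eye(3), np.zeros(3))
print(len(voxels), is_colliding(voxels, (0.0, 0.0, 1.5)))
```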