
Publication


Featured research published by Bastian Steder.


International Conference on Robotics and Automation | 2011

Point feature extraction on 3D range scans taking into account object boundaries

Bastian Steder; Radu Bogdan Rusu; Kurt Konolige; Wolfram Burgard

In this paper we address the topic of feature extraction in 3D point cloud data for object recognition and pose identification. We present a novel interest keypoint extraction method that operates on range images generated from arbitrary 3D point clouds, which explicitly considers the borders of the objects identified by transitions from foreground to background. We furthermore present a feature descriptor that takes the same information into account. We have implemented our approach and present rigorous experiments in which we analyze the individual components with respect to their repeatability and matching capabilities and evaluate the usefulness for point feature based object detection methods.
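The central idea above, treating foreground-to-background transitions in the range image as object borders, can be sketched as a simple depth-jump test. This is an illustrative toy, not the authors' actual implementation; the threshold and data are made up:

```python
import numpy as np

def border_pixels(range_image, jump_threshold=0.5):
    """Mark pixels where the range jumps sharply to the right neighbor,
    i.e. a foreground-to-background transition (an object border).
    A toy stand-in for the border handling described above."""
    diff = range_image[:, 1:] - range_image[:, :-1]
    borders = np.zeros_like(range_image, dtype=bool)
    borders[:, :-1] = diff > jump_threshold  # neighbor is much farther away
    return borders

# Toy one-row range image: an object at 2 m in front of a wall at 5 m.
img = np.array([[5.0, 5.0, 2.0, 2.0, 2.0, 5.0, 5.0]])
# Only the last foreground pixel (column 4) is flagged as a border.
borders = border_pixels(img)
```

A real implementation would also examine the other neighbor directions and handle invalid (maximum-range) measurements.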


Autonomous Robots | 2009

On measuring the accuracy of SLAM algorithms

Rainer Kümmerle; Bastian Steder; Christian Dornhege; Michael Ruhnke; Giorgio Grisetti; Cyrill Stachniss; Alexander Kleiner

In this paper, we address the problem of creating an objective benchmark for evaluating SLAM approaches. We propose a framework for analyzing the results of a SLAM approach based on a metric for measuring the error of the corrected trajectory. This metric uses only relative relations between poses and does not rely on a global reference frame. This overcomes serious shortcomings of approaches using a global reference frame to compute the error. Our method furthermore allows us to compare SLAM approaches that use different estimation techniques or different sensor modalities since all computations are made based on the corrected trajectory of the robot. We provide sets of relative relations needed to compute our metric for an extensive set of datasets frequently used in the robotics community. The relations have been obtained by manually matching laser-range observations to avoid the errors caused by matching algorithms. Our benchmark framework allows the user to easily analyze and objectively compare different SLAM approaches.
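For intuition, the relative-relation idea can be written down for 2D poses: compare each estimated relative motion x_j ⊖ x_i against a reference relation δ*_ij and average the error, with no global frame involved. The sketch below covers only the translational part and uses invented toy data; the paper's metric treats rotational error as well:

```python
import math

def ominus(a, b):
    """Relative 2D transform a (-) b: pose a = (x, y, theta)
    expressed in the coordinate frame of pose b."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    c, s = math.cos(b[2]), math.sin(b[2])
    return (c * dx + s * dy, -s * dx + c * dy, a[2] - b[2])

def relation_error(estimated, relations):
    """Mean squared translational error over reference relations
    (i, j, delta_star): compares x_j (-) x_i against delta_star."""
    total = 0.0
    for i, j, delta_star in relations:
        delta = ominus(estimated[j], estimated[i])
        err = ominus(delta, delta_star)
        total += err[0] ** 2 + err[1] ** 2
    return total / len(relations)
```

A trajectory that reproduces every reference relation exactly scores zero, regardless of how it is placed in any global frame.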


Intelligent Robots and Systems | 2009

A comparison of SLAM algorithms based on a graph of relations

Wolfram Burgard; Cyrill Stachniss; Giorgio Grisetti; Bastian Steder; Rainer Kümmerle; Christian Dornhege; Michael Ruhnke; Alexander Kleiner; Juan D. Tardós

In this paper, we address the problem of creating an objective benchmark for comparing SLAM approaches. We propose a framework for analyzing the results of SLAM approaches based on a metric for measuring the error of the corrected trajectory. The metric uses only relative relations between poses and does not rely on a global reference frame. The idea is related to graph-based SLAM approaches in the sense that it considers the energy needed to deform the trajectory estimated by a SLAM approach to the ground truth trajectory. Our method enables us to compare SLAM approaches that use different estimation techniques or different sensor modalities since all computations are made based on the corrected trajectory of the robot. We provide sets of relative relations needed to compute our metric for an extensive set of datasets frequently used in the SLAM community. The relations have been obtained by manually matching laser-range observations. We believe that our benchmarking framework allows the user to easily analyze and objectively compare different SLAM approaches.


IEEE Transactions on Robotics | 2008

Visual SLAM for Flying Vehicles

Bastian Steder; Giorgio Grisetti; Cyrill Stachniss; Wolfram Burgard

The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost downlooking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later on be used for navigation. Key advantages of our approach are that it is comparably easy to implement, can robustly deal with noisy camera images, and can operate either with a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses that can then be used to address the simultaneous localization and mapping (SLAM) problem by applying graph methods. Furthermore, we address the problem of efficiently identifying loop closures. We performed several experiments with flying vehicles that demonstrate that our method is able to construct maps of large outdoor and indoor environments.
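The distinguishing trait of PROSAC, drawing hypotheses from a progressively growing prefix of the best-ranked correspondences instead of uniformly as in RANSAC, can be sketched on a toy 2D translation model. The real algorithm uses a principled growth schedule and stopping criterion; all names and data here are illustrative:

```python
import random
import numpy as np

def prosac_translation(matches, scores, thresh=0.1, iters=200, seed=0):
    """Estimate a 2D translation between matched point pairs.
    matches: list of ((px, py), (qx, qy)); scores: lower = better match.
    Hypotheses are drawn from a growing prefix of the best-ranked
    matches, so promising correspondences are tried first (PROSAC idea)."""
    rng = random.Random(seed)
    order = sorted(range(len(matches)), key=lambda i: scores[i])  # best first
    best_t, best_inliers = None, -1
    for it in range(iters):
        n = min(len(matches), 2 + it * len(matches) // iters)  # prefix grows
        p, q = matches[order[rng.randrange(n)]]
        t = np.subtract(q, p)  # minimal sample: one match fixes a translation
        inliers = sum(np.linalg.norm(np.subtract(qq, pp) - t) < thresh
                      for pp, qq in matches)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Three consistent matches (translation (1, 2)) and one outlier,
# which is also ranked worst by its score.
matches = [((0, 0), (1, 2)), ((1, 0), (2, 2)), ((0, 1), (1, 3)), ((5, 5), (0, 0))]
scores = [0.1, 0.2, 0.3, 0.9]
t, n_inl = prosac_translation(matches, scores)
```

Because the outlier is ranked last, the early iterations already sample from inlier-only prefixes and recover the correct translation.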


International Conference on Robotics and Automation | 2010

Robust place recognition for 3D range data based on point features

Bastian Steder; Giorgio Grisetti; Wolfram Burgard

The problem of place recognition appears in different mobile robot navigation problems including localization, SLAM, or change detection in dynamic environments. Whereas this problem has been studied intensively in the context of robot vision, relatively few approaches are available for three-dimensional range data. In this paper, we present a novel and robust method for place recognition based on range images. Our algorithm matches a given 3D scan against a database using point features and scores potential transformations by comparing significant points in the scans. A further advantage of our approach is that the features allow for a computation of the relative transformations between scans which is relevant for registration processes. Our approach has been implemented and tested on different 3D data sets obtained outdoors. In several experiments we demonstrate the advantages of our approach also in comparison to existing techniques.
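The scoring step described above, judging a candidate transformation by how well significant points of one scan line up with the other after applying it, can be sketched as follows. This is a brute-force toy: the 3x3 homogeneous 2D transform and the distance threshold are assumptions for illustration, and the paper operates on 3D range data rather than 2D points:

```python
import numpy as np

def score_candidate(T, points_query, points_db, max_dist=0.2):
    """Fraction of query points that land within max_dist of some
    database point after applying the candidate transformation T
    (a 3x3 homogeneous 2D transform; brute-force nearest neighbor)."""
    R, t = T[:2, :2], T[:2, 2]
    moved = points_query @ R.T + t
    dists = np.linalg.norm(moved[:, None, :] - points_db[None, :, :], axis=2)
    return float(np.mean(dists.min(axis=1) < max_dist))
```

A correct candidate scores close to 1, while a wrong one leaves most points far from any counterpart, which is what allows false positives to be rejected.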


Intelligent Robots and Systems | 2009

Robust on-line model-based object detection from range images

Bastian Steder; Giorgio Grisetti; Mark Van Loock; Wolfram Burgard

A mobile robot that accomplishes high level tasks needs to be able to classify the objects in the environment and to determine their location. In this paper, we address the problem of online object detection in 3D laser range data. The object classes are represented by 3D point-clouds that can be obtained from a set of range scans. Our method relies on the extraction of point features from range images that are computed from the point-clouds. Compared to techniques that directly operate on a full 3D representation of the environment, our approach requires less computation time while retaining the robustness of full 3D matching. Experiments demonstrate that the proposed approach is even able to deal with partially occluded scenes and to fulfill the runtime requirements of online applications.


Intelligent Robots and Systems | 2007

Learning maps in 3D using attitude and noisy vision sensors

Bastian Steder; Giorgio Grisetti; Slawomir Grzonka; Cyrill Stachniss; Axel Rottmann; Wolfram Burgard

In this paper, we address the problem of learning 3D maps of the environment using a cheap sensor setup which consists of two standard web cams and a low cost inertial measurement unit. This setup is designed for lightweight or flying robots. Our technique uses visual features extracted from the web cams and estimates the 3D location of the landmarks via stereo vision. Feature correspondences are estimated using a variant of the PROSAC algorithm. Our mapping technique constructs a graph of spatial constraints and applies an efficient gradient descent-based optimization approach to estimate the most likely map of the environment. Our approach has been evaluated in comparably large outdoor and indoor environments. We furthermore present experiments in which our technique is applied to build a map with a blimp.
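The stereo step mentioned above, estimating the 3D location of a landmark from the two web cams, comes down to triangulation from disparity in a rectified pair. A minimal pinhole-model sketch; the focal length and baseline values below are invented:

```python
def triangulate(u, v, disparity_px, focal_px, cx, cy, baseline_m):
    """3D point of a feature seen at pixel (u, v) in the left image
    of a rectified stereo pair with the given disparity:
    Z = f * B / d, then back-project X and Y through the pinhole model."""
    z = focal_px * baseline_m / disparity_px
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# Example: 500 px focal length, 10 cm baseline, feature at image center.
# A 10 px disparity puts the landmark 5 m in front of the camera.
point = triangulate(320, 240, 10.0, 500.0, 320.0, 240.0, 0.1)
```

Depth is inversely proportional to disparity, which is why a short baseline like this limits accuracy at long range.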


International Conference on Robotics and Automation | 2013

A navigation system for robots operating in crowded urban environments

Rainer Kümmerle; Michael Ruhnke; Bastian Steder; Cyrill Stachniss; Wolfram Burgard

Over the past years, there has been tremendous progress in the area of robot navigation. Most of the systems developed thus far, however, are restricted to indoor scenarios, non-urban outdoor environments, or road usage with cars. Urban areas introduce numerous challenges to autonomous mobile robots as they are highly complex and, in addition, dynamic. In this paper, we present a navigation system for pedestrian-like autonomous navigation with mobile robots in city environments. We describe different components including a SLAM system for dealing with huge maps of city centers, a planning approach for inferring feasible paths taking also into account the traversability and type of terrain, and a method for accurate localization in dynamic environments. The navigation system has been implemented and tested in several large-scale field tests in which the robot Obelix managed to autonomously navigate from our university campus over a 3.3 km long route to the city center of Freiburg.


Intelligent Robots and Systems | 2011

Place recognition in 3D scans using a combination of bag of words and point feature based relative pose estimation

Bastian Steder; Michael Ruhnke; Slawomir Grzonka; Wolfram Burgard

Place recognition, i.e., the ability to recognize previously seen parts of the environment, is one of the fundamental tasks in mobile robotics. The wide range of applications of place recognition includes localization (determine the initial pose), SLAM (detect loop closures), and change detection in dynamic environments. In the past, only relatively little work has been carried out to attack this problem using 3D range data and the majority of approaches focuses on detecting similar structures without estimating relative poses. In this paper, we present an algorithm based on 3D range data that is able to reliably detect previously seen parts of the environment and at the same time calculates an accurate transformation between the corresponding scan-pairs. Our system uses the estimated transformation to evaluate a candidate and in this way to more robustly reject false positives for place recognition. We present an extensive set of experiments using publicly available datasets in which we compare our system to other state-of-the-art approaches.


International Conference on Robotics and Automation | 2009

Unsupervised learning of 3D object models from partial views

Michael Ruhnke; Bastian Steder; Giorgio Grisetti; Wolfram Burgard

We present an algorithm for learning 3D object models from partial object observations. The input to our algorithm is a sequence of 3D laser range scans. Models learned from the objects are represented as point clouds. Our approach can deal with partial views and it can robustly learn accurate models from complex scenes. It is based on an iterative matching procedure which attempts to recursively merge similar models. The alignment between models is determined using a novel scan registration procedure based on range images. The decision about which models to merge is performed by spectral clustering of a similarity matrix whose entries represent the consistency between different models.
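The merge decision via spectral clustering of a similarity matrix can be illustrated with a minimal bipartition based on the Fiedler vector of the graph Laplacian. This is a simplified stand-in: the similarity values below are invented, and the paper's procedure merges models iteratively rather than in one shot:

```python
import numpy as np

def spectral_bipartition(S):
    """Split items into two clusters from a symmetric similarity
    matrix S: take the sign pattern of the eigenvector belonging to
    the second-smallest eigenvalue (the Fiedler vector) of the
    unnormalized graph Laplacian."""
    L = np.diag(S.sum(axis=1)) - S    # graph Laplacian
    _, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    return eigvecs[:, 1] >= 0         # sign split of the Fiedler vector

# Four partial models: {0, 1} are highly consistent, so are {2, 3};
# the two groups are only weakly similar to each other.
S = np.array([[0.00, 1.00, 0.01, 0.01],
              [1.00, 0.00, 0.01, 0.01],
              [0.01, 0.01, 0.00, 1.00],
              [0.01, 0.01, 1.00, 0.00]])
labels = spectral_bipartition(S)  # models sharing a label are merge candidates
```

The sign split groups the two consistent pairs together while keeping the weakly related groups apart.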

Collaboration


Dive into Bastian Steder's collaboration.

Top Co-Authors


Giorgio Grisetti

Sapienza University of Rome
