Kai M. Wurm
University of Freiburg
Publications
Featured research published by Kai M. Wurm.
Intelligent Robots and Systems | 2008
Kai M. Wurm; Cyrill Stachniss; Wolfram Burgard
This paper addresses the problem of exploring an unknown environment with a team of mobile robots. The key issue in coordinated multi-robot exploration is how to assign target locations to the individual robots such that the overall mission time is minimized. In this paper, we propose a novel approach to distribute the robots over the environment that takes into account the structure of the environment. To achieve this, it partitions the space into segments, for example, corresponding to individual rooms. Instead of only considering frontiers between unknown and explored areas as target locations, we send the robots to the individual segments with the task of exploring the corresponding area. Our approach has been implemented and tested in simulation as well as in real-world experiments. The experiments demonstrate that the overall exploration time can be significantly reduced by using our segmentation method.
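To make the assignment step concrete, the sketch below pairs robots with unexplored segments by solving a linear assignment problem over estimated travel costs. It is a minimal reading of the abstract, not the published algorithm: the function name, the use of segment centroids, and the Euclidean cost are assumptions, whereas the paper computes travel costs on the map and lets each robot explore the frontiers inside its assigned segment.

```python
# Minimal sketch: assigning robots to environment segments by travel cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_robots_to_segments(robot_positions, segment_centroids):
    """Assign each robot to one unexplored segment (names are assumptions).

    Costs are Euclidean distances to segment centroids; the paper computes
    path costs on the current map instead.
    """
    robots = np.asarray(robot_positions, dtype=float)
    segments = np.asarray(segment_centroids, dtype=float)
    # cost[i, j]: estimated travel cost for robot i to reach segment j
    cost = np.linalg.norm(robots[:, None, :] - segments[None, :, :], axis=2)
    robot_idx, segment_idx = linear_sum_assignment(cost)
    return dict(zip(robot_idx.tolist(), segment_idx.tolist()))

# Toy usage: two robots, three candidate segments.
print(assign_robots_to_segments([(0.0, 0.0), (5.0, 5.0)],
                                [(1.0, 0.5), (6.0, 4.0), (10.0, 10.0)]))
```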
Intelligent Robots and Systems | 2010
Armin Hornung; Kai M. Wurm
In this paper, we present a localization method for humanoid robots navigating in arbitrarily complex indoor environments using only onboard sensing. Reliable and accurate localization for humanoid robots operating in such environments is a challenging task. First, humanoids typically execute motion commands rather inaccurately, and odometry can be estimated only very roughly. Second, the observations of the small and lightweight sensors of most humanoids are seriously affected by noise. Third, since most humanoids walk with a swaying motion and can move freely in the environment, e.g., they are not forced to walk on flat ground only, a 6D torso pose has to be estimated. We apply Monte Carlo localization to globally determine and track a humanoid's 6D pose in a 3D world model, which may contain multiple levels connected by staircases. To achieve a robust localization while walking and climbing stairs, we integrate 2D laser range measurements as well as attitude data and information from the joint encoders. We present simulated as well as real-world experiments with our humanoid and thoroughly evaluate our approach. As the experiments illustrate, the robot is able to globally localize itself and accurately track its 6D pose over time.
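The core of the method is Monte Carlo localization. The following sketch shows the generic predict/weight/resample cycle of a particle filter on a planar (x, y, yaw) state, which is a deliberate simplification: the paper estimates a full 6D torso pose in a 3D multi-level model and fuses laser range data, attitude estimates, and joint-encoder readings, none of which are modeled here. All names and noise parameters are illustrative assumptions.

```python
# Minimal sketch: one predict / weight / resample cycle of Monte Carlo localization.
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, weights, odom_delta, measurement, meas_model, motion_noise=0.05):
    """One MCL update on planar (x, y, yaw) particles (a strong simplification)."""
    # Predict: apply the noisy odometry increment to every particle.
    particles = particles + odom_delta + rng.normal(0.0, motion_noise, particles.shape)
    # Weight: evaluate the measurement likelihood for every particle.
    weights = weights * np.array([meas_model(p, measurement) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def range_likelihood(pose, measured_range, sigma=0.2):
    """Toy measurement model: a single range reading to a beacon at the origin."""
    predicted = np.linalg.norm(pose[:2])
    return np.exp(-0.5 * ((measured_range - predicted) / sigma) ** 2)

particles = rng.uniform(-1.0, 1.0, size=(500, 3))
weights = np.full(500, 1.0 / 500)
particles, weights = mcl_step(particles, weights, np.array([0.1, 0.0, 0.0]), 1.0, range_likelihood)
print(particles.mean(axis=0))
```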
Intelligent Robots and Systems | 2009
Kai M. Wurm; Rainer Kümmerle; Cyrill Stachniss; Wolfram Burgard
This paper addresses the problem of vegetation detection from laser measurements. The ability to detect vegetation is important for robots operating outdoors, since it enables a robot to navigate more efficiently and safely in such environments. In this paper, we propose a novel approach for detecting low, grass-like vegetation using laser remission values. In our algorithm, the laser remission is modeled as a function of distance, incidence angle, and material. We classify surface terrain based on 3D scans of the surroundings of the robot. The model is learned in a self-supervised way using vibration-based terrain classification. In all real-world experiments we carried out, our approach yields a classification accuracy of over 99%. We furthermore illustrate how the learned classifier can improve the autonomous navigation capabilities of mobile robots.
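As a toy illustration of modeling remission as a function of distance and incidence angle per terrain class, the sketch below fits one linear remission model per class and classifies a measurement by the model with the smallest residual. The linear form, the function names, and the synthetic data are assumptions; in the paper the training labels come from a vibration-based terrain classifier while the robot drives over the terrain.

```python
# Minimal sketch: per-class remission models and nearest-model classification.
import numpy as np

def fit_remission_model(distance, incidence, remission):
    """Least-squares fit of remission ~ a*distance + b*cos(incidence) + c."""
    A = np.column_stack([distance, np.cos(incidence), np.ones_like(distance)])
    coeffs, *_ = np.linalg.lstsq(A, remission, rcond=None)
    return coeffs

def classify_terrain(distance, incidence, remission, models):
    """Pick the terrain class whose remission model best explains the measurement."""
    features = np.array([distance, np.cos(incidence), 1.0])
    residuals = {label: abs(remission - features @ c) for label, c in models.items()}
    return min(residuals, key=residuals.get)

# Toy usage with synthetic, self-supervised (vibration-labeled) training data.
rng = np.random.default_rng(1)
d, a = rng.uniform(1, 10, 200), rng.uniform(0, 1.2, 200)
models = {
    "street": fit_remission_model(d, a, 0.5 * d + 2.0 * np.cos(a) + rng.normal(0, 0.1, 200)),
    "grass": fit_remission_model(d, a, 0.2 * d + 0.5 * np.cos(a) + 3.0 + rng.normal(0, 0.1, 200)),
}
print(classify_terrain(5.0, 0.3, 0.5 * 5.0 + 2.0 * np.cos(0.3), models))  # -> "street"
```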
International Conference on Robotics and Automation | 2012
Alexander Cunningham; Kai M. Wurm; Wolfram Burgard; Frank Dellaert
In this paper we focus on the multi-robot perception problem and present an experimentally validated end-to-end multi-robot mapping framework, enabling individual robots in a team to see beyond their individual sensor horizons. The inference part of our system is the DDF-SAM algorithm [1], which provides a decentralized communication and inference scheme but does not by itself address the crucial issue of data association. One key contribution is a novel RANSAC-based approach for performing the between-robot data associations and the initialization of relative frames of reference. We demonstrate this system both on data collected in real-robot experiments and in a large-scale simulated experiment that shows the scalability of the proposed approach.
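The between-robot data association can be pictured as estimating a relative transform that is supported by many landmark correspondences. The sketch below runs a plain RANSAC loop over putative 2D landmark matches and refines the winning rigid transform on its inliers; it only illustrates the general idea, and the sample size, thresholds, and function names are assumptions rather than the DDF-SAM implementation.

```python
# Minimal sketch: RANSAC estimate of the relative 2D frame between two robot maps.
import numpy as np

rng = np.random.default_rng(2)

def rigid_transform_2d(src, dst):
    """Least-squares rotation and translation mapping src points onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst.mean(0) - R @ src.mean(0)

def ransac_alignment(src, dst, iters=200, inlier_thresh=0.3):
    """src[i] and dst[i] are putative landmark correspondences between two robots."""
    best_R, best_t, best_inliers = np.eye(2), np.zeros(2), np.array([], dtype=int)
    for _ in range(iters):
        sample = rng.choice(len(src), size=2, replace=False)
        R, t = rigid_transform_2d(src[sample], dst[sample])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = np.flatnonzero(err < inlier_thresh)
        if len(inliers) > len(best_inliers):
            best_R, best_t, best_inliers = R, t, inliers
    if len(best_inliers) >= 2:            # refine on all inliers of the best hypothesis
        best_R, best_t = rigid_transform_2d(src[best_inliers], dst[best_inliers])
    return best_R, best_t

# Toy usage: landmarks seen by robot A, re-observed by robot B in a shifted, rotated frame.
lm_a = rng.uniform(-5, 5, size=(30, 2))
c, s = np.cos(0.4), np.sin(0.4)
lm_b = lm_a @ np.array([[c, -s], [s, c]]).T + np.array([1.0, -2.0]) + rng.normal(0, 0.05, (30, 2))
R, t = ransac_alignment(lm_a, lm_b)
print(np.round(t, 2))                     # approximately [ 1. -2.]
```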
Robotics and Autonomous Systems | 2010
Kai M. Wurm; Cyrill Stachniss; Giorgio Grisetti
One important design decision for the development of autonomously navigating mobile robots is the choice of the representation of the environment. This includes the question of which type of features should be used, or whether a dense representation such as occupancy grid maps is more appropriate. In this paper, we present an approach which performs SLAM using multiple representations of the environment simultaneously. It uses reinforcement learning to decide when to switch to an alternative representation method depending on the current observation. This allows the robot to update its pose and map estimate based on the representation that best models the current surroundings of the robot. The approach has been implemented on a real robot and evaluated in scenarios in which a robot has to navigate indoors and outdoors and therefore switches between a landmark-based representation and a dense grid map. In practical experiments, we demonstrate that our approach allows a robot to robustly map environments which cannot be adequately modeled by either of the individual representations.
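The switching decision can be framed as a small reinforcement-learning problem: given a feature of the current observation, choose which representation to update. The sketch below uses a tabular Q-learning agent with an epsilon-greedy policy; the state discretization, the reward signal, and all names are placeholders and not the formulation used in the paper.

```python
# Minimal sketch: epsilon-greedy selection of the active map representation.
import random
from collections import defaultdict

ACTIONS = ["landmark_slam", "grid_slam"]

class RepresentationSelector:
    """Tabular Q-learning over a discretized observation feature (placeholder state)."""

    def __init__(self, epsilon=0.1, alpha=0.3, gamma=0.9):
        self.q = defaultdict(float)
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)                        # explore
        return max(ACTIONS, key=lambda a: self.q[(state, a)])    # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Toy usage: the reward would reflect the quality of the resulting pose and map estimate.
selector = RepresentationSelector()
action = selector.choose(state="structured_scan")
selector.update("structured_scan", action, reward=1.0, next_state="open_field_scan")
print(action)
```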
Robotics and Autonomous Systems | 2014
Kai M. Wurm; Henrik Kretzschmar; Rainer Kümmerle; Cyrill Stachniss; Wolfram Burgard
The ability to reliably detect vegetation is an important requirement for outdoor navigation with mobile robots as it enables the robot to navigate more efficiently and safely. In this paper, we present an approach to detect flat vegetation, such as grass, which cannot be identified using range measurements. This type of vegetation is typically found in structured outdoor environments such as parks or campus sites. Our approach classifies the terrain in the vicinity of the robot based on laser scans and makes use of the fact that plants exhibit specific reflection properties. It uses a support vector machine to learn a classifier for distinguishing vegetation from streets based on laser reflectivity, measured distance, and the incidence angle. In addition, it employs a vibration-based classifier to acquire training data in a self-supervised way and thus reduces manual work. Our approach has been evaluated extensively in real-world experiments using several mobile robots. We furthermore evaluated it with different types of sensors and in the context of mapping, autonomous navigation, and exploration experiments. In addition, we compared it to an approach based on linear discriminant analysis. In our real-world experiments, our approach yields a classification accuracy close to 100%.
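A minimal version of the learning step, assuming scikit-learn is available, is shown below: an SVM is trained on (reflectivity, distance, incidence angle) feature vectors whose labels are supplied by the vibration-based classifier. The kernel, scaling, function names, and synthetic data are illustrative defaults, not the published setup.

```python
# Minimal sketch: SVM terrain classifier trained on self-supervised labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_vegetation_classifier(reflectivity, distance, incidence, labels):
    """Fit an SVM on (reflectivity, distance, incidence angle) feature vectors.

    `labels` is assumed to come from the vibration-based classifier while the
    robot drives over the terrain; kernel and scaling are illustrative defaults.
    """
    X = np.column_stack([reflectivity, distance, incidence])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf

# Toy usage with synthetic data: 0 = street, 1 = vegetation.
rng = np.random.default_rng(3)
n = 400
dist, ang = rng.uniform(1, 10, n), rng.uniform(0, 1.2, n)
refl = np.where(rng.random(n) < 0.5, 0.3 * dist, 0.3 * dist + 2.0)
labels = (refl > 0.3 * dist + 1.0).astype(int)
clf = train_vegetation_classifier(refl, dist, ang, labels)
print(clf.predict([[0.3 * 5.0 + 2.0, 5.0, 0.4]]))   # -> [1]
```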
Intelligent Robots and Systems | 2011
Kai M. Wurm; Daniel Hennes; Dirk Holz; Radu Bogdan Rusu; Cyrill Stachniss; Kurt Konolige; Wolfram Burgard
In this paper, we present a novel multi-resolution approach for efficiently mapping 3D environments. Our representation models the environment as a hierarchy of probabilistic 3D maps, in which each submap is updated and transformed individually. In addition to the formal description of the approach, we present an implementation for tabletop manipulation tasks and an information-driven exploration algorithm for autonomously building a hierarchical map from sensor data. We evaluate our approach using real-world as well as simulated data. The results demonstrate that our method is able to efficiently represent 3D environments at high levels of detail. Compared to a monolithic approach, our maps can be generated significantly faster while requiring significantly less memory.
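The hierarchy can be thought of as a tree of probabilistic submaps, each with its own resolution and pose. The sketch below uses a dictionary-based voxel grid with log-odds updates purely for brevity; the field names and update constants are assumptions, and the actual implementation builds on octree-based 3D maps.

```python
# Minimal sketch: a hierarchy of probabilistic 3D submaps with individual poses.
from dataclasses import dataclass, field
from math import exp, floor

@dataclass
class Submap:
    """A probabilistic voxel grid at its own resolution and (translational) pose."""
    resolution: float
    origin: tuple = (0.0, 0.0, 0.0)
    log_odds: dict = field(default_factory=dict)   # voxel key -> log-odds occupancy
    children: list = field(default_factory=list)   # finer submaps attached to this one

    def _key(self, point):
        local = tuple(p - o for p, o in zip(point, self.origin))
        return tuple(floor(c / self.resolution) for c in local)

    def update(self, point, occupied, l_occ=0.85, l_free=-0.4):
        key = self._key(point)
        self.log_odds[key] = self.log_odds.get(key, 0.0) + (l_occ if occupied else l_free)

    def occupancy(self, point):
        return 1.0 - 1.0 / (1.0 + exp(self.log_odds.get(self._key(point), 0.0)))

# Toy usage: a coarse background map with a fine submap for a tabletop region.
world = Submap(resolution=0.1)
table = Submap(resolution=0.01, origin=(1.0, 0.5, 0.7))
world.children.append(table)
table.update((1.05, 0.52, 0.71), occupied=True)
print(round(table.occupancy((1.05, 0.52, 0.71)), 2))   # -> 0.7
```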
International Conference on Robotics and Automation | 2010
Michael Karg; Kai M. Wurm; Cyrill Stachniss; Klaus Dietmayer; Wolfram Burgard
In the past, there has been a tremendous advance in the area of simultaneous localization and mapping (SLAM). However, there are relatively few approaches for incorporating prior information or knowledge about structural similarities into the mapping process. Consider, for example, office buildings in which most of the offices have an identical geometric layout. The same typically holds for the individual stories of buildings. In this paper, we propose an approach for generating alignment constraints between different floors of the same building in the context of graph-based SLAM. This is done under the assumption that the individual floors of a building share at least some structural properties. To identify such areas, we apply a particle filter-based localization approach using maps and observations from different floors. We evaluate our system using several real datasets as well as in simulation. The results demonstrate that our approach is able to correctly align multiple floors and allows the robot to generate consistent models of multi-story buildings.
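Conceptually, each converged cross-floor localization yields one additional edge for the pose graph. The sketch below turns a list of hypothetical localization results into such alignment constraints, discarding low-confidence hypotheses; the data layout, field names, and scalar information value are assumptions made purely for illustration.

```python
# Minimal sketch: turning converged cross-floor localizations into pose-graph edges.
from dataclasses import dataclass

@dataclass
class Constraint:
    """A pose-graph edge: node_b observed relative to node_a."""
    node_a: int
    node_b: int
    relative_pose: tuple    # (dx, dy, dtheta)
    information: float      # scalar confidence, a simplification of an information matrix

def alignment_constraints(localizations, min_confidence=0.8):
    """Keep only well-supported inter-floor alignments and emit graph edges."""
    edges = []
    for loc in localizations:
        if loc["confidence"] < min_confidence:
            continue        # discard ambiguous, non-converged estimates
        edges.append(Constraint(loc["node_floor_a"], loc["node_floor_b"],
                                loc["relative_pose"], loc["confidence"]))
    return edges

# Toy usage: two localization hypotheses, only the confident one becomes an edge.
edges = alignment_constraints([
    {"node_floor_a": 3, "node_floor_b": 17, "relative_pose": (0.1, -0.2, 0.01), "confidence": 0.93},
    {"node_floor_a": 5, "node_floor_b": 21, "relative_pose": (2.3, 1.1, 0.40), "confidence": 0.41},
])
print(len(edges))   # -> 1
```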
Intelligent Robots and Systems | 2010
Kai M. Wurm; Christian Dornhege; Patrick Eyerich; Cyrill Stachniss; Bernhard Nebel; Wolfram Burgard
The problem of autonomously exploring an environment with a team of robots has received considerable attention in the past. However, there are relatively few approaches to coordinate teams of robots that are able to deploy and retrieve other robots. Efficiently coordinating the exploration with such marsupial robots requires advanced planning mechanisms that are able to consider symbolic deployment and retrieval actions. In this paper, we propose a novel approach for coordinating the exploration with marsupial robot teams. Our method integrates a temporal symbolic planner that explicitly considers deployment and retrieval actions with a traditional cost-based assignment procedure. Our approach has been implemented and evaluated in several simulated environments and with varying team sizes. The results demonstrate that our proposed method is able to coordinate marsupial teams of robots to efficiently explore unknown environments.
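A rough picture of how deployment actions enter the assignment is given below: a greedy target assignment charges a fixed extra cost whenever the selected robot still has to be deployed by its carrier. This flat penalty, and all names, are assumptions; the paper instead obtains these costs from a temporal symbolic planner that reasons explicitly about deploy and retrieve actions.

```python
# Minimal sketch: cost-based target assignment that accounts for deployment.
from dataclasses import dataclass
from math import dist

DEPLOY_COST = 5.0   # assumed flat penalty for a deployment action

@dataclass
class Robot:
    name: str
    position: tuple
    carried_by: str = ""   # name of the carrier if the robot is still docked

def assign_targets(robots, targets):
    """Greedily assign exploration targets, charging extra for undeployed robots."""
    assignment, free_targets = {}, list(targets)
    for robot in robots:
        if not free_targets:
            break
        extra = DEPLOY_COST if robot.carried_by else 0.0
        best = min(free_targets, key=lambda t: dist(robot.position, t) + extra)
        assignment[robot.name] = best
        free_targets.remove(best)
    return assignment

# Toy usage: a carrier with a docked scout and two exploration targets.
team = [Robot("carrier", (0.0, 0.0)), Robot("scout", (0.0, 0.0), carried_by="carrier")]
print(assign_targets(team, [(2.0, 1.0), (8.0, 8.0)]))
```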
Autonomous Robots | 2013
Kai M. Wurm; Christian Dornhege; Bernhard Nebel; Wolfram Burgard; Cyrill Stachniss
The efficient coordination of a team of heterogeneous robots is an important requirement for exploration, rescue, and disaster recovery missions. In this paper, we present a novel approach to target assignment for heterogeneous teams of robots. It goes beyond existing target assignment algorithms in that it explicitly takes symbolic actions into account. Such actions include the deployment and retrieval of other robots or manipulation tasks. Our method integrates a temporal planning approach with a traditional cost-based planner. The proposed approach was implemented and evaluated in two distinct settings. First, we coordinated teams of marsupial robots. Such robots are able to deploy and pick up smaller robots. Second, we simulated a disaster scenario where the task is to clear blockades and reach certain critical locations in the environment. A similar setting was also investigated using a team of real robots. The results show that our approach outperforms ad-hoc extensions of state-of-the-art cost-based coordination methods and that the approach is able to efficiently coordinate teams of heterogeneous robots and to consider symbolic actions.
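To illustrate what "symbolic actions" means here, the sketch below defines durative actions with preconditions and effects and evaluates the duration of a sequential plan. Every action, fact, and duration is invented for illustration; the planner used in the paper handles concurrent durative actions and feeds the resulting costs into the target assignment.

```python
# Minimal sketch: durative symbolic actions and the cost of a sequential plan.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    duration: float
    preconditions: frozenset
    effects: frozenset

def plan_duration(plan, initial_facts):
    """Check a sequential plan and return its total duration."""
    facts, total = set(initial_facts), 0.0
    for action in plan:
        if not action.preconditions <= facts:
            raise ValueError(f"precondition of {action.name} not satisfied")
        facts |= action.effects
        total += action.duration
    return total

# Toy usage: the scout has to be deployed before it can clear a blockade.
deploy = Action("deploy_scout", 10.0, frozenset({"scout_docked"}), frozenset({"scout_free"}))
clear = Action("clear_blockade", 30.0, frozenset({"scout_free"}), frozenset({"path_open"}))
print(plan_duration([deploy, clear], {"scout_docked"}))   # -> 40.0
```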