Publication


Featured research published by Thomas Wiemann.


Intelligent Robots and Systems | 2013

Building semantic object maps from sparse and noisy 3D data

Martin Günther; Thomas Wiemann; Sven Albrecht; Joachim Hertzberg

We present an approach to create a semantic map of an indoor environment, based on a series of 3D point clouds captured by a mobile robot using a Kinect camera. The proposed system reconstructs the surfaces in the point clouds, detects different types of furniture and estimates their poses. The result is a consistent mesh representation of the environment enriched by CAD models corresponding to the detected pieces of furniture. We evaluate our approach directly on each individual frame of two datasets totaling over 800 frames.


International Symposium on Safety, Security, and Rescue Robotics | 2010

Automatic construction of polygonal maps from point cloud data

Thomas Wiemann; Andreas Nüchter; Kai Lingemann; Stefan Stiene; Joachim Hertzberg

This paper presents a novel approach to creating polygonal maps from 3D point cloud data. The resulting map is augmented with an interpretation of the scene. Our procedure produces accurate maps of indoor environments quickly and reliably. These maps have been used successfully by different robots with varying sensor configurations for self-localization.


27th Conference on Modelling and Simulation | 2013

Automatic Map Creation For Environment Modelling In Robotic Simulators

Thomas Wiemann; Kai Lingemann; Joachim Hertzberg

This paper presents an approach to automatically create polygonal maps for environment modeling in simulators based on 3D point cloud data gathered from 3D sensors like laser scanners or RGB-D cameras. The input point clouds are polygonalized using a modified Marching Cubes algorithm and optimized using a pipeline of mesh reduction and filtering steps. Optionally, color information from the point clouds can be used to generate textures for the reconstructed geometry.
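The mesh-reduction stage mentioned in the abstract can be illustrated with a minimal vertex-clustering simplifier, a generic technique: vertices are snapped to a voxel grid and collapsed to per-cell centroids. This is a sketch only; the paper's actual reduction pipeline and parameters are not shown here, and the function name is invented.

```python
from collections import defaultdict

def cluster_simplify(vertices, triangles, cell=0.5):
    """Reduce a triangle mesh by snapping vertices to a voxel grid of
    size `cell` and collapsing all vertices in one cell to their centroid.
    Triangles whose corners fall into fewer than three cells degenerate
    and are dropped."""
    cell_of = [tuple(int(c // cell) for c in v) for v in vertices]
    # accumulate per-cell centroids: [sum_x, sum_y, sum_z, count]
    acc = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
    for v, key in zip(vertices, cell_of):
        a = acc[key]
        a[0] += v[0]; a[1] += v[1]; a[2] += v[2]; a[3] += 1
    new_index = {key: i for i, key in enumerate(acc)}
    new_vertices = [(a[0] / a[3], a[1] / a[3], a[2] / a[3]) for a in acc.values()]
    new_triangles = []
    for i, j, k in triangles:
        t = (new_index[cell_of[i]], new_index[cell_of[j]], new_index[cell_of[k]])
        if len(set(t)) == 3:  # drop triangles collapsed to an edge or point
            new_triangles.append(t)
    return new_vertices, new_triangles
```

On a mesh where two vertices share a grid cell, the pair merges into one centroid and any triangle that used both corners disappears.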


KI'11 Proceedings of the 34th Annual German conference on Advances in artificial intelligence | 2011

Model-based object recognition from 3D laser data

Martin Günther; Thomas Wiemann; Sven Albrecht; Joachim Hertzberg

This paper presents a method for recognizing objects in 3D point clouds. Based on a structural model of these objects, we generate hypotheses for the location and 6DoF pose of these models and verify them by matching a CAD model of the object into the point cloud. Our method only needs a CAD model of each object class; no previous training is required.


Artificial Intelligence | 2017

Model-based furniture recognition for building semantic object maps

Martin Günther; Thomas Wiemann; Sven Albrecht; Joachim Hertzberg

This paper presents an approach to creating a semantic map of an indoor environment incrementally and in closed loop, based on a series of 3D point clouds captured by a mobile robot using an RGB-D camera. Based on a semantic model about furniture objects (represented in an OWL-DL ontology with rules attached), we generate hypotheses for locations and 6DoF poses of object instances and verify them by matching a geometric model of the object (given as a CAD model) into the point cloud. The result, in addition to the registered point cloud, is a consistent mesh representation of the environment, further enriched by object models corresponding to the detected pieces of furniture. We demonstrate the robustness of our approach against occlusion and aperture limitations of the RGB-D frames, and against differences between the CAD models and the real objects. We evaluate the complete system on two challenging datasets featuring partial visibility and totaling over 800 frames. The results show complementary strengths and weaknesses of processing each frame directly vs. processing the fully registered scene, which accord with intuitive expectations.
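The hypothesize-and-verify step described above can be caricatured in 2D: score a candidate pose by the fraction of transformed model points that find a nearby scene point. This is translation-only and toy-scale; the paper matches full CAD models at 6DoF, and the function name and threshold here are illustrative, not from the paper.

```python
def verify_pose(model_pts, scene_pts, pose, thresh=0.2):
    """Score a pose hypothesis: the fraction of model points that, after
    applying the candidate 2D translation `pose`, lie within `thresh` of
    some scene point. A high score supports the hypothesis."""
    tx, ty = pose
    hits = 0
    for mx, my in model_pts:
        px, py = mx + tx, my + ty
        d2 = min((px - sx) ** 2 + (py - sy) ** 2 for sx, sy in scene_pts)
        if d2 <= thresh ** 2:
            hits += 1
    return hits / len(model_pts)
```

A correct hypothesis scores near 1.0, while a wrong one finds few matches and scores near 0.0.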


International Conference on Advanced Robotics | 2013

An evaluation of open source surface reconstruction software for robotic applications

Thomas Wiemann; Hendrik Annuth; Kai Lingemann; Joachim Hertzberg

Polygonal surface reconstruction is a growing field of interest in mobile robotics. Recently, several open source surface reconstruction software packages have become publicly available. This paper presents an extensive evaluation of several such packages, with emphasis on their usability in robotic applications. The main aspects of the evaluation are run time, accuracy and topological correctness of the generated polygon meshes.
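One of the evaluation criteria above, topological correctness, can be checked mechanically: in a watertight 2-manifold mesh, every undirected edge is shared by exactly two triangles. A minimal checker along those lines (illustrative, not the paper's evaluation code):

```python
from collections import Counter

def edge_manifold_report(triangles):
    """Count how often each undirected edge is used by the triangle list.
    In a watertight 2-manifold mesh every edge is shared by exactly two
    triangles; edges used once lie on a boundary, and edges used more
    than twice indicate a non-manifold configuration."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[(min(u, v), max(u, v))] += 1
    return {
        "boundary": sum(1 for n in edges.values() if n == 1),
        "manifold": sum(1 for n in edges.values() if n == 2),
        "non_manifold": sum(1 for n in edges.values() if n > 2),
    }
```

A closed tetrahedron reports six manifold edges and nothing else; a lone triangle reports three boundary edges.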


IFAC Proceedings Volumes | 2010

Towards Real Time Robot 6D Localization in a Polygonal Indoor Map Based on 3D ToF Camera Data

Jan Wülfing; Joachim Hertzberg; Kai Lingemann; Andreas Nüchter; Thomas Wiemann; Stefan Stiene

This paper reports a method and results for solving the following problem: Given a 3D polygonal indoor map and a mobile robot equipped with a 3D time-of-flight (ToF) camera, localize at frame rate the 6D robot pose with respect to the map. To solve the problem, the polygonal map is represented for efficient usage as a solid-leaf BSP tree; at each control cycle, the 6D pose change is estimated a priori from odometry or IMU, the expected ToF camera view at the prior pose is sampled from the BSP tree, and the pose change estimate is corrected a posteriori by fast ICP matching of the expected and the measured ToF images. Our experiments indicate that, first, the method is in fact real-time capable; second, the 6D pose is tracked reliably in a correct map under regular sensor conditions; and third, the tracking can recover from some faults induced by local map inaccuracies and transient or local sensing errors.
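The a-posteriori correction step can be sketched as a toy, translation-only 2D ICP: starting from the predicted pose, repeatedly match each measured point to its nearest expected point and shift the estimate by the mean residual. This is a minimal stand-in for the paper's full 6D ICP against BSP-sampled views; the function name and setup are illustrative.

```python
def icp_translation(expected, measured, iters=20):
    """Toy 2D, translation-only ICP. Estimates the offset (tx, ty) such
    that `measured` is approximately `expected` shifted by (tx, ty), by
    iterating nearest-neighbour matching and averaging the residuals."""
    tx = ty = 0.0
    for _ in range(iters):
        dx_sum = dy_sum = 0.0
        for mx, my in measured:
            px, py = mx - tx, my - ty  # undo the current offset estimate
            ex, ey = min(expected,
                         key=lambda p: (p[0] - px) ** 2 + (p[1] - py) ** 2)
            dx_sum += px - ex
            dy_sum += py - ey
        # shift the estimate by the mean residual of the matches
        tx += dx_sum / len(measured)
        ty += dy_sum / len(measured)
    return tx, ty
```

When the true offset is smaller than half the point spacing, the nearest-neighbour matches are correct from the first iteration and the estimate converges immediately; larger offsets need good initialization, which is exactly what the odometry/IMU prior provides in the paper's pipeline.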


Intelligent Robots and Systems | 2013

Automatic creation and application of texture patterns to 3D polygon maps

Kim Oliver Rinnewitz; Thomas Wiemann; Kai Lingemann; Joachim Hertzberg

Textured polygon meshes are becoming more and more important for robotic applications. In this paper, we present an approach to automatically extract textures from colored 3D point cloud data and apply them to a polygonal reconstruction of the scene. The extracted textures are analyzed for existing patterns, which are reused if several instances appear. The emphasis of this work is on minimizing the number of used pixels while maintaining a realistic impression of the scanned environment.
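The pattern-reuse idea can be sketched as exact-duplicate interning of texture patches: identical patches share one stored copy, and faces keep only an index into the store. The paper's analysis detects recurring patterns more generally; this minimal version, with an invented API, only merges byte-identical patches.

```python
def dedupe_patches(patches):
    """Replace repeated texture patches by references to a single stored
    copy. Each patch is a hashable tuple of pixel values; the return value
    is (store, refs), where refs[i] indexes the stored patch for input i.
    Pixel savings grow with the number of repeated instances."""
    store, index, refs = [], {}, []
    for p in patches:
        if p not in index:
            index[p] = len(store)
            store.append(p)
        refs.append(index[p])
    return store, refs
```

Five input patches drawn from two distinct patterns collapse to a two-entry store, cutting the stored pixels by more than half.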


IAS | 2016

Data Handling in Large-Scale Surface Reconstruction

Thomas Wiemann; Marcel Mrozinski; Dominik Feldschnieders; Kai Lingemann; Joachim Hertzberg

Using high-resolution laser scanners, it is possible to create consistent 3D point clouds of large outdoor environments in a short time. Mobile systems are able to measure whole cities efficiently and collect billions of data points. Such large amounts of data usually cannot be processed on a mobile system. One approach to create a feasible environment representation that can be used on mobile robots is to compute a compact polygonal environment representation. This paper addresses problems and solutions when processing large point clouds for surface reconstruction.
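One plausible way to make billion-point clouds tractable is to partition them into spatial tiles that fit in memory and reconstruct each tile independently. Whether the paper uses exactly this scheme is not stated in the abstract; the sketch below, with an arbitrary tile size and an invented function name, only illustrates the idea.

```python
from collections import defaultdict

def tile_points(points, tile=10.0):
    """Partition a point cloud into square ground-plane tiles keyed by
    (column, row), so each tile can be loaded and reconstructed
    independently (an out-of-core processing sketch)."""
    tiles = defaultdict(list)
    for x, y, z in points:
        tiles[(int(x // tile), int(y // tile))].append((x, y, z))
    return dict(tiles)
```

In a real pipeline the tiles would be streamed from disk and stitched at their borders after reconstruction; here the partitioning is the whole point.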


European Conference on Mobile Robots | 2015

SEMAP - a semantic environment mapping framework

Henning Deeken; Thomas Wiemann; Kai Lingemann; Joachim Hertzberg

This paper presents the SEMAP framework designed to maintain and analyze the spatial data of a multi-modal environment model. SEMAP uses a spatial database at its core to store metric data and link it to semantic descriptions via semantic annotation. Through in-built and custom-made spatial operators of a PostGIS database, we enable the spatial analysis of quantitative metric data, which we then use in the context of semantic mapping. We use SEMAP to query for task-specific sets of spatial and semantic data, to create semantically augmented metric navigation maps, and to extract implicit topological information from geometric data.
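As a rough in-memory analogue of the kind of spatial operator SEMAP delegates to PostGIS (e.g. containment tests that link geometry to semantic labels), a point-in-polygon query over labelled regions might look like this. The region names and the API are invented for illustration; the actual framework issues such queries against a spatial database.

```python
def point_in_polygon(pt, poly):
    """Ray-casting containment test: does pt = (x, y) lie inside the
    polygon given as a list of (x, y) vertices?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def semantic_region(pt, regions):
    """Return the semantic label of the first region containing pt,
    analogous to an ST_Contains query against annotated geometries."""
    for label, poly in regions.items():
        if point_in_polygon(pt, poly):
            return label
    return None
```

A robot pose tested against labelled room polygons yields a semantic location ("kitchen", "hallway"), which is the kind of derived fact a semantic mapping layer can attach to metric data.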

Collaboration


Dive into Thomas Wiemann's collaborations.

Top Co-Authors

Kai Lingemann, University of Osnabrück
Sven Albrecht, University of Osnabrück
Henning Deeken, University of Osnabrück
Stefan Stiene, University of Osnabrück
Andreas Bartel, University of Osnabrück