
Publication


Featured research published by Tim Bodenmüller.


ISRR | 2011

Towards the Robotic Co-Worker

Sami Haddadin; Michael Suppa; Stefan Fuchs; Tim Bodenmüller; Alin Albu-Schäffer; Gerd Hirzinger

Recently, robots have gained capabilities in both sensing and actuation that enable operation in close proximity to humans. Even direct physical interaction has become possible without sacrificing speed and payload. The DLR Lightweight Robot III (LWR-III), whose technology is currently being transferred to the robot manufacturer KUKA Roboter GmbH, is such a device, capable of realizing various features crucial for direct interaction with humans. Impedance control and collision detection with adequate reaction are key components for enabling "soft and safe" robotics. The implementation of a sensor-based robotic co-worker that brings robots closer to humans in industrial settings and achieves close cooperation is an important goal in robotics. Despite being a common vision in robotics, it has not yet become reality, as various open questions remain to be answered. In this paper, a sound concept and a prototype implementation of a co-worker scenario are developed in order to demonstrate that state-of-the-art technology is now mature enough to reach this aspiring aim. We support our ideas by addressing the industrially relevant bin-picking problem with the LWR-III, which is equipped with a Time-of-Flight camera for object recognition and the DLR 3D-Modeller for generating accurate environment models. The paper describes the sophisticated control schemes of the robot in combination with robust computer vision algorithms, which lead to a reliable solution for the addressed problem. Strategies are devised for safe interaction with the human during task execution, state-dependent robot behavior, and the appropriate mechanisms to realize robustness in partially unstructured environments.
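
As a rough illustration of the "collision detection with adequate reaction" component mentioned above, the sketch below flags a collision from estimated external joint torques and switches the robot into a state-dependent reaction mode. The thresholds, state names, and torque input are hypothetical placeholders, not the LWR-III interface.

```python
import numpy as np

# Minimal sketch of a torque-threshold collision reaction in the spirit of the
# "soft and safe" idea above. Thresholds, state names and the simulated external
# torque input are assumptions of this example, not the LWR-III controller.

COLLISION_THRESHOLD_NM = np.array([10.0, 10.0, 8.0, 8.0, 4.0, 4.0, 2.0])  # per joint [Nm]

def collision_detected(tau_ext: np.ndarray) -> bool:
    """Flag a collision when any estimated external joint torque exceeds its threshold."""
    return bool(np.any(np.abs(tau_ext) > COLLISION_THRESHOLD_NM))

def reaction(state: str, tau_ext: np.ndarray) -> str:
    """State-dependent reaction: stop while transferring, float compliantly during handover."""
    if not collision_detected(tau_ext):
        return state
    if state == "transfer":
        return "safe_stop"               # brake and hold position
    if state == "handover":
        return "gravity_compensation"    # torque-controlled, zero stiffness
    return "safe_stop"

if __name__ == "__main__":
    tau = np.array([0.5, 1.2, 0.3, 9.1, 0.2, 0.1, 0.0])  # simulated external torques [Nm]
    print(reaction("transfer", tau))  # -> "safe_stop"
```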


Journal of Real-Time Image Processing | 2015

Efficient next-best-scan planning for autonomous 3D surface reconstruction of unknown objects

Simon Kriegel; Christian Rink; Tim Bodenmüller; Michael Suppa

This work focuses on the autonomous surface reconstruction of small-scale objects with a robot and a 3D sensor. The aim is a high-quality surface model that allows for robotic applications such as grasping and manipulation. Our approach comprises the generation of next-best-scan (NBS) candidates and selection criteria, error minimization between scan patches, and termination criteria. NBS candidates are iteratively determined by boundary detection and surface trend estimation on the acquired model. To account for both fast and high-quality model acquisition, the candidate that maximizes a utility function integrating an exploration and a mesh-quality component is selected as the NBS. The modeling and scan planning methods are evaluated on an industrial robot with a high-precision laser striper system. While a new laser scan is performed, the data are integrated on the fly into both a triangle mesh and a probabilistic voxel space. The efficiency of the system in the fast acquisition of high-quality 3D surface models is demonstrated on various cultural heritage, household, and industrial objects.
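
The candidate-selection step can be pictured as follows: every next-best-scan candidate is scored by a utility that mixes an exploration term with a mesh-quality term, and the maximizer becomes the NBS. This is a minimal sketch under assumed attribute names and an equal 0.5/0.5 weighting; the paper's exact utility function is not reproduced here.

```python
from dataclasses import dataclass

# Illustrative NBS selection by maximizing a utility that combines an exploration
# and a mesh-quality component. The attribute names and the 0.5/0.5 weighting are
# assumptions of this sketch.

@dataclass
class ScanCandidate:
    unknown_voxels_visible: int     # exploration: unknown space the scan would cover
    boundary_length_covered: float  # mesh quality: mesh boundary the scan would close [m]

def utility(c: ScanCandidate, max_unknown: int, max_boundary: float,
            w_explore: float = 0.5, w_quality: float = 0.5) -> float:
    """Normalized, weighted sum of the exploration and mesh-quality components."""
    explore = c.unknown_voxels_visible / max_unknown if max_unknown else 0.0
    quality = c.boundary_length_covered / max_boundary if max_boundary else 0.0
    return w_explore * explore + w_quality * quality

def select_nbs(candidates: list[ScanCandidate]) -> ScanCandidate:
    """Pick the candidate with the highest utility as the next-best scan."""
    max_unknown = max(c.unknown_voxels_visible for c in candidates)
    max_boundary = max(c.boundary_length_covered for c in candidates)
    return max(candidates, key=lambda c: utility(c, max_unknown, max_boundary))

if __name__ == "__main__":
    cands = [ScanCandidate(1200, 0.08), ScanCandidate(400, 0.25), ScanCandidate(900, 0.15)]
    print(select_nbs(cands))
```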


International Conference on Robotics and Automation | 2004

The DLR multisensory Hand-Guided Device: the Laser Stripe Profiler

Klaus H. Strobl; Wolfgang Sepp; Eric Wahl; Tim Bodenmüller; Michael Suppa; Javier F. Seara; Gerd Hirzinger

This paper presents the DLR Laser Stripe Profiler as a component of the DLR multisensory Hand-Guided Device for 3D modeling. After modeling the reconstruction process, we propose a novel method for laser-plane self-calibration based on assessing the deformations that a miscalibration produces. In addition, the required absence of optical filtering necessitates the development of a robust stripe segmentation algorithm. Experiments demonstrate the validity and applicability of both approaches.
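
The reconstruction process of a laser stripe profiler amounts to intersecting the viewing ray of each detected stripe pixel with the calibrated laser plane. The sketch below shows this triangulation for an idealized pinhole camera; the intrinsics and plane parameters are placeholder values, and the self-calibration method itself is not reproduced.

```python
import numpy as np

# Sketch of laser-stripe triangulation: a detected stripe pixel defines a viewing
# ray through the pinhole camera; intersecting that ray with the calibrated laser
# plane n.X = d yields a 3D point. K and the plane parameters are placeholders,
# not the calibration of the DLR device.

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed pinhole intrinsics
plane_n = np.array([0.0, 0.7071, 0.7071])  # laser plane normal in the camera frame
plane_d = 0.35                             # plane offset [m]

def triangulate_stripe_pixel(u: float, v: float) -> np.ndarray:
    """Back-project pixel (u, v) and intersect the ray with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction of the viewing ray
    t = plane_d / (plane_n @ ray)                   # ray parameter at the plane
    return t * ray                                  # 3D point in the camera frame

if __name__ == "__main__":
    print(triangulate_stripe_pixel(350.0, 260.0))
```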


International Conference on Robotics and Automation | 2011

A surface-based Next-Best-View approach for automated 3D model completion of unknown objects

Simon Kriegel; Tim Bodenmüller; Michael Suppa; Gerd Hirzinger

Manually generating a 3D model of an object is a very time-consuming procedure for a human operator. Next-best-view (NBV) planning is an important step towards automating this procedure in a robotic environment. We propose a surface-based NBV approach that creates a triangle surface from a real-time data stream and determines viewpoints in a manner similar to human intuition. Boundaries in the surface are detected and a quadratic patch is estimated for each boundary. Several viewpoint candidates are then calculated that look perpendicularly onto the surface and overlap with previous sensor data. An NBV is selected with the goal of filling occluded areas. The approach focuses on completing the 3D model of an unknown object, and the search space for viewpoints is not restricted to a cylinder or a sphere. Our NBV determination proves to be very fast and is evaluated in experiments on test objects using an industrial robot and a laser range scanner.
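
As a sketch of the viewpoint-candidate generation described above, the following snippet places a sensor pose at a standoff distance along the estimated surface normal of a boundary point, looking perpendicularly back onto the surface. The standoff distance and the normal estimate are assumptions of this example.

```python
import numpy as np

# Illustrative generation of one NBV candidate from a detected mesh boundary:
# back off along the estimated surface normal and look perpendicularly onto the
# surface. The standoff distance and the normal input are assumptions.

def viewpoint_from_boundary(boundary_point: np.ndarray,
                            surface_normal: np.ndarray,
                            standoff: float = 0.5) -> tuple[np.ndarray, np.ndarray]:
    """Return (sensor position, view direction) for one viewpoint candidate."""
    n = surface_normal / np.linalg.norm(surface_normal)
    position = boundary_point + standoff * n  # back off along the normal
    view_dir = -n                             # look perpendicularly onto the surface
    return position, view_dir

if __name__ == "__main__":
    p = np.array([0.2, 0.0, 0.8])  # boundary point on the partial model [m]
    n = np.array([0.0, 0.0, 1.0])  # surface trend normal estimated from a quadric patch
    print(viewpoint_from_boundary(p, n))
```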


Intelligent Robots and Systems | 2013

Combining object modeling and recognition for active scene exploration

Simon Kriegel; Manuel Brucker; Zoltan-Csaba Marton; Tim Bodenmüller; Michael Suppa

Active scene exploration incorporates object recognition methods for analyzing a scene of partially known objects and exploration approaches for autonomously modeling the unknown parts. In this work, recognition, exploration, and planning methods are extended and combined in a single scene exploration system, enabling advanced techniques such as multi-view recognition from planned view positions and iterative recognition by integrating new objects from a scene. A geometry-based approach is used for recognition, i.e., matching objects against a database. Unknown objects are autonomously modeled and added to the recognition database. Next-best-view planning is performed both for recognition and for modeling. Moreover, 3D measurements are merged into a probabilistic voxel space, which is utilized for planning collision-free paths and minimal-occlusion views, and for verifying the poses of the recognized objects against all previous information. Experiments with scenes of household and industrial objects are shown on an industrial robot with attached 3D sensors.
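
The probabilistic voxel space mentioned above can be illustrated with a standard log-odds occupancy update, sketched below; the grid resolution and the hit/miss probabilities are illustrative assumptions rather than the system's actual parameters.

```python
import numpy as np

# Minimal log-odds occupancy update for a probabilistic voxel space of the kind
# used for collision-free path planning and pose verification. Grid size and the
# hit/miss probabilities are assumptions of this sketch.

L_HIT, L_MISS = np.log(0.7 / 0.3), np.log(0.4 / 0.6)  # log-odds increments

class VoxelSpace:
    def __init__(self, shape=(64, 64, 64)):
        self.log_odds = np.zeros(shape)  # 0.0 == unknown (p = 0.5)

    def update(self, idx: tuple, hit: bool) -> None:
        """Integrate one measurement for voxel idx (hit = surface, miss = free space)."""
        self.log_odds[idx] += L_HIT if hit else L_MISS

    def occupancy(self, idx: tuple) -> float:
        """Convert log-odds back to an occupancy probability."""
        return 1.0 / (1.0 + np.exp(-self.log_odds[idx]))

if __name__ == "__main__":
    vs = VoxelSpace()
    for _ in range(3):
        vs.update((10, 10, 10), hit=True)
    print(round(vs.occupancy((10, 10, 10)), 3))
```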


Intelligent Robots and Systems | 2009

The self-referenced DLR 3D-modeler

Klaus H. Strobl; Elmar Mair; Tim Bodenmüller; Simon Kielhöfer; Wolfgang Sepp; Michael Suppa; Darius Burschka; Gerd Hirzinger

In the context of 3-D scene modeling, this work aims at accurately estimating the pose of a close-range 3-D modeling device, in real time and passively from its own images. This novel development makes it possible to dispense with inconvenient, expensive external positioning systems. The approach comprises an ego-motion algorithm that tracks natural, distinctive features concurrently with the customary 3-D modeling of the scene. The use of stereo vision, an inertial measurement unit, and robust cost functions for pose estimation further increases performance. Demonstrations and abundant video material validate the approach.
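
The abstract mentions robust cost functions for pose estimation without naming one; as a hedged illustration, the sketch below down-weights outlier reprojection residuals with a Huber kernel, a common choice that stands in for whatever robust function the authors actually use.

```python
import numpy as np

# Sketch of robust pose-refinement weighting: reprojection residuals pass through
# a Huber kernel so outlier feature tracks contribute less to the pose update.
# The Huber kernel and its threshold are stand-ins chosen for this example.

def huber_weight(residual_norm: float, delta: float = 1.0) -> float:
    """IRLS weight of the Huber loss: 1 inside the inlier band, delta/|r| outside."""
    return 1.0 if residual_norm <= delta else delta / residual_norm

def weighted_residuals(residuals: np.ndarray, delta: float = 1.0) -> np.ndarray:
    """Apply Huber weights row-wise to 2D reprojection residuals (one row per feature)."""
    norms = np.linalg.norm(residuals, axis=1)
    w = np.array([huber_weight(n, delta) for n in norms])
    return residuals * w[:, None]

if __name__ == "__main__":
    r = np.array([[0.2, -0.1], [5.0, 4.0], [0.3, 0.2]])  # pixels; the second row is an outlier
    print(weighted_residuals(r))
```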


Intelligent Robots and Systems | 2012

Next-best-scan planning for autonomous 3D modeling

Simon Kriegel; Christian Rink; Tim Bodenmüller; Alexander Narr; Michael Suppa; Gerhard Hirzinger

We present a next-best-scan (NBS) planning approach for autonomous 3D modeling. The system successively completes a 3D model of complex-shaped objects by iteratively selecting an NBS based on previously acquired data. For this purpose, new range data are accumulated in the loop into a 3D surface (streaming reconstruction), and new continuous scan paths along the estimated surface trend are generated. Furthermore, the space around the object is explored using a probabilistic exploration approach that considers sensor uncertainty, which allows for collision-free path planning in order to completely scan unknown objects. For each scan path, the expected information gain is determined, and the best path is selected as the NBS. The presented NBS approach is tested with a laser striper system attached to an industrial robot, and the results are compared to state-of-the-art next-best-view methods. Our results show promising performance with respect to completeness, quality, and scan time.
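
One way to picture the expected-information-gain criterion is to sum the occupancy entropy of the voxels a candidate scan path would observe and pick the path with the highest sum. The sketch below uses this common simplification; it is not necessarily the paper's exact gain measure.

```python
import numpy as np

# Illustrative scoring of candidate scan paths by expected information gain,
# approximated here as the summed binary entropy of the occupancy probabilities
# of the voxels each path would observe. This simplification is an assumption.

def entropy(p: np.ndarray) -> np.ndarray:
    """Binary entropy of occupancy probabilities, clipped for numerical safety."""
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def expected_gain(observed_voxel_probs: np.ndarray) -> float:
    """Expected information gain of one scan path over the voxels it would see."""
    return float(np.sum(entropy(observed_voxel_probs)))

def select_next_best_scan(paths: dict[str, np.ndarray]) -> str:
    """Return the name of the path with the highest expected gain."""
    return max(paths, key=lambda name: expected_gain(paths[name]))

if __name__ == "__main__":
    paths = {
        "path_a": np.array([0.5, 0.5, 0.9, 0.1]),  # mostly unknown voxels -> high gain
        "path_b": np.array([0.95, 0.97, 0.02]),    # already well observed -> low gain
    }
    print(select_next_best_scan(paths))  # -> "path_a"
```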


International Conference on Autonomous Robot Systems and Competitions (ICARSC) | 2016

The LRU Rover for Autonomous Planetary Exploration and Its Success in the SpaceBotCamp Challenge

Martin J. Schuster; Christoph Brand; Sebastian G. Brunner; Peter Lehner; Josef Reill; Sebastian Riedel; Tim Bodenmüller; Kristin Bussmann; Stefan Büttner; Andreas Dömel; Werner Friedl; Iris Lynne Grixa; Matthias Hellerer; Heiko Hirschmüller; Michael Kassecker; Zoltan-Csaba Marton; Christian Nissler; Felix Ruess; Michael Suppa; Armin Wedler

The task of planetary exploration poses many challenges for a robot system, from weight and size constraints to sensors and actuators suitable for extraterrestrial environmental conditions. In this work, we present the Light Weight Rover Unit (LRU), a small and agile rover prototype that we designed for the challenges of planetary exploration. Its locomotion system with individually steered wheels allows for high maneuverability in rough terrain, and the use of stereo cameras as its main sensor ensures applicability to space missions. We implemented software components for self-localization in GPS-denied environments, environment mapping, object search and localization, and the autonomous pickup and assembly of objects with its arm. Additional high-level mission-control components facilitate both autonomous behavior and remote monitoring of the system state over a delayed communication link. We successfully demonstrated the autonomous capabilities of our LRU at the SpaceBotCamp challenge, a national robotics contest focused on autonomous planetary exploration, in which a robot had to autonomously explore a moon-like rough-terrain environment, locate and collect two objects, and assemble them after transport to a third object - which the LRU did on its first attempt, in half of the time, and fully autonomously.
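
The high-level mission-control component can be imagined as a small state machine over the SpaceBotCamp task steps; the states and transitions below are inferred from the task description in the abstract, not taken from the actual LRU software.

```python
# Hedged sketch of a high-level mission state machine for a SpaceBotCamp-style
# task (explore, collect two objects, assemble at a third). The steps and the
# retry-on-failure policy are assumptions of this example.

MISSION_STEPS = ["explore", "locate_objects", "pick_up", "transport", "assemble", "done"]

def next_step(current: str, succeeded: bool) -> str:
    """Advance to the next mission step on success, otherwise retry the current one."""
    if not succeeded:
        return current
    i = MISSION_STEPS.index(current)
    return MISSION_STEPS[min(i + 1, len(MISSION_STEPS) - 1)]

if __name__ == "__main__":
    step = "explore"
    for ok in [True, True, False, True, True, True]:
        step = next_step(step, ok)
    print(step)  # -> "done"
```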


International Conference on Robotics and Automation | 2012

Sequential scene parsing using range and intensity information

Manuel Brucker; Simon Leonard; Tim Bodenmüller; Gregory D. Hager

This paper describes an extension of the sequential scene analysis system presented by Hager and Wegbreit [12]. In contrast to the original system, which was limited to scenes consisting of geometric primitives such as spheres, cuboids, and cylinders computed from range data, the extended system can handle arbitrarily shaped objects computed from range and intensity images. An object model composed of triangulated geometry and intensity-based SURF features is introduced, and the integration of prior object models into the sequential scene parsing framework is described. The extended system is evaluated with respect to pose estimation and its ability to handle complex scene sequences. It is shown that the new object models enable accurate pose estimation and reliable recognition even in highly cluttered scenes.
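
A hedged sketch of model-based pose estimation in the spirit of this system: intensity features of the stored object model are matched against the current image and the 6-DoF pose is recovered from the 2D-3D correspondences with PnP and RANSAC. ORB replaces the paper's SURF features so the sketch runs on a stock OpenCV build, and the camera matrix is a placeholder.

```python
import cv2
import numpy as np

# Sketch of feature-based object pose estimation: match descriptors of a stored
# object model against the current intensity image, then solve PnP with RANSAC on
# the resulting 2D-3D correspondences. ORB stands in for SURF; K is a placeholder.

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

def estimate_object_pose(model_descriptors: np.ndarray,
                         model_points_3d: np.ndarray,
                         image: np.ndarray):
    """Return (rvec, tvec) of the object in camera coordinates, or None on failure."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(model_descriptors, descriptors)
    if len(matches) < 6:
        return None
    obj_pts = np.float32([model_points_3d[m.queryIdx] for m in matches])
    img_pts = np.float32([keypoints[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None
```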


Künstliche Intelligenz | 2010

Real-time Image-based Localization for Hand-held 3D-modeling

Elmar Mair; Klaus H. Strobl; Tim Bodenmüller; Michael Suppa; Darius Burschka

We present a self-referencing hand-held scanning device for vision-based close-range 3D modeling. Our approach replaces external global tracking devices with ego-motion estimation performed directly on the camera used for reconstruction. The system is capable of online estimation of the 6-DoF pose on hand-held devices with high motion dynamics, especially in the rotational components. Inertial information directly supports the tracking process, allowing for robust tracking and feature management in highly dynamic environments. We introduce a weighting function for the landmarks that contribute to the pose estimation, which increases the accuracy of the localization and filters outliers in the tracking process. We validate our approach with experimental results showing the robustness and accuracy of the algorithm, and we compare the results to the external global referencing solutions used in current modeling systems.
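
The inertial support of the tracking process can be illustrated by predicting where a feature should reappear after the rotation integrated from the gyroscope, which narrows the tracker's search window. The camera matrix, the small-angle integration, and the rotation convention are assumptions of this sketch.

```python
import numpy as np

# Sketch of gyro-aided feature tracking: the rotation integrated from the gyro
# between two frames predicts each feature's new pixel position via the
# rotation-only homography K R K^-1. K and the small-angle integration are
# assumptions; R is assumed to map rays from the previous into the current frame.

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def rotation_from_gyro(omega: np.ndarray, dt: float) -> np.ndarray:
    """Small-angle rotation matrix from a gyro rate omega [rad/s] over dt seconds."""
    wx, wy, wz = omega * dt
    return np.eye(3) + np.array([[0.0, -wz, wy],
                                 [wz, 0.0, -wx],
                                 [-wy, wx, 0.0]])

def predict_feature(u: float, v: float, R: np.ndarray) -> tuple[float, float]:
    """Predict the pixel position of a feature after the camera rotation R."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p = K @ (R @ ray)
    return p[0] / p[2], p[1] / p[2]

if __name__ == "__main__":
    R = rotation_from_gyro(np.array([0.0, 0.2, 0.0]), dt=0.05)  # 0.2 rad/s yaw rate
    print(predict_feature(320.0, 240.0, R))
```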

Collaboration


Dive into Tim Bodenmüller's collaborations.

Top Co-Authors

Franz Hacker

German Aerospace Center

Ulrich Hagn

German Aerospace Center
