
Publications


Featured research published by René Wagner.


Intelligent Robots and Systems | 2011

Rapid development of manifold-based graph optimization systems for multi-sensor calibration and SLAM

René Wagner; Oliver Birbach; Udo Frese

Non-linear optimization on constraint graphs has recently been applied very successfully in a variety of SLAM backends. We combine this technique with a principled way of handling non-Euclidean spaces, 3D orientations in particular, based on manifolds to build a generic and very flexible framework, the Manifold Toolkit for Matlab (MTKM). We show that MTKM makes it particularly easy to solve non-trivial multi-sensor calibration problems while remaining generic enough to handle a very different class of problems, namely SLAM, as well: After an introductory example on single camera calibration we apply MTKM to calibration of stereo vision and IMU w.r.t. the kinematic chain of a service robot, RGB-D and accelerometer calibration of a Microsoft Kinect, stereo calibration on a Nao soccer robot, and several SLAM benchmark data sets illustrating MTKM's versatility. MTKM and all presented examples are available as open source from http://openslam.org/MTK.html.
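The manifold idea at the heart of MTKM can be illustrated with a minimal boxplus/boxminus sketch for 3D orientations: a rotation is perturbed by a small vector in its local tangent space rather than by naive addition of matrix entries. This is an independent illustration using SciPy's rotation utilities, not MTKM's actual Matlab API:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def boxplus(R, delta):
    """Perturb a rotation matrix R by a small 3-vector delta in its tangent space."""
    return R @ Rotation.from_rotvec(delta).as_matrix()

def boxminus(R1, R2):
    """Difference of two rotations, expressed as a 3-vector in the tangent space of R2."""
    return Rotation.from_matrix(R2.T @ R1).as_rotvec()

# round trip: boxminus recovers exactly the perturbation applied by boxplus,
# and the result stays a valid rotation (orthonormal), unlike naive addition
R = Rotation.from_euler("xyz", [0.3, -0.2, 0.1]).as_matrix()
d = np.array([0.01, -0.02, 0.005])
R2 = boxplus(R, d)
```

A least-squares solver built on these two operators can treat the orientation as a flat 3-DoF variable while never leaving the rotation group, which is what makes such a framework generic across Euclidean and non-Euclidean state spaces.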


International Conference on Robotics and Automation | 2011

Experiences in building a visual SLAM system from open source components

Christoph Hertzberg; René Wagner; Oliver Birbach; Tobias Hammer; Udo Frese

This paper shows that the field of visual SLAM has matured enough to build a visual SLAM system from open source components. The system consists of feature detection, data association, and sparse bundle adjustment. For all three modules we evaluate different libraries w.r.t. ground truth.
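The data-association module in such a pipeline is typically nearest-neighbour descriptor matching with a ratio test. A minimal numpy sketch on synthetic descriptors (not any of the specific libraries evaluated in the paper, whose choices are not listed in this abstract):

```python
import numpy as np

def match_descriptors(des1, des2, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test (the data-association step)."""
    matches = []
    for i, d in enumerate(des1):
        dists = np.linalg.norm(des2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:     # accept only unambiguous matches
            matches.append((i, j1))
    return matches

# synthetic test: des2 is a shuffled, slightly noisy copy of des1
rng = np.random.default_rng(0)
des1 = rng.normal(size=(20, 32))              # 20 synthetic 32-D descriptors
perm = rng.permutation(20)
des2 = des1[perm] + 0.01 * rng.normal(size=(20, 32))
matches = match_descriptors(des1, des2)
```

The ratio test rejects matches whose best and second-best candidates are similar in distance, which is what keeps outliers out of the subsequent bundle-adjustment stage.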


International Conference on Robotics and Automation | 2013

3D modeling, distance and gradient computation for motion planning: A direct GPGPU approach

René Wagner; Udo Frese; Berthold Bäuml

The Kinect sensor and KinectFusion algorithm have revolutionized environment modeling. We bring these advances to optimization-based motion planning by computing the obstacle and self-collision avoidance objective functions and their gradients directly from the KinectFusion model on the GPU without ever transferring any model to the CPU. Based on this, we implement a proof-of-concept motion planner which we validate in an experiment with a 19-DOF humanoid robot using real data from a tabletop work space. The summed-up time from taking the first look at the scene until the planned path avoiding an obstacle on the table is executed is only three seconds.
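The obstacle-avoidance objective described here can be sketched on the CPU with a toy distance field: look up the distance and its finite-difference gradient at a query point and penalize points closer than a safety margin. This is only a conceptual sketch with a hand-built sphere SDF and nearest-voxel lookup, standing in for the paper's GPU-resident KinectFusion model and interpolated lookups:

```python
import numpy as np

# a toy signed distance field on a 3-D grid (distance to a sphere obstacle)
n, res = 32, 0.05                      # 32^3 voxels, 5 cm per voxel
xs = (np.arange(n) + 0.5) * res
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
center, radius = np.array([0.8, 0.8, 0.8]), 0.3
sdf = np.sqrt((X - center[0])**2 + (Y - center[1])**2 + (Z - center[2])**2) - radius

gx, gy, gz = np.gradient(sdf, res)     # finite-difference gradient of the field

def cost_and_gradient(p, margin=0.1):
    """Obstacle-avoidance cost 0.5*(margin-d)^2 and its gradient at point p."""
    i, j, k = np.clip((p / res).astype(int), 0, n - 1)
    d = sdf[i, j, k]
    if d >= margin:                    # far enough from the obstacle: no cost
        return 0.0, np.zeros(3)
    g = np.array([gx[i, j, k], gy[i, j, k], gz[i, j, k]])
    return 0.5 * (margin - d)**2, -(margin - d) * g
```

An optimization-based planner evaluates exactly this kind of cost and gradient for every robot link at every trajectory point, which is why keeping the model and these lookups on the GPU avoids a large data-transfer bottleneck.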


Intelligent Robots and Systems | 2013

Real-time dense multi-scale workspace modeling on a humanoid robot

René Wagner; Udo Frese; Berthold Bäuml

Without a precise and up-to-date model of its environment a humanoid robot cannot move safely or act usefully. Ideally, the robot should create a dense 3D environment model in real-time, all the time, and respect obstacle information from it in every move it makes as well as obtain the information it needs for fine manipulation with its fingers from the same map. We propose to use a multi-scale truncated signed distance function (TSDF) map consisting of concentric, nested cubes with exponentially decreasing resolution for this purpose. We show how to extend the KinectFusion real-time SLAM algorithm to the multi-scale case as well as how to compute a multi-scale Euclidean distance transform (EDT), thereby establishing the link to optimization-based planning. We overcome the inability of KinectFusion's localization to handle scenes without enough constraining geometry by switching to mapping-with-known-poses based on forward kinematics. The latter is always available and we know when it is precise. The resulting map has the desired properties: It is computed in real-time (7.5 ms per depth frame for an (8 m)³ multi-scale TSDF volume), covers the entire laboratory, does not depend on scene properties (geometry, texture, etc.) and is precise enough to facilitate grasp planning for fine manipulation tasks - all in a single map.
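The nested-cubes scheme can be sketched in a few lines: each level covers a cube twice the side length of the previous one with the same voxel count, so voxel size doubles per level. The parameter values below (1 m finest cube, four levels, matching the (8 m)³ coarsest volume mentioned above) are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def finest_level(p, base_side=1.0, levels=4):
    """Index of the finest nested cube (0 = highest resolution) containing point p.

    Level l is a cube of side base_side * 2**l centred on the robot; with
    levels=4 and base_side=1.0 the coarsest cube is (8 m)^3."""
    half = base_side / 2.0
    r = np.max(np.abs(p))              # Chebyshev distance from the centre
    for l in range(levels):
        if r <= half * 2**l:
            return l
    return None                        # outside the coarsest cube

def voxel_size(level, base_side=1.0, voxels_per_side=256):
    """Every level stores the same voxel count, so resolution halves per level."""
    return base_side * 2**level / voxels_per_side
```

This gives fine geometry near the robot's hands (where grasp planning needs it) and coarse but complete coverage of the whole room in a single map structure.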


Künstliche Intelligenz | 2010

A SLAM Overview from a User’s Perspective

Udo Frese; René Wagner; Thomas Röfer

This paper gives a brief overview on the Simultaneous Localization and Mapping (SLAM) problem from the perspective of using SLAM for an application, as opposed to the common view in SLAM research papers that focus on investigating SLAM itself. We discuss different ways of using SLAM with increasing difficulty: for creating a map prior to operation, as a black-box localization system, and for providing a growing online map during operation. We also discuss the common variants of SLAM based on 2-D evidence grids, 2-D pose graphs, 2-D features, 3-D visual features, and 3-D pose graphs together with their pros and cons for applications. We point to implementations available on the Internet and give advice on which approach suits which application from our experience.


International Conference on Robotics and Automation | 2014

Agile Justin: An upgraded member of DLR's family of lightweight and torque controlled humanoids

Berthold Bäuml; Tobias Hammer; René Wagner; Oliver Birbach; T. Gumpert; F. Zhi; U. Hillenbrand; S. Beer; W. Friedl; J. Butterfass

This video presents the recent upgrades of the mobile humanoid Agile Justin, bringing it closer to an ideal platform for research in autonomous manipulation. Significant upgrades have been made in the fields of mechatronics, 3D sensors, tactile skin, massive GPGPU based computing power, and software communication framework. In addition, first algorithms and two experimental scenarios are presented that take advantage of these new capabilities.


Intelligent Robots and Systems | 2014

Graph SLAM with signed distance function maps on a humanoid robot

René Wagner; Udo Frese; Berthold Bäuml

For such common tasks as motion planning or object recognition robots need to perceive their environment and create a dense 3D map of it. A recent breakthrough in this area was the KinectFusion algorithm [16], which relies on step-by-step matching of each depth image to the map via ICP to recover the sensor pose and on updating the map based on that pose. In doing so, it ignores techniques developed in the graph-SLAM area such as fusion with odometry, modeling of uncertainty and distributing an observed inconsistency over the map. This paper presents a method to integrate a dense geometric truncated signed distance function (TSDF) representation as KinectFusion uses with a sparse parametric representation as common in graph SLAM. The key idea is to have local TSDF sub-maps attached to reference nodes in the SLAM graph and derive graph-SLAM links via ICP by matching a map to a depth image. By moving these reference nodes according to the graph-SLAM estimate, the overall map can be deformed without touching individual sub-maps, so that re-building of sub-maps is only needed in case of significant deformation within a sub-map. Also, further information can be added to the graph as common in graph SLAM. Examples are odometry or the fact that the ground is roughly but not exactly planar. Additionally, the paper proposes a modification of the KinectFusion algorithm to improve handling of long range data by taking the range-dependent uncertainty into account.
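The key idea of deforming the map by moving reference nodes can be shown in a few lines: each sub-map keeps its voxels in a local frame, and only a reference pose places it in the world. This is a minimal conceptual sketch with points standing in for TSDF voxels, not the paper's implementation:

```python
import numpy as np

def transform(pose, pts):
    """Apply a 4x4 rigid-body pose to an (N, 3) array of points."""
    return pts @ pose[:3, :3].T + pose[:3, 3]

# a TSDF sub-map stores its voxels in the local frame of a graph node
submap_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # local voxel centres
pose = np.eye(4)
pose[:3, 3] = [5.0, 0.0, 0.0]
before = transform(pose, submap_pts)

# graph-SLAM re-estimates the reference node; the sub-map data is untouched,
# yet every voxel moves in the world frame accordingly
pose[:3, 3] = [5.2, 0.1, 0.0]
after = transform(pose, submap_pts)
```

Because the expensive volumetric data never changes, correcting a loop-closure inconsistency costs only a pose update per sub-map rather than a re-integration of all depth images.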


IEEE-RAS International Conference on Humanoid Robots | 2016

Unified treatment of sparse and dense data in graph-based least squares

René Wagner; Udo Frese; Berthold Bäuml

In this paper, we present a novel method of incorporating dense (e.g., depth, RGB-D) data in a general purpose least-squares graph optimization framework. Rather than employing a loosely coupled, layered design where dense data is first used to estimate a compact SE(3) transform which then forms a link in the optimization graph as in previous approaches [28, 10, 26], we use a tightly coupled approach that jointly optimizes over each individual (i.e. per-pixel) dense measurement (on the GPU) and all other traditional sparse measurements (on the CPU). Concretely, we use Kinect depth data and KinectFusion-style point-to-plane ICP measurements. In particular, this allows our approach to handle cases where neither dense nor sparse measurements separately define all degrees of freedom (DoF) while taken together they complement each other and yield the overall maximum likelihood solution. Nowadays it is common practice to flexibly model various sensors, measurements and to-be-estimated variables in least-squares frameworks. Our intention is to extend this flexibility to applications with dense data. Computationally, the key is to combine the many dense measurements on the GPU efficiently and communicate only the results to the sparse framework on the CPU in a way that is mathematically equivalent to the full least-squares system. This results in under 20 ms for a full optimization run. We evaluate our approach on a humanoid robot, where in a first experiment we fuse Kinect data and odometry in a laboratory setting, and in a second experiment we fuse with an unusual “sensor”: using the embodiedness of the robot we estimate elasticities in the kinematic chain modeled as unknown, time-varying joint offsets while it moves its arms in front of a tabletop manipulation workspace. In both experiments only tightly coupled optimization will localize the robot correctly.
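The reduction that makes this tractable can be verified in a few lines: summing each per-pixel term's contribution to the normal equations (H = ΣJᵢᵀJᵢ, b = ΣJᵢᵀrᵢ) and passing only H and b to the sparse solver is algebraically identical to stacking every pixel into one big least-squares system. A numpy sketch with random Jacobians standing in for the GPU-side ICP terms (the prior term weight 10 is an arbitrary stand-in for the sparse measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
n_px, dof = 1000, 6                      # many per-pixel residuals, 6-DoF pose
J = rng.normal(size=(n_px, dof))         # per-pixel Jacobians (one row each)
r = rng.normal(size=n_px)                # per-pixel residuals

# reduction of the dense block: only H (6x6) and b (6,) leave the "GPU" side
H = J.T @ J
b = J.T @ r

# the sparse side adds its own terms and solves the combined normal equations
H_sparse = np.eye(dof) * 10.0
b_sparse = np.zeros(dof)
delta = np.linalg.solve(H + H_sparse, -(b + b_sparse))

# equivalent to solving the full stacked least-squares system over all pixels
J_full = np.vstack([J, np.sqrt(10.0) * np.eye(dof)])
r_full = np.concatenate([r, np.zeros(dof)])
delta_full = np.linalg.lstsq(J_full, -r_full, rcond=None)[0]
```

Only 36 + 6 numbers cross the GPU/CPU boundary per dense block instead of thousands of per-pixel rows, which is what keeps a full optimization run fast.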


Information Fusion | 2013

Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds

Christoph Hertzberg; René Wagner; Udo Frese; Lutz Schröder


International Conference on Spatial Cognition | 2012

Tutorial on quick and easy model fitting using the SLoM framework

Christoph Hertzberg; René Wagner; Udo Frese

Collaboration


Dive into René Wagner's collaborations.

Top Co-Authors


Lutz Schröder

University of Erlangen-Nuremberg
