
Publications


Featured research published by Joern Rehder.


The International Journal of Robotics Research | 2016

The EuRoC micro aerial vehicle datasets

Michael Burri; Janosch Nikolic; Pascal Gohl; Thomas Schneider; Joern Rehder; Sammy Omari; Markus W. Achtelik; Roland Siegwart

This paper presents visual-inertial datasets collected on-board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. The datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations and datasets for custom calibrations.
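
As a brief illustration of how such data can be consumed, the sketch below reads IMU measurements from one of the datasets. The folder layout (mav0/imu0/data.csv), the nanosecond timestamps, and the gyroscope/accelerometer column order are assumptions based on the published ASL dataset format, not part of this abstract.

# Minimal sketch: reading EuRoC IMU data, assuming the ASL CSV layout
# (mav0/imu0/data.csv: timestamp [ns], gyro x/y/z [rad/s], accel x/y/z [m/s^2]).
import csv

def load_imu(path="mav0/imu0/data.csv"):
    """Yield (t [s], gyro, accel) tuples from a EuRoC-style IMU file."""
    with open(path) as f:
        for row in csv.reader(f):
            if row[0].startswith("#"):           # skip the header comment line
                continue
            t = int(row[0]) * 1e-9               # nanoseconds to seconds
            gyro = tuple(map(float, row[1:4]))   # angular velocity [rad/s]
            accel = tuple(map(float, row[4:7]))  # specific force [m/s^2]
            yield t, gyro, accel

# Example usage (requires a downloaded dataset):
# for t, w, a in load_imu(): ...  # feed into a visual-inertial estimator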


Intelligent Robots and Systems | 2013

Unified temporal and spatial calibration for multi-sensor systems

Paul Timothy Furgale; Joern Rehder; Roland Siegwart

In order to increase accuracy and robustness in state estimation for robotics, a growing number of applications rely on data from multiple complementary sensors. For the best performance in sensor fusion, these different sensors must be spatially and temporally registered with respect to each other. To this end, a number of approaches have been developed to estimate these system parameters in a two-stage process, first estimating the time offset and subsequently solving for the spatial transformation between sensors. In this work, we present a novel framework for jointly estimating the temporal offset between measurements of different sensors and their spatial displacements with respect to each other. The approach is enabled by continuous-time batch estimation and extends previous work by seamlessly incorporating time offsets within the rigorous theoretical framework of maximum likelihood estimation. Experimental results for a camera to inertial measurement unit (IMU) calibration prove the ability of this framework to accurately estimate time offsets up to a fraction of the smallest measurement period.
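
A compact way to write the joint estimation problem (our notation, a sketch of the idea rather than the paper's exact formulation):

\[
\hat{\mathbf{T}},\, \hat{d},\, \hat{\boldsymbol{\theta}}
  \;=\; \arg\min_{\mathbf{T},\, d,\, \boldsymbol{\theta}}
  \sum_{k} \mathbf{e}_k^{\top} \mathbf{R}_k^{-1} \mathbf{e}_k,
\qquad
\mathbf{e}_k \;=\; \mathbf{y}_k - h\!\big(\mathbf{T}\, \mathbf{x}(t_k + d;\, \boldsymbol{\theta})\big),
\]

where \(\mathbf{x}(t;\boldsymbol{\theta})\) is a continuous-time trajectory (e.g., a B-spline with coefficients \(\boldsymbol{\theta}\)), \(\mathbf{T}\) the unknown spatial transformation between the sensors, \(d\) the temporal offset, and \(\mathbf{R}_k\) the measurement covariance. Because the trajectory is differentiable in \(t\), the offset \(d\) can be estimated jointly with all other parameters rather than in a separate first stage.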


IEEE Aerospace Conference | 2013

A UAV system for inspection of industrial facilities

Janosch Nikolic; Michael Burri; Joern Rehder; Stefan Leutenegger; Christoph Huerzeler; Roland Siegwart

This work presents a small-scale Unmanned Aerial System (UAS) capable of performing inspection tasks in enclosed industrial environments. Vehicles with such capabilities have the potential to reduce human involvement in hazardous tasks and can minimize facility outage periods. The results presented generalize to UAS exploration tasks in almost any GPS-denied indoor environment. The contribution of this work is twofold. First, results from autonomous flights inside an industrial boiler of a power plant are presented. A lightweight, vision-aided inertial navigation system provides reliable state estimates under difficult environmental conditions typical for such sites. It relies solely on measurements from an on-board MEMS inertial measurement unit and a pair of cameras arranged in a classical stereo configuration. A model-predictive controller allows for efficient trajectory following and enables flight in close proximity to the boiler surface. As a second contribution, we highlight ongoing developments by displaying state estimation and structure recovery results acquired with an integrated visual/inertial sensor that will be employed on future aerial service robotic platforms. A tight integration in hardware facilitates spatial and temporal calibration of the different sensors and thus enables more accurate and robust ego-motion estimates. Comparison with ground truth obtained from a laser tracker shows that such a sensor can provide motion estimates with drift rates of only a few centimeters over the period of a typical flight.


Intelligent Robots and Systems | 2011

Perception for a river mapping robot

Andrew Chambers; Supreeth Achar; Stephen Nuske; Joern Rehder; Bernd Kitt; Lyle Chamberlain; Justin Haines; Sebastian Scherer; Sanjiv Singh

Rivers with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores with the specific intent of determining river width and canopy height. A complication in such riverine environments is that only intermittent GPS may be available depending on the thickness of the surrounding canopy. We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free operation, and create a three-dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results along a 2 km loop of river using a surrogate system.


International Conference on Robotics and Automation | 2012

Global pose estimation with limited GPS and long range visual odometry

Joern Rehder; Kamal Gupta; Stephen Nuske; Sanjiv Singh

Here we present an approach to estimate the global pose of a vehicle in the face of two distinct problems: first, when using stereo visual odometry for relative motion estimation, a lack of features at close range causes a bias in the motion estimate; second, localizing in the global coordinate frame using very infrequent GPS measurements. To solve these problems, we demonstrate a method to estimate and correct for the bias in visual odometry and a sensor fusion algorithm capable of exploiting sparse global measurements. Our graph-based state estimation framework is capable of inferring global orientation using a unified representation of local and global measurements and recovers from inaccurate initial estimates of the state, as intermittently available GPS information may delay the observability of the entire state. We also demonstrate a reduction of the complexity of the problem to achieve real-time throughput. In our experiments on an outdoor dataset with distant features, our bias-corrected visual odometry solution yields a fivefold improvement in the accuracy of the estimated translation compared to a standard approach. For a traverse of 2 km, we demonstrate the capability of our graph-based state estimation approach to successfully infer global orientation with as few as 6 GPS measurements and with a twofold improvement in mean position error using the corrected visual odometry.
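
Schematically, a graph-based estimator of this kind minimizes odometry and GPS residuals over the whole trajectory. The formulation below is a generic sketch in our notation, not the paper's exact cost function:

\[
\min_{\{\mathbf{x}_i\}}
  \sum_{i} \big\| \mathbf{x}_{i+1} \ominus \big(\mathbf{x}_i \oplus \hat{\mathbf{u}}_i\big) \big\|^{2}_{\boldsymbol{\Sigma}_u}
  \;+\; \sum_{j \in \mathcal{G}} \big\| p(\mathbf{x}_j) - \hat{\mathbf{g}}_j \big\|^{2}_{\boldsymbol{\Sigma}_g},
\]

where \(\hat{\mathbf{u}}_i\) are the (bias-corrected) visual odometry increments, \(\hat{\mathbf{g}}_j\) the sparse GPS position fixes available at the subset \(\mathcal{G}\) of poses, and \(p(\cdot)\) extracts position. The GPS terms anchor the otherwise purely relative trajectory in the global frame, which is why global orientation becomes observable only once a few sufficiently separated fixes are available.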


IEEE Transactions on Robotics | 2016

A General Approach to Spatiotemporal Calibration in Multisensor Systems

Joern Rehder; Roland Siegwart; Paul Timothy Furgale

With growing demands for accuracy in sensor fusion, increasing attention is being paid to temporal offsets as a source of deterministic error when processing data from multiple devices. Established approaches for the calibration of temporal offsets exploit domain-specific heuristics of common sensor suites and utilize simplifications to circumvent some of the challenges arising when both temporal and spatial parameters are not accurately known a priori. These properties make it difficult to generalize the work to other applications or different combinations of sensors. This work presents a general and principled approach to joint estimation of temporal offsets and spatial transformations between sensors. Our framework exploits recent advances in continuous-time batch estimation and thus exists within the rigorous theoretical framework of maximum likelihood estimation. The derivation is presented without relying on unique properties of specific sensors and, therefore, represents the first general technique for temporal calibration in robotics. The broad applicability of this approach is demonstrated through spatiotemporal calibration of a camera with respect to an inertial measurement unit as well as between a stereo camera and a laser range finder. The method is shown to be more repeatable and accurate than the current state of the art, estimating spatial displacements to millimeter precision and temporal offsets to a fraction of the fastest measurement interval.
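
To make the time-offset idea concrete, here is a small self-contained sketch (ours, not the paper's implementation): a toy 1-D trajectory stands in for a 6-DoF pose spline, and a coarse grid search stands in for the joint nonlinear estimation used in the paper.

# Sketch of the core idea behind continuous-time temporal calibration:
# the trajectory is a smooth function of time, so a delayed sensor's
# measurement at time t can be predicted at t + d, and the offset d
# becomes just another estimation parameter. Illustrative values only.
import numpy as np
from scipy.interpolate import make_interp_spline

t = np.linspace(0.0, 10.0, 200)
traj = make_interp_spline(t, np.sin(t), k=3)  # continuous-time trajectory model

d_true = 0.037                                # unknown offset to recover [s]
t_a = np.linspace(1.0, 9.0, 80)
y_b = traj(t_a + d_true)                      # measurements of the delayed sensor

# Grid search over candidate offsets; the minimizer of the squared
# residual is the (coarse) maximum-likelihood estimate under i.i.d. noise.
offsets = np.linspace(-0.1, 0.1, 2001)
errs = [np.sum((traj(t_a + d) - y_b) ** 2) for d in offsets]
d_hat = offsets[int(np.argmin(errs))]
print(f"estimated offset: {d_hat:.4f} s (true: {d_true} s)")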


International Conference on Robotics and Automation | 2016

Extending kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes

Joern Rehder; Janosch Nikolic; Thomas Schneider; Timo Hinzmann; Roland Siegwart

An increasing number of robotic systems feature multiple inertial measurement units (IMUs). Due to competing objectives (either a desired vicinity to the center of gravity when used in controls, or an unobstructed field of view when integrated in a sensor setup with an exteroceptive sensor for ego-motion estimation), individual IMUs are often mounted at a considerable distance from one another. As a result, they sense different accelerations when the platform is subjected to rotational motions. In this work, we derive a method for spatially calibrating multiple IMUs in a single estimator based on the open-source camera/IMU calibration toolbox kalibr. We further extend the toolbox to determine IMU intrinsics, enabling accurate calibration of low-cost IMUs. The results suggest that the extended estimator is capable of precisely determining these intrinsics and even of localizing individual accelerometer axes inside a commercial-grade IMU to millimeter precision.
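
The reason rotational motion makes the IMU positions observable is standard rigid-body kinematics (our notation, not taken from the paper): an accelerometer displaced by a lever arm \(\mathbf{r}_i\) from the body origin senses

\[
\mathbf{a}_i \;=\; \mathbf{a}_b
  \;+\; \dot{\boldsymbol{\omega}} \times \mathbf{r}_i
  \;+\; \boldsymbol{\omega} \times \big(\boldsymbol{\omega} \times \mathbf{r}_i\big),
\]

so the measured specific force depends on \(\mathbf{r}_i\) through the angular velocity \(\boldsymbol{\omega}\) and angular acceleration \(\dot{\boldsymbol{\omega}}\). With sufficient rotational excitation, an estimator can therefore recover the lever arms, down to the positions of individual accelerometer axes.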


International Conference on Applied Robotics for the Power Industry | 2014

Towards autonomous mine inspection

Pascal Gohl; Michael Burri; Sammy Omari; Joern Rehder; Janosch Nikolic; Markus W. Achtelik; Roland Siegwart

The purpose of this paper is to evaluate the use of a micro aerial vehicle (MAV) for autonomous inspection and 3D reconstruction of underground mines. The goal is to manually fly an MAV equipped with cameras and a laser range sensor into a vertical shaft to collect data. This data can be used to evaluate the performance of the localization system as well as post-processed to reconstruct a 3D model of the shaft. Given the novelty of flying an MAV in a deep mine, we report on the experience gained regarding the effects of the hot, wet, and dusty environment on the system, as well as the influence of turbulence from vertical winds on flight performance. Furthermore, we evaluate the quality of the recorded data and its applicability for a fully autonomous mine inspection system.


Intelligent Robots and Systems | 2014

Spatio-temporal laser to visual/inertial calibration with applications to hand-held, large scale scanning

Joern Rehder; Paul A. Beardsley; Roland Siegwart; Paul Timothy Furgale

This work presents a novel approach to spatio-temporal calibration of a laser range finder (LRF) with respect to a combination of a stereo camera and an inertial measurement unit (IMU). Spatial calibration between an LRF and a camera has been extensively studied, but so far the temporal relationship between the two has largely been neglected. While this may be sufficient for applications where the setup is mounted on a vehicle, which imposes bounds on the dynamics, we aim for employment on a hand-held scanning device, where angular velocities can easily exceed hundreds of degrees per second. Employing a continuous-time batch estimation framework, this work demonstrates that not only the transformation between the LRF and the visual/inertial setup but also its temporal relationship can be estimated accurately. In contrast to the majority of established calibration approaches, our approach does not require an overlap in the field of view of the LRF and camera, allowing for previously infeasible sensor configurations to be calibrated. Preliminary results for a novel hand-held scanning device suggest improvements in 3D reconstructions and image-based point cloud coloring, especially for highly dynamic motions.
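
One way to picture such an error term (an illustrative point-to-plane residual in our notation; the paper's actual formulation may differ) is

\[
e_k \;=\; \mathbf{n}^{\top} \Big( \mathbf{T}_{w,b}\big(t_k + d_l\big)\, \mathbf{T}_{b,l}\, \mathbf{p}_k \Big) - \delta,
\]

where \(\mathbf{p}_k\) is an LRF point measured at time \(t_k\) (in homogeneous coordinates), \(\mathbf{T}_{b,l}\) the unknown LRF extrinsics, \(d_l\) its time offset, \(\mathbf{T}_{w,b}(t)\) the continuous-time body trajectory estimated from the visual/inertial data, and \((\mathbf{n}, \delta)\) a known plane. Because each laser point is evaluated at its own timestamp along the trajectory, no camera/LRF field-of-view overlap is required, only consistency with the observed geometry.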


International Conference on Robotics and Automation | 2017

A direct formulation for camera calibration

Joern Rehder; Janosch Nikolic; Thomas Schneider; Roland Siegwart

Conventional camera calibration techniques rely on discrete reference points extracted from a set of input images. While these approaches have been applied successfully for a long time, omitting all image information apart from reference point positions at the initial stage of the calibration pipeline renders correct treatment of uncertainties difficult and gives rise to complications in timestamping measurements in applications where exposure time cannot be neglected. Drawing inspiration from visual state estimation, we employ a direct formulation of the camera measurement model. To this end, we render a view of the target given all calibration parameters, enabling a maximum likelihood estimator formulated on image intensities as measurements. We demonstrate the advantages of avoiding abstraction from image measurements for determining the line delay of a rolling shutter camera and by estimating camera exposure time from motion blur.
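
Under an i.i.d. Gaussian pixel-noise model, such a direct formulation amounts to (our notation, a sketch of the idea rather than the paper's exact objective):

\[
\hat{\boldsymbol{\theta}} \;=\; \arg\min_{\boldsymbol{\theta}} \sum_{\mathbf{u} \in \Omega}
  \big( I(\mathbf{u}) - \hat{I}(\mathbf{u};\, \boldsymbol{\theta}) \big)^{2},
\]

where \(I\) is the captured image, \(\hat{I}(\cdot;\boldsymbol{\theta})\) the view of the calibration target rendered from the full parameter set \(\boldsymbol{\theta}\) (intrinsics, pose, rolling-shutter line delay, exposure time), and \(\Omega\) the set of pixels covering the target. Effects such as line delay and motion blur enter the rendering function directly, which is what makes them estimable from image intensities.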

Collaboration


Dive into Joern Rehder's collaborations.

Top Co-Authors

Sanjiv Singh

Carnegie Mellon University


Andrew Chambers

Carnegie Mellon University


Stephen Nuske

Carnegie Mellon University
