Timo Hinzmann
ETH Zurich
Publications
Featured research published by Timo Hinzmann.
International Conference on Robotics and Automation | 2016
Joern Rehder; Janosch Nikolic; Thomas Schneider; Timo Hinzmann; Roland Siegwart
An increasing number of robotic systems feature multiple inertial measurement units (IMUs). Due to competing objectives (desired vicinity to the center of gravity when used for control, or an unobstructed field of view when integrated in a sensor setup with an exteroceptive sensor for ego-motion estimation), individual IMUs are often mounted at considerable distance from each other. As a result, they sense different accelerations when the platform is subjected to rotational motion. In this work, we derive a method for spatially calibrating multiple IMUs in a single estimator based on the open-source camera/IMU calibration toolbox kalibr. We further extend the toolbox to determine IMU intrinsics, enabling accurate calibration of low-cost IMUs. The results suggest that the extended estimator is capable of precisely determining these intrinsics and even of localizing individual accelerometer axes inside a commercial-grade IMU to millimeter precision.
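The effect that makes these offsets observable is the rigid-body lever arm: an IMU mounted at offset r senses a = a_ref + dω/dt × r + ω × (ω × r). The following numpy sketch illustrates this relation (it is not the kalibr implementation; all numbers are made up):

    import numpy as np

    def lever_arm_acceleration(a_ref, omega, omega_dot, r):
        """Acceleration sensed by an IMU mounted at offset r from a reference
        point on a rigid body: a = a_ref + omega_dot x r + omega x (omega x r)."""
        return a_ref + np.cross(omega_dot, r) + np.cross(omega, np.cross(omega, r))

    # Example: platform rotating at 2 rad/s about z, IMU offset by 10 cm along x
    a_ref     = np.array([0.0, 0.0, 9.81])   # specific force at the reference point
    omega     = np.array([0.0, 0.0, 2.0])    # angular velocity [rad/s]
    omega_dot = np.array([0.0, 0.0, 0.5])    # angular acceleration [rad/s^2]
    r         = np.array([0.10, 0.0, 0.0])   # lever arm [m]
    print(lever_arm_acceleration(a_ref, omega, omega_dot, r))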
Pacific Rim International Conference on Multi-Agents | 2016
Patrick Doherty; Jonas Kvarnström; Piotr Rudol; Mariusz Wzorek; Gianpaolo Conte; Cyrille Berger; Timo Hinzmann; Thomas Stastny
This paper gives an overview of a generic framework for collaboration among humans and multiple heterogeneous robotic systems, based on a formal characterization of delegation as a speech act. The system comprises a complex set of integrated software modules, including delegation managers for each platform, a task specification language for characterizing distributed tasks, a task planner, a multi-agent scan trajectory generation and region partitioning module, and a system infrastructure used to distributively instantiate any number of robotic systems and user interfaces in a collaborative team. The application focuses on 3D reconstruction in alpine environments, intended for use by alpine rescue teams. Two complex UAV systems used in the experiments are described, as is a fully autonomous collaborative mission executed in the Italian Alps using the framework.
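As a purely hypothetical illustration of how a delegated task might be represented as a structured request (the framework's actual task specification language is not reproduced here; all names and fields are invented):

    from dataclasses import dataclass, field

    @dataclass
    class DelegationRequest:
        # Hypothetical structure; names and fields are illustrative only.
        delegator: str                                    # e.g. a ground operator
        contractor: str                                   # e.g. "FW-UAV-1"
        task: str                                         # e.g. "scan-region"
        parameters: dict = field(default_factory=dict)    # region polygon, resolution, ...
        constraints: dict = field(default_factory=dict)   # deadlines, no-fly zones, ...

    request = DelegationRequest(
        delegator="operator",
        contractor="FW-UAV-1",
        task="scan-region",
        parameters={"polygon": [(0, 0), (500, 0), (500, 500), (0, 500)]},
        constraints={"deadline_s": 1800},
    )
    print(request.task, request.constraints)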
Journal of Field Robotics | 2018
Philipp Oettershagen; Thomas Stastny; Timo Hinzmann; Konrad Rudin; Thomas Mantel; Amir Melzer; Bartosz Wawrzacz; Gregory Hitz; Roland Siegwart
Large-scale aerial sensing missions can greatly benefit from the perpetual endurance capability provided by high-performance low-altitude solar-powered UAVs. However, today these UAVs suffer from small payload capacity, low energetic margins and high operational complexity. To tackle these problems, this paper presents four individual technical contributions and integrates them into an existing solar-powered UAV system: First, a lightweight and power-efficient day/night-capable sensing system is discussed. Second, means to optimize the UAV platform for the specific payload, and thereby achieve sufficient energetic margins for day/night flight with payload, are presented. Third, existing autonomous launch and landing functionality is extended for solar-powered UAVs. Fourth, as a main contribution, an extended Kalman filter-based autonomous thermal updraft tracking framework is developed. Its novelty is that it allows the end-to-end integration of the thermal-induced roll moment into the estimation process. It is assessed against unscented Kalman filter and particle filter methods in simulation and implemented on the aircraft’s low-power autopilot. The complete system is verified during a 26-hour search-and-rescue aerial sensing mockup mission that represents the first-ever fully-autonomous perpetual endurance flight of a small solar-powered UAV with a day/night-capable sensing payload. It also represents the first time that solar-electric propulsion and autonomous thermal updraft tracking are combined in flight. In contrast to previous work that has focused on the energetic feasibility of perpetual flight, the individual technical contributions of this paper are considered core functionality to guarantee ease-of-use, effectiveness and reliability in future multi-day aerial sensing operations with small solar-powered UAVs.
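A minimal sketch of the EKF update behind thermal tracking is given below. It assumes a Gaussian updraft profile w = W0·exp(-d²/R²) with state (thermal center, strength W0, radius R) and a vertical air velocity measurement; the paper's key addition, the thermal-induced roll moment measurement, is omitted, and all values are invented:

    import numpy as np

    def h(x, p_aircraft):
        # Predicted vertical air velocity at the aircraft position.
        xc, yc, W0, R = x
        d2 = (p_aircraft[0] - xc) ** 2 + (p_aircraft[1] - yc) ** 2
        return np.array([W0 * np.exp(-d2 / R ** 2)])

    def numerical_jacobian(f, x, eps=1e-6):
        J = np.zeros((1, x.size))
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps
            J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
        return J

    def ekf_update(x, P, z, p_aircraft, R_meas=0.2 ** 2):
        H = numerical_jacobian(lambda s: h(s, p_aircraft), x)
        y = z - h(x, p_aircraft)                  # innovation
        S = H @ P @ H.T + R_meas                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        return x + (K @ y).ravel(), (np.eye(4) - K @ H) @ P

    x = np.array([10.0, -5.0, 2.0, 40.0])         # initial thermal estimate
    P = np.diag([100.0, 100.0, 1.0, 100.0])
    x, P = ekf_update(x, P, np.array([1.3]), p_aircraft=np.array([0.0, 0.0]))
    print(x)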
Field and Service Robotics | 2018
Timo Hinzmann; Johannes L. Schönberger; Marc Pollefeys; Roland Siegwart
The reduced operational cost and increased robustness of unmanned aerial vehicles have made them a ubiquitous tool in the commercial, industrial, and scientific sectors. In particular, the ability to map and surveil a large area in a short amount of time makes them interesting for various applications. Generating a map in real time is essential for first-response teams in disaster scenarios such as earthquakes, floods, or avalanches, and may help other UAVs to localize without the need for Global Navigation Satellite Systems. For this application, we implemented a mapping framework that incrementally generates a dense georeferenced 3D point cloud, a digital surface model, and an orthomosaic, and we support our design choices with an analysis of computational cost and performance in diverse terrain. For accurate estimation of the camera poses, we employ a cost-efficient sensor setup consisting of a monocular visual-inertial camera rig as well as a Global Positioning System receiver, which we fuse using an incremental smoothing algorithm. We validate our mapping framework on a synthetic dataset embedded in a hardware-in-the-loop environment and in a real-world experiment using a fixed-wing UAV. Finally, we show that our framework outperforms existing orthomosaic generation methods by an order of magnitude in terms of timing, making real-time reconstruction and orthomosaic generation feasible onboard unmanned aerial vehicles.
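To illustrate the last two products, the sketch below rasterizes a georeferenced, colored point cloud into a digital surface model and an orthomosaic by keeping, per grid cell, the highest point and its color. This is only a toy illustration of the idea, not the authors' pipeline, and the random data is made up:

    import numpy as np

    def rasterize(points_xyz, colors_rgb, cell_size=0.5):
        xy_min = points_xyz[:, :2].min(axis=0)
        ij = np.floor((points_xyz[:, :2] - xy_min) / cell_size).astype(int)
        n_rows, n_cols = ij.max(axis=0) + 1
        dsm = np.full((n_rows, n_cols), -np.inf)              # height per cell
        ortho = np.zeros((n_rows, n_cols, 3), dtype=np.uint8)  # color per cell
        for (i, j), z, c in zip(ij, points_xyz[:, 2], colors_rgb):
            if z > dsm[i, j]:                                  # highest point wins
                dsm[i, j] = z
                ortho[i, j] = c
        return dsm, ortho

    pts = np.random.rand(1000, 3) * [50.0, 50.0, 5.0]
    cols = (np.random.rand(1000, 3) * 255).astype(np.uint8)
    dsm, ortho = rasterize(pts, cols)
    print(dsm.shape, ortho.shape)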
International Conference on Robotics and Automation | 2017
Mathias Gehrig; Elena Stumm; Timo Hinzmann; Roland Siegwart
We propose a novel scoring concept for visual place recognition based on nearest neighbor descriptor voting and demonstrate how the algorithm naturally emerges from the problem formulation. Based on the observation that the number of votes for matching places can be evaluated using a binomial distribution model, loop closures can be detected with high precision. By casting the problem into a probabilistic framework, we not only remove the need for commonly employed heuristic parameters but also provide a powerful score to classify matching and non-matching places. We present methods for both 2D-2D image matching and 2D-3D landmark matching based on this scoring. The approach maintains accuracy while being efficient enough for online application through the use of compact (low-dimensional) descriptors and fast nearest neighbor retrieval techniques. The proposed methods are evaluated on several challenging datasets in varied environments, showing state-of-the-art results with high precision and high recall.
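The voting idea can be sketched as follows (an illustration of the binomial test, not the paper's exact formulation): each of the n query descriptors votes for the database image holding its nearest neighbor; under a null model in which votes land uniformly at random over N database images, the votes for any one image are Binomial(n, 1/N), so an unusually large vote count signals a true match.

    import numpy as np
    from scipy.stats import binom

    def match_significance(votes_for_image, n_query_descriptors, n_database_images):
        p_null = 1.0 / n_database_images
        # P(X >= k) under the null hypothesis; small values indicate a loop closure.
        return binom.sf(votes_for_image - 1, n_query_descriptors, p_null)

    print(match_significance(votes_for_image=25, n_query_descriptors=300,
                             n_database_images=1000))   # tiny p-value -> likely match
    print(match_significance(votes_for_image=1, n_query_descriptors=300,
                             n_database_images=1000))   # plausible under the null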
International Symposium on Experimental Robotics | 2016
Timo Hinzmann; Thomas Stastny; Gianpaolo Conte; Patrick Doherty; Piotr Rudol; Mariusz Wzorek; Enric Galceran; Roland Siegwart; Igor Gilitschenski
This paper demonstrates how a heterogeneous fleet of unmanned aerial vehicles (UAVs) can support human operators in search and rescue (SaR) scenarios. We describe a fully autonomous delegation framework that interprets the top-level commands of the rescue team and converts them into actions of the UAVs. In particular, the UAVs are requested to autonomously scan a search area and to provide the operator with a consistent georeferenced 3D reconstruction of the environment to increase situational awareness and to support critical decision-making. The mission is executed based on the individual platform and sensor capabilities of rotary- and fixed-wing UAVs (RW-UAVs and FW-UAVs, respectively): with the aid of an optical camera, the FW-UAV can generate a sparse point-cloud of a large area in a short amount of time. A LiDAR mounted on the autonomous helicopter is used to refine the visual point-cloud by generating denser point-clouds of specific areas of interest. In this context, we evaluate the performance of point-cloud registration methods to align two maps that were obtained by different sensors. In our validation, we compare classical point-cloud alignment methods to a novel probabilistic data association approach that specifically takes the individual point-cloud densities into consideration.
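As a reference for the classical baselines, the sketch below implements a minimal point-to-point ICP with SVD-based (Kabsch) alignment; it is only an illustrative baseline under idealized data, not the probabilistic data association method discussed in the paper:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, n_iters=30):
        tree = cKDTree(target)
        R, t = np.eye(3), np.zeros(3)
        for _ in range(n_iters):
            src = source @ R.T + t
            _, idx = tree.query(src)                 # nearest-neighbor association
            matched = target[idx]
            mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_t)    # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
            R_step = Vt.T @ D @ U.T                  # proper rotation (Kabsch)
            t_step = mu_t - R_step @ mu_s
            R, t = R_step @ R, R_step @ t + t_step
        return R, t

    target = np.random.rand(2000, 3) * 10.0
    source = target[::4] + np.array([0.2, -0.1, 0.1])   # shifted subset of target
    R, t = icp(source, target)
    print(np.round(t, 2))   # should approximately recover the inverse of the shift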
Intelligent Robots and Systems | 2016
Timo Hinzmann; Thomas Schneider; Marcin Dymczyk; Amir Melzer; Thomas Mantel; Roland Siegwart; Igor Gilitschenski
Accurate and robust real-time map generation onboard a fixed-wing UAV is essential for obstacle avoidance, path planning, and critical maneuvers such as autonomous take-off and landing. Due to computational constraints and the required robustness and reliability, it remains a challenge to deploy a fixed-wing UAV with an online-capable, accurate, and robust map generation framework. While photogrammetric approaches rely on assumptions about the scene structure and the camera view, generic simultaneous localization and mapping (SLAM) approaches are computationally demanding. This paper presents a framework that uses the autopilot's state estimate as a prior for sliding-window bundle adjustment and map generation. Our approach outputs an accurate georeferenced dense point-cloud, which we validate in simulation on a synthetic dataset and in two real-world scenarios using ground control points.
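The core idea of anchoring the window with the autopilot state can be sketched as a residual that stacks reprojection errors with a pose prior term. This is only an illustrative cost assembly under simplifying assumptions (fixed landmarks, Euler-angle pose parametrization, no robust loss), not the paper's estimator; such a residual could, for instance, be handed to scipy.optimize.least_squares:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def residuals(pose_params, landmarks, observations, K, pose_priors, w_prior=1.0):
        res = []
        for pose, obs, prior in zip(pose_params.reshape(-1, 6), observations, pose_priors):
            R_wc = Rotation.from_euler("xyz", pose[3:]).as_matrix()
            t_wc = pose[:3]
            p_cam = (landmarks - t_wc) @ R_wc                       # world -> camera
            uv = (p_cam[:, :2] / p_cam[:, 2:3]) @ K[:2, :2].T + K[:2, 2]
            res.append((uv - obs).ravel())                          # reprojection error
            res.append(w_prior * (pose - prior))                    # autopilot pose prior
        return np.concatenate(res)

    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    landmarks = np.array([[0.0, 0.0, 50.0], [10.0, 5.0, 60.0], [-5.0, 8.0, 55.0]])
    poses = np.zeros(6)                                             # single pose window
    priors = [np.zeros(6)]
    obs = [(landmarks[:, :2] / landmarks[:, 2:3]) @ K[:2, :2].T + K[:2, 2]]
    print(residuals(poses, landmarks, obs, K, priors))              # ~zero at the true pose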
International Symposium on Visual Computing | 2016
Julius Kümmerle; Timo Hinzmann; Anurag Sai Vempati; Roland Siegwart
We propose a real-time system to detect and track multiple humans from high bird’s-eye views. First, we present a fast pipeline to detect humans observed from large distances by efficiently fusing information from a visual and an infrared spectrum camera. The main contribution of our work is a new tracking approach. Its novelty lies in the online learning of an objectness model, which is used for updating a Kalman filter. We show that an adaptive objectness model outperforms a fixed model. Our system achieves a mean tracking loop time of 0.8 ms per human on a 2 GHz CPU, which makes real-time tracking of multiple humans possible.
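One simple way such a coupling can work is to scale the measurement noise of a constant-velocity Kalman filter by the detection's objectness score. The sketch below illustrates only this mechanism; the exact coupling and learning scheme used in the paper are not reproduced, and all numbers are invented:

    import numpy as np

    dt = 1.0 / 30.0
    F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
    H = np.hstack([np.eye(2), np.zeros((2, 2))])
    Q = 1e-2 * np.eye(4)

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, objectness, base_sigma=2.0):
        R = (base_sigma / max(objectness, 1e-3)) ** 2 * np.eye(2)  # low score -> noisy
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(4) - K @ H) @ P

    x, P = np.array([100.0, 50.0, 0.0, 0.0]), 10.0 * np.eye(4)
    x, P = predict(x, P)
    x, P = update(x, P, z=np.array([102.0, 51.0]), objectness=0.9)
    print(x)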
International Symposium on Visual Computing | 2016
Timo Hinzmann; Thomas Schneider; Marcin Dymczyk; Andreas Schaffner; Simon Lynen; Roland Siegwart; Igor Gilitschenski
Precise real-time information about the position and orientation of robotic platforms, as well as locally consistent point-clouds, is essential for control, navigation, and obstacle avoidance. For years, GPS has been the central source of navigational information in airborne applications, yet as we aim for robotic operations close to the terrain and in urban environments, alternatives to GPS need to be found. Fusing data from cameras and inertial measurement units in a nonlinear recursive estimator has been shown to allow precise estimation of 6-Degree-of-Freedom (DoF) motion without relying on GPS signals. While related methods have worked under lab conditions for several years, real-world robotic applications using visual-inertial state estimation have only recently found wider adoption. Due to computational constraints and the required robustness and reliability, it remains a challenge to employ a visual-inertial navigation system in the field. This paper presents our tightly integrated system, involving hardware and software efforts, to provide an accurate visual-inertial navigation system for low-altitude fixed-wing unmanned aerial vehicles (UAVs) without relying on GPS or visual beacons. In particular, we present a sliding-window visual-inertial Simultaneous Localization and Mapping (SLAM) algorithm which provides real-time 6-DoF estimates for control. We demonstrate the performance on a small unmanned aerial vehicle and compare the estimated trajectory to a GPS-based reference solution.
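The propagation half of any such visual-inertial estimator is IMU dead-reckoning between camera frames. The sketch below shows a basic strapdown propagation step (illustrative only; the paper's sliding-window SLAM system is considerably more involved, and the sample IMU readings are made up):

    import numpy as np
    from scipy.spatial.transform import Rotation

    GRAVITY = np.array([0.0, 0.0, -9.81])

    def propagate(p, v, R_wb, accel_b, gyro_b, dt):
        a_w = R_wb @ accel_b + GRAVITY              # specific force -> world acceleration
        p_new = p + v * dt + 0.5 * a_w * dt ** 2
        v_new = v + a_w * dt
        R_new = R_wb @ Rotation.from_rotvec(gyro_b * dt).as_matrix()
        return p_new, v_new, R_new

    p, v, R_wb = np.zeros(3), np.zeros(3), np.eye(3)
    for _ in range(200):                            # 1 s of IMU samples at 200 Hz
        p, v, R_wb = propagate(p, v, R_wb,
                               accel_b=np.array([0.1, 0.0, 9.81]),
                               gyro_b=np.array([0.0, 0.0, 0.02]),
                               dt=0.005)
    print(p, v)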
Journal of Field Robotics | 2017
Philipp Oettershagen; Amir Melzer; Thomas Mantel; Konrad Rudin; Thomas Stastny; Bartosz Wawrzacz; Timo Hinzmann; Stefan Leutenegger; Kostas Alexis; Roland Siegwart