Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dominik Honegger is active.

Publication


Featured research published by Dominik Honegger.


Intelligent Robots and Systems | 2012

Vision-based autonomous mapping and exploration using a quadrotor MAV

Friedrich Fraundorfer; Lionel Heng; Dominik Honegger; Gim Hee Lee; Lorenz Meier; Petri Tanskanen; Marc Pollefeys

In this paper, we describe our autonomous vision-based quadrotor MAV system, which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with the Vector Field Histogram+ (VFH+) algorithm for local navigation and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following, which can optionally be selected as a substitute exploration strategy in sparse environments where frontier-based exploration under-performs. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by the VFH+ and frontier-based exploration algorithms in dense environments, and by the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station. These images are the input to a large-scale visual SLAM process running off-board on the ground station. SLAM is carried out with pose-graph optimization and loop-closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry. Optical flow data is provided by a customized downward-looking camera integrated with a microcontroller, while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
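
To make the frontier-based exploration step concrete, here is a minimal sketch of frontier detection on a 2D occupancy grid. It is illustrative only: the paper builds a 3D occupancy map on-board, and the cell encoding (-1 unknown, 0 free, 1 occupied) and function name are assumptions for this example.

```python
# Minimal frontier detection on a 2D occupancy grid (illustrative only; the
# paper maintains a 3D occupancy map). Assumed cell values:
# -1 = unknown, 0 = free, 1 = occupied.
import numpy as np

def find_frontiers(grid: np.ndarray) -> list[tuple[int, int]]:
    """Return free cells that border at least one unknown cell."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.full((5, 5), -1)
grid[1:4, 1:4] = 0               # explored free block surrounded by unknown
print(find_frontiers(grid))      # the ring of free cells touching unknown space
```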


International Conference on Robotics and Automation | 2013

An open source and open hardware embedded metric optical flow CMOS camera for indoor and outdoor applications

Dominik Honegger; Lorenz Meier; Petri Tanskanen; Marc Pollefeys

Robust velocity and position estimation at high update rates is crucial for mobile robot navigation. In recent years, optical flow sensors based on computer mouse hardware chips have been shown to perform well on micro air vehicles. However, since they require more light than is present in typical indoor and outdoor low-light conditions, their practical use is limited. We present an open source and open hardware design of an optical flow sensor based on a machine vision CMOS image sensor for indoor and outdoor applications with very high light sensitivity. Optical flow is estimated on an ARM Cortex M4 microcontroller in real time at a 250 Hz update rate. Angular rate compensation with a gyroscope and distance scaling using an ultrasonic sensor are performed on-board. The system is designed for further extension and adaptation and is demonstrated in flight on a micro air vehicle.
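
The angular-rate compensation and ultrasonic distance scaling reduce, per axis, to a few arithmetic operations. The sketch below shows that scaling step under a simple pinhole model; the variable names and example numbers are assumptions, not the sensor firmware's actual code.

```python
# Hedged sketch of the metric scaling step: pixel flow is compensated with
# the gyro rate and scaled by the ultrasonic distance. Pinhole model and
# all numbers are assumptions for illustration.

def metric_velocity(flow_px_per_s: float, focal_px: float,
                    gyro_rad_per_s: float, distance_m: float) -> float:
    """Convert pixel flow (one axis) to metric ground velocity in m/s."""
    angular_flow = flow_px_per_s / focal_px        # rad/s seen by the camera
    translational = angular_flow - gyro_rad_per_s  # remove rotation-induced flow
    return translational * distance_m              # scale by height above ground

# Example: 120 px/s flow, assumed focal length of 640 px,
# 0.05 rad/s body rate, flying 1.5 m above ground.
print(metric_velocity(120.0, 640.0, 0.05, 1.5))    # ~0.206 m/s
```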


International Conference on Robotics and Automation | 2015

PX4: A node-based multithreaded open source robotics framework for deeply embedded platforms

Lorenz Meier; Dominik Honegger; Marc Pollefeys

We present a novel, deeply embedded robotics middleware and programming environment. It uses a multithreaded, publish-subscribe design pattern and provides a Unix-like software interface for microcontroller applications. We improve over the state of the art in deeply embedded open source systems by providing a modular and standards-oriented platform. Our system architecture is centered around a publish-subscribe object request broker on top of a POSIX application programming interface. This allows common Unix knowledge and experience, including a bash-like shell, to be reused. We demonstrate with a vertical takeoff and landing (VTOL) use case that the system's modularity is well suited for novel and experimental vehicle platforms. We also show how the system architecture allows a direct interface to ROS, with individual processes running either as native ROS nodes on Linux or as nodes on the microcontroller, maximizing interoperability. Our microcontroller-based execution environment has substantially lower latency and better hardware connectivity than a typical Robotics Linux system and is therefore well suited for fast, high-rate control tasks.
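
A minimal publish-subscribe broker in Python conveys the design pattern described above. PX4's actual broker, uORB, is implemented in C/C++ behind a POSIX-style API; the class and topic names here are invented for illustration.

```python
# Minimal publish-subscribe broker sketch (illustrative; not PX4's uORB).
from collections import defaultdict
from typing import Any, Callable

class Broker:
    def __init__(self) -> None:
        # topic name -> list of subscriber callbacks
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: Any) -> None:
        # deliver the message to every subscriber of this topic
        for handler in self._subs[topic]:
            handler(msg)

broker = Broker()
broker.subscribe("sensor_gyro", lambda m: print("attitude ctrl got", m))
broker.publish("sensor_gyro", {"x": 0.01, "y": -0.02, "z": 0.0})
```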


Journal of Field Robotics | 2014

Autonomous Visual Mapping and Exploration With a Micro Aerial Vehicle

Lionel Heng; Dominik Honegger; Gim Hee Lee; Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys

Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult by the MAV's limited payload, which only allows for one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration under-performs. During the exploration, data is transmitted to the ground station, which runs large-scale visual SLAM. We estimate the MAV's state with inertial data from an IMU together with metric velocity measurements from a custom-built optical flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results, which, to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.
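
The state estimation described here fuses inertial data with velocity and pose measurements. As a toy illustration of one such fusion step, the following scalar Kalman measurement update folds a metric velocity reading (e.g., from the optical flow sensor) into a velocity estimate; the real estimator is multi-dimensional, and the gains and numbers here are invented.

```python
# Toy 1-D Kalman measurement update (illustrative; the paper's estimator
# fuses IMU, optical flow and visual odometry in several dimensions).

def kalman_update(x: float, p: float, z: float, r: float) -> tuple[float, float]:
    """One scalar update: state x with variance p, measurement z with variance r."""
    k = p / (p + r)              # Kalman gain
    x_new = x + k * (z - x)      # correct the estimate toward the measurement
    p_new = (1.0 - k) * p        # uncertainty shrinks after the update
    return x_new, p_new

x, p = 0.0, 1.0                  # prior velocity estimate and variance (assumed)
x, p = kalman_update(x, p, z=0.8, r=0.25)   # flow sensor reports 0.8 m/s
print(x, p)                      # estimate moves most of the way to 0.8
```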


Intelligent Robots and Systems | 2014

Real-time and low latency embedded computer vision hardware based on a combination of FPGA and mobile CPU

Dominik Honegger; Helen Oleynikova; Marc Pollefeys

Recent developments in smartphones create an ideal platform for robotics and computer vision applications: they are small, powerful, embedded devices with low-power mobile CPUs. However, although the computational power of smartphones has increased substantially in recent years, they are still not capable of performing intense computer vision tasks in real time, at high frame rates and low latency. We present a combination of an FPGA and a mobile CPU to overcome the computational and latency limitations of mobile CPUs alone. With the FPGA as an additional layer between the image sensor and the CPU, the system is capable of accelerating computer vision algorithms to real-time performance. Low-latency calculation allows for direct use within the control loops of mobile robots. A stereo camera setup with disparity estimation based on the semi-global matching (SGM) algorithm is implemented as an accelerated example application. The system calculates dense disparity images at a resolution of 752×480 pixels and 60 frames per second. The overall latency of the disparity estimation is less than 2 milliseconds. The system is suitable for any mobile robot application due to its light weight and low power consumption.
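
For contrast with the FPGA pipeline, the snippet below is a tiny CPU reference of plain winner-takes-all block matching with a sum-of-absolute-differences (SAD) cost. It shows what a disparity map computes, but it is not the semi-global matching implementation the system runs in FPGA logic; window size and disparity range are arbitrary.

```python
# Naive SAD block matching as a CPU reference (illustrative; the paper's
# system implements the more robust semi-global matching in an FPGA).
import numpy as np

def block_match(left: np.ndarray, right: np.ndarray,
                max_disp: int = 16, win: int = 2) -> np.ndarray:
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # winner-takes-all
    return disp

left = np.random.randint(0, 255, (32, 64)).astype(np.float32)
right = np.roll(left, -4, axis=1)                # synthetic 4-px disparity
print(block_match(left, right)[16, 40])          # expect 4
```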


International Conference on Robotics and Automation | 2015

Reactive avoidance using embedded stereo vision for MAV flight

Helen Oleynikova; Dominik Honegger; Marc Pollefeys

High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.
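
A reactive policy of the kind described can be stated in a few lines once a depth image is available. The rule below (stop-distance threshold, then yaw toward the emptier image half) is a simplification invented for illustration, not the paper's controller.

```python
# Hedged sketch of a reactive rule on top of a depth image: if anything in
# the central window is closer than a safety distance, steer toward the half
# of the image with more free space. Thresholds and the left/right rule are
# assumptions, not the paper's algorithm.
import numpy as np

def reactive_command(depth_m: np.ndarray, safety_m: float = 1.5) -> str:
    h, w = depth_m.shape
    center = depth_m[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    if center.min() > safety_m:
        return "forward"                       # path ahead is clear
    left_clear = depth_m[:, : w // 2].mean()   # crude free-space score
    right_clear = depth_m[:, w // 2:].mean()
    return "yaw_left" if left_clear > right_clear else "yaw_right"

depth = np.full((60, 80), 5.0)
depth[25:35, 30:45] = 0.8                      # obstacle ahead, mostly left
print(reactive_command(depth))                 # -> yaw_right
```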


Intelligent Robots and Systems | 2015

Real-time 3D navigation for autonomous vision-guided MAVs

Shengdong Xu; Dominik Honegger; Marc Pollefeys; Lionel Heng

Autonomous navigation of micro aerial vehicles (MAVs) in a priori unknown environments is one of the most challenging problems in robotics. First, a MAV has to incrementally build a 3D geometric map from raw sensor data. Then, based on the mapping information, the path planner has to search for a cost-optimal trajectory to the goal in real time. It is common practice to discretize the search space into a state lattice; by doing so, we reduce the path planning problem with differential constraints to a graph search problem that is easier to solve. However, a regular 3D state lattice requires a large amount of memory, while graph search in a regular 3D state lattice with numerous states is computationally intensive. In this paper, we introduce a novel path planning algorithm which extends the concept of a regular state lattice to an octree-based state lattice, and searches for an optimal trajectory in the octree-partitioned search space. Our octree-based state lattice representation discretizes large swathes of free space into a few symbolic octants and thus encodes significantly fewer states. As a result, memory consumption is kept to a minimum and, at the same time, graph search is made more efficient. Simulation experiments demonstrate the efficiency of path planning with an octree-based state lattice, and further field trials prove the viability of this path planning algorithm.
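
The memory saving of the octree-based lattice comes from collapsing uniform regions into single leaves. The 2D quadtree analogue below makes that effect measurable; the function is illustrative, and the paper's actual representation is a 3D octree with a state lattice on top.

```python
# 2-D quadtree analogue of the octree idea: uniform regions collapse into
# single leaves, so large swathes of free space become a handful of nodes
# instead of one node per cell. Purely illustrative.
import numpy as np

def count_leaves(grid: np.ndarray) -> int:
    """Number of quadtree leaves needed to represent a boolean grid."""
    if grid.min() == grid.max():      # uniform block -> single leaf
        return 1
    h, w = grid.shape
    return sum(count_leaves(q) for q in
               (grid[:h // 2, :w // 2], grid[:h // 2, w // 2:],
                grid[h // 2:, :w // 2], grid[h // 2:, w // 2:]))

occupied = np.zeros((64, 64), dtype=bool)
occupied[10, 20] = True               # a single occupied cell
print(64 * 64, "cells ->", count_leaves(occupied), "quadtree leaves")
```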


Intelligent Robots and Systems | 2015

Omnidirectional visual obstacle detection using embedded FPGA

Pascal Gohl; Dominik Honegger; Sammy Omari; Markus W. Achtelik; Marc Pollefeys; Roland Siegwart

For autonomous navigation of Micro Aerial Vehicles (MAVs) in cluttered environments, it is essential to detect potential obstacles not only in the direction of flight but in the entire local environment. While there exist systems that do vision-based obstacle detection, most of them are limited to a single perception direction. Extending these systems to a multi-directional sensing approach would exhaust the payload limit in terms of weight and computational power. We present a novel light-weight sensor setup comprising four stereo heads and an inertial measurement unit (IMU) to perform FPGA-based dense reconstruction for obstacle detection in all directions. As the data rate scales up with the number of cameras, we use an FPGA to perform streaming-based tasks in real time, and we introduce a light-weight polar-coordinate map that allows a companion computer to fully process the data of all the cameras and perform obstacle detection in real time. The system is able to process up to 80 frames per second (fps), freely distributed over the four stereo heads, while maintaining a low power budget. The perception system, including FPGA, image sensors, and stereo mounts, weighs 235 g.
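
As a sketch of what a light-weight polar-coordinate obstacle map might look like, the following bins points from all stereo heads by azimuth and keeps only the nearest range per sector. The sector count and frame conventions are assumptions, not the paper's exact data structure.

```python
# Hedged sketch of a polar obstacle map: points are binned by azimuth and
# each sector stores only its nearest range. Sector count is assumed.
import math

def polar_map(points_xy: list[tuple[float, float]],
              sectors: int = 36) -> list[float]:
    """Nearest obstacle range per azimuth sector (inf = sector free)."""
    ranges = [math.inf] * sectors
    for x, y in points_xy:
        r = math.hypot(x, y)
        azimuth = math.atan2(y, x) % (2 * math.pi)
        idx = int(azimuth / (2 * math.pi) * sectors) % sectors
        ranges[idx] = min(ranges[idx], r)
    return ranges

pts = [(2.0, 0.1), (0.0, -3.0), (-1.0, 0.0)]   # front, right, behind
print([round(r, 2) for r in polar_map(pts)[:3]])  # only sector 0 is occupied
```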


International Conference on Robotics and Automation | 2017

Real-time stereo matching failure prediction and resolution using orthogonal stereo setups

Lorenz Meier; Dominik Honegger; Vilhjalmur Vilhjalmsson; Marc Pollefeys

Estimating depth from two images with a baseline has a well-known degenerate case: when a line in the scene is parallel to the epipolar lines, it is not possible to estimate the depth of pixels on this line. Moreover, the classic measure of the certainty of the depth estimate fails as well: the matching score between the template and any pixel on the epipolar line is perfect. For common scenes, this results in incorrect matches with very high confidence, some even resistant to left-right consistency checks. It is straightforward to try to address this by adding a second stereo head oriented in a perpendicular direction. However, it is nontrivial to identify the failure and fuse the two depth maps in a real-time system. A simple weighted average alleviates the problem but still results in a very large error in the depth map. Our contributions are: 1) we derive a model to predict the failure of stereo matching by leveraging the matching scores, and 2) we propose a combined cost function to fuse two depth maps from orthogonal stereo heads using the failure prediction, matching score, and consistency. We show the resulting system in real-time operation on a low-latency platform in indoor, urban, and natural environments.
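
The fusion idea can be previewed with a pixel-wise confidence-weighted average of the two depth maps, where a predicted failure drives one head's confidence toward zero. This is a stand-in for the paper's combined cost function, which also incorporates matching-score and consistency terms; the confidence values below are invented.

```python
# Minimal confidence-weighted fusion of two depth maps (a simplification of
# the paper's combined cost function; all numbers are synthetic).
import numpy as np

def fuse(depth_a, conf_a, depth_b, conf_b):
    """Pixel-wise fusion: each head's depth weighted by its confidence."""
    w = conf_a / np.maximum(conf_a + conf_b, 1e-9)
    return w * depth_a + (1.0 - w) * depth_b

# The horizontal head is confident everywhere except on an epipolar-aligned
# line, where failure is predicted and the vertical head takes over.
depth_h = np.array([[2.0, 9.0]])   # 9.0 is a bogus high-confidence match
depth_v = np.array([[2.1, 2.0]])
conf_h = np.array([[0.9, 0.05]])   # failure predicted -> near-zero confidence
conf_v = np.array([[0.8, 0.9]])
print(fuse(depth_h, conf_h, depth_v, conf_v))   # second pixel snaps to ~2.0
```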


International Conference on Robotics and Automation | 2017

Embedded real-time multi-baseline stereo

Dominik Honegger; Torsten Sattler; Marc Pollefeys

Dense depth map estimation from stereo cameras has many applications in robotic vision, e.g., obstacle detection, especially when performed in real time. The range in which depth values can be accurately estimated is usually limited for two-camera stereo setups due to the fixed baseline between the cameras. In addition, two-camera setups suffer from wrong depth estimates caused by local minima in the matching cost functions. Both problems can be alleviated by adding more cameras, as this creates multiple baselines of different lengths and multi-image matching leads to unique minima. However, using more cameras usually comes at an increase in run time. In this paper, we present a novel embedded system for multi-baseline stereo. By exploiting the parallelization capabilities of FPGAs, we are able to estimate a depth map from multiple cameras in real time. We show that our approach requires only slightly more power and weight than a two-camera stereo system. At the same time, we show that our system produces significantly better depth maps and is able to handle occlusion of some cameras, resulting in the redundancy typically desired for autonomous vehicles. Our system is small and lightweight and can be employed even on a MAV platform with very strict power, weight, and size requirements.
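
The benefit of multiple baselines can be demonstrated numerically: per-pair matching costs sampled over a shared inverse-depth axis are summed, and spurious local minima that fool any single pair cancel out in the sum. The synthetic cost curves below are invented for illustration; the paper's FPGA pipeline operates on real image data.

```python
# Toy multi-baseline cost accumulation: each pair's cost has the true valley
# at inverse depth 1.0 plus one deep spurious minimum, so every pair alone
# picks the wrong answer, while the summed cost recovers the true one.
# All curves are synthetic.
import numpy as np

inv_depth = np.linspace(0.1, 2.0, 96)

def pair_cost(spurious_at: float) -> np.ndarray:
    c = (inv_depth - 1.0) ** 2                                  # true valley
    c -= 0.6 * np.exp(-((inv_depth - spurious_at) / 0.05) ** 2)  # false minimum
    return c

pairs = [pair_cost(s) for s in (0.3, 0.5, 1.7)]
for c in pairs:                                  # each pair alone is fooled
    print(round(float(inv_depth[np.argmin(c)]), 2))
total = np.sum(pairs, axis=0)                    # multi-baseline sum
print(round(float(inv_depth[np.argmin(total)]), 2))   # -> 1.0
```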

Collaboration


Dive into Dominik Honegger's collaboration.

Top Co-Authors

Friedrich Fraundorfer

Graz University of Technology

Gim Hee Lee

National University of Singapore
