Lionel Heng
ETH Zurich
Publications
Featured research published by Lionel Heng.
Intelligent Robots and Systems | 2012
Friedrich Fraundorfer; Lionel Heng; Dominik Honegger; Gim Hee Lee; Lorenz Meier; Petri Tanskanen; Marc Pollefeys
In this paper, we describe our autonomous vision-based quadrotor MAV system, which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with both the Vector Field Histogram+ (VFH+) algorithm for local navigation and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following, which can optionally be selected as a substitute exploration algorithm in sparse environments where frontier-based exploration underperforms. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by the VFH+ and frontier-based exploration algorithms in dense environments, and by the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station. These images are input to a large-scale visual SLAM process running off-board on the ground station. SLAM is carried out with pose-graph optimization and loop closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry. Optical flow data is provided by a customized downward-looking camera integrated with a microcontroller, while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
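The frontier-based exploration mentioned above hinges on identifying frontier cells, i.e. free cells in the occupancy map that border unknown space. Below is a minimal 2D sketch of that detection step; the grid encoding (-1 unknown, 0 free, 1 occupied) is an assumption for illustration, and the paper itself operates on a 3D occupancy map.

```python
import numpy as np

def find_frontiers(grid):
    """Return (row, col) indices of free cells bordering unknown space."""
    # Assumed encoding: -1 = unknown, 0 = free, 1 = occupied.
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:  # only free cells can be frontiers
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.full((5, 5), -1)   # everything unknown
grid[2, 1:4] = 0             # a free corridor
grid[2, 0] = 1               # a wall cell
print(find_frontiers(grid))  # free cells adjacent to unknown space
```

An exploration planner would then pick a frontier (e.g. the nearest reachable one) as the next navigation goal.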
Autonomous Robots | 2012
Lorenz Meier; Petri Tanskanen; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Marc Pollefeys
We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight, including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
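The value of hardware timestamping is that IMU samples and camera frames share a common clock, so each frame can be paired with an interpolated IMU measurement, e.g. the gravity direction needed by the 2pt+gravity PnP formulation. The following sketch illustrates that pairing; the linear interpolation scheme and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def imu_for_frame(frame_t, imu_t, imu_acc):
    """Linearly interpolate accelerometer readings at a frame timestamp."""
    i = np.clip(np.searchsorted(imu_t, frame_t), 1, len(imu_t) - 1)
    t0, t1 = imu_t[i - 1], imu_t[i]
    w = (frame_t - t0) / (t1 - t0)
    acc = (1.0 - w) * imu_acc[i - 1] + w * imu_acc[i]
    return acc / np.linalg.norm(acc)  # unit gravity direction for 2pt+gravity

# Example: 200 Hz IMU stream, frame captured at t = 0.1234 s on the same clock.
imu_t = np.arange(0.0, 1.0, 0.005)
imu_acc = np.tile([0.05, 0.02, 9.81], (len(imu_t), 1))
print(imu_for_frame(0.1234, imu_t, imu_acc))
```

Without a shared hardware clock, the frame-to-sample association would have to rely on software arrival times, which jitter by far more than one IMU period.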
IEEE Robotics & Automation Magazine | 2014
Davide Scaramuzza; Michael Achtelik; Lefteris Doitsidis; Friedrich Fraundorfer; Elias B. Kosmatopoulos; Agostino Martinelli; Markus W. Achtelik; Margarita Chli; Savvas A. Chatzichristofis; Laurent Kneip; Daniel Gurdan; Lionel Heng; Gim Hee Lee; Simon Lynen; Lorenz Meier; Marc Pollefeys; Alessandro Renzaglia; Roland Siegwart; Jan Stumpf; Petri Tanskanen; Chiara Troiani; Stephan Weiss
Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are further realized at small scale, they can also be used in narrow outdoor and indoor environments, posing only a limited risk to people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors, and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges at all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled micro aerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems, but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved, from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3D mapping and optimal surveillance coverage are presented.
IEEE Intelligent Vehicles Symposium | 2013
Paul Timothy Furgale; Ulrich Schwesinger; Martin Rufli; Wojciech Waclaw Derendarz; Hugo Grimmett; Peter Mühlfellner; Stefan Wonneberger; Julian Timpner; Stephan Rottmann; Bo Li; Bastian Schmidt; Thien-Nghia Nguyen; Elena Cardarelli; Stefano Cattani; Stefan Brüning; Sven Horstmann; Martin Stellmacher; Holger Mielenz; Kevin Köser; Markus Beermann; Christian Häne; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Rene Iser; Rudolph Triebel; Ingmar Posner; Paul Newman; Lars C. Wolf; Marc Pollefeys
Future requirements for drastic reductions in CO2 emissions and energy consumption will lead to significant changes in the way we see mobility in the years to come. However, the automotive industry has identified significant barriers to the adoption of electric vehicles, including reduced driving range and greatly increased refueling times. Automated cars have the potential to reduce the environmental impact of driving and increase the safety of motor vehicle travel. The current state-of-the-art in vehicle automation requires a suite of expensive sensors. While the cost of these sensors is decreasing, integrating them into electric cars will increase the price and represent another barrier to adoption. The V-Charge Project, funded by the European Commission, seeks to address these problems simultaneously by developing an electric automated car, outfitted with close-to-market sensors, which is able to automate valet parking and recharging for integration into a future transportation system. The final goal is the demonstration of a fully operational system including automated navigation and parking. This paper presents an overview of the V-Charge system, from the platform setup to the mapping, perception, and planning sub-systems.
International Conference on Robotics and Automation | 2011
Lionel Heng; Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys
We present a novel stereo-based obstacle avoidance system on a vision-guided micro air vehicle (MAV) that is capable of fully autonomous maneuvers in unknown and dynamic environments. All algorithms run exclusively on the vehicle's on-board computer, and at high frequencies that allow the MAV to react quickly to obstacles appearing in its flight trajectory. Our MAV platform is a quadrotor aircraft equipped with an inertial measurement unit and two stereo rigs. An obstacle mapping algorithm processes stereo images, producing a 3D map representation of the environment; at the same time, a dynamic anytime path planner plans a collision-free path to a goal point.
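To give a flavor of the obstacle-mapping step, the sketch below back-projects a disparity image into 3D points using standard stereo geometry and bins them into voxels. The camera parameters, voxel size, and function name are assumptions for illustration rather than the paper's implementation.

```python
import numpy as np

def disparity_to_occupied_voxels(disp, fx, fy, cx, cy, baseline, voxel=0.1):
    """Back-project a disparity map and collect occupied voxel indices."""
    occupied = set()
    rows, cols = disp.shape
    for v in range(rows):
        for u in range(cols):
            d = disp[v, u]
            if d <= 0:                # invalid or missing disparity
                continue
            z = fx * baseline / d     # depth from stereo geometry
            x = (u - cx) * z / fx     # back-projection through the pinhole
            y = (v - cy) * z / fy
            occupied.add((int(x // voxel), int(y // voxel), int(z // voxel)))
    return occupied

disp = np.zeros((4, 4))
disp[1, 2] = 8.0  # one valid stereo match
print(disparity_to_occupied_voxels(disp, fx=400.0, fy=400.0,
                                   cx=2.0, cy=2.0, baseline=0.12))
```

A planner can then treat any occupied voxel as an obstacle when searching for a collision-free path.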
Intelligent Robots and Systems | 2013
Lionel Heng; Bo Li; Marc Pollefeys
Multiple cameras are increasingly prevalent on robotic and human-driven vehicles. These cameras come in a variety of wide-angle, fish-eye, and catadioptric models. Furthermore, wheel odometry is generally available on the vehicles on which the cameras are mounted. For robustness, vision applications tend to use wheel odometry as a strong prior for camera pose estimation, and in these cases, an accurate extrinsic calibration is required in addition to an accurate intrinsic calibration. To date, there is no known work on automatic intrinsic calibration of generic cameras, and more importantly, automatic extrinsic calibration of a rig with multiple generic cameras and odometry. We propose an easy-to-use automated pipeline that handles both intrinsic and extrinsic calibration; we do not assume that there are overlapping fields of view. At the beginning, we run an intrinsic calibration for each generic camera. The intrinsic calibration is automatic and requires a chessboard. Subsequently, we run an extrinsic calibration which finds all camera-odometry transforms. The extrinsic calibration is unsupervised, uses natural features, and only requires the vehicle to be driven around for a short time. The intrinsic parameters are optimized in a final bundle adjustment step in the extrinsic calibration. In addition, the pipeline produces a globally-consistent sparse map of landmarks which can be used for visual localization. The pipeline is publicly available as a standalone C++ package.
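One way to see the rotation part of camera-odometry extrinsic calibration: for corresponding relative motions, the rotation axes expressed in the camera frame and in the odometry frame differ by the fixed extrinsic rotation, which can be recovered by aligning the two axis sets. The sketch below uses Kabsch/SVD alignment as an illustrative simplification of the hand-eye (AX = XB) initialization; it is not the paper's full pipeline, which must also deal with the degeneracies of near-planar vehicle motion.

```python
import numpy as np

def align_axes(axes_cam, axes_odo):
    """Kabsch: find R minimizing sum ||R @ a_cam_i - a_odo_i||^2."""
    H = axes_cam.T @ axes_odo          # 3x3 cross-covariance of unit axes
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Self-check with a known rotation and well-spread axes.
rng = np.random.default_rng(0)
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1.0
axes_cam = rng.normal(size=(20, 3))
axes_cam /= np.linalg.norm(axes_cam, axis=1, keepdims=True)
axes_odo = axes_cam @ R_true.T         # rows are R_true @ a_cam_i
print(np.allclose(align_axes(axes_cam, axes_odo), R_true))  # True
```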
Journal of Field Robotics | 2014
Lionel Heng; Dominik Honegger; Gim Hee Lee; Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys
Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally-intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult by the MAV's limited payload, which only allows for one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration underperforms. During the exploration, data is transmitted to the ground station, which runs large-scale visual SLAM. We estimate the MAV's state with inertial data from an IMU together with metric velocity measurements from a custom-built optical flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results, which, to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.
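As a rough illustration of the state-estimation idea, the sketch below predicts velocity by integrating IMU acceleration and corrects it with the metric velocity from the optical flow sensor, in the style of a complementary filter. The scalar gain and function are assumptions for the sketch; the paper's estimator is more elaborate and also fuses visual odometry poses.

```python
import numpy as np

def fuse_velocity(v, acc, dt, v_flow, gain=0.2):
    """One predict/correct step for a 3D velocity estimate."""
    v_pred = v + acc * dt                     # predict with IMU acceleration
    return v_pred + gain * (v_flow - v_pred)  # correct with flow velocity

v = np.zeros(3)
v = fuse_velocity(v, acc=np.array([0.5, 0.0, 0.0]), dt=0.01,
                  v_flow=np.array([0.02, 0.0, 0.0]))
print(v)
```

The prediction keeps the estimate responsive between camera measurements, while the correction bounds the drift that pure IMU integration would accumulate.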
Intelligent Robots and Systems | 2013
Bo Li; Lionel Heng; Kevin Köser; Marc Pollefeys
This paper presents a novel feature descriptor-based calibration pattern and a Matlab toolbox which uses the specially designed pattern to easily calibrate both the intrinsics and extrinsics of a multiple-camera system. In contrast to existing calibration patterns, in particular, the ubiquitous chessboard, the proposed pattern contains many more features of varying scales; such features can be easily and automatically detected. The proposed toolbox supports the calibration of a camera system which can comprise either normal pinhole cameras or catadioptric cameras. The calibration only requires that neighboring cameras observe parts of the calibration pattern at the same time; the observed parts need not overlap at all. No overlapping fields of view are assumed for the camera system. We show that the toolbox can easily be used to automatically calibrate camera systems.
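The key observation behind the extrinsic part is that two cameras which see parts of the pattern at the same instant have a relative pose given by composing their camera-to-pattern poses, and chaining such pairs covers the whole rig. A minimal sketch, assuming 4x4 homogeneous camera-from-pattern transforms:

```python
import numpy as np

def relative_pose(T_i_pattern, T_j_pattern):
    """Pose of camera j in camera i's frame: T_i_j = T_i_p @ inv(T_j_p)."""
    return T_i_pattern @ np.linalg.inv(T_j_pattern)

# Toy check: camera j sits 0.5 m to the right of camera i, same orientation,
# so a point at camera i's origin lies at x = -0.5 in camera j's frame.
T_i_p = np.eye(4)
T_j_p = np.eye(4)
T_j_p[0, 3] = -0.5
print(relative_pose(T_i_p, T_j_p))  # translation x = +0.5, as expected
```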
International Conference on 3D Vision | 2014
Christian Häne; Lionel Heng; Gim Hee Lee; Alexey Sizov; Marc Pollefeys
In this paper, we propose an adaptation of camera projection models for fisheye cameras into the plane-sweeping stereo matching algorithm. This adaptation allows us to do plane-sweeping stereo directly on fisheye images. Our approach also works for other non-pinhole cameras such as omnidirectional and catadioptric cameras when using the unified projection model. Despite the simplicity of our proposed approach, we are able to obtain full, good-quality, and high-resolution depth maps from the fisheye images. To verify our approach, we show experimental results based on depth maps generated by our approach, and dense models produced from these depth maps.
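For reference, the unified projection model mentioned above first projects a 3D point onto the unit sphere and then applies a pinhole projection shifted by the mirror parameter xi along the optical axis. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def unified_project(P, xi, fx, fy, cx, cy):
    """Project a 3D point with the unified (sphere) camera model."""
    Ps = P / np.linalg.norm(P)  # project the point onto the unit sphere
    x, y, z = Ps
    denom = z + xi              # shift the projection center by xi
    return np.array([fx * x / denom + cx, fy * y / denom + cy])

# Illustrative intrinsics; xi = 0 reduces to the ordinary pinhole model.
print(unified_project(np.array([0.2, -0.1, 1.0]), xi=0.9,
                      fx=300.0, fy=300.0, cx=320.0, cy=240.0))
```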
International Conference on Robotics and Automation | 2015
Lionel Heng; Alkis Gotovos; Andreas Krause; Marc Pollefeys
In this paper, we propose a novel and computationally efficient algorithm for simultaneous exploration and coverage with a vision-guided micro aerial vehicle (MAV) in unknown environments. This algorithm continually plans a path that allows the MAV to fulfill two objectives at the same time while avoiding obstacles: observe as much unexplored space as possible, and observe as much of the surface of the environment as possible given viewing angle and distance constraints. The former and latter objectives are known as the exploration and coverage problems, respectively. Our algorithm is particularly useful for automated 3D reconstruction at the street level and in indoor environments where obstacles are omnipresent. By solving the exploration problem, we maximize the size of the reconstructed model. By solving the coverage problem, we maximize the completeness of the model. Our algorithm leverages the state lattice concept such that the planned path adheres to specified motion constraints. Furthermore, our algorithm is computationally efficient and able to run on-board the MAV in real time. We assume that the MAV is equipped with a forward-looking depth-sensing camera in the form of either a stereo camera or an RGB-D camera. We use simulation experiments to validate our algorithm. In addition, we show that our algorithm achieves a significantly higher level of coverage compared to an exploration-only approach while still allowing the MAV to fully explore the environment.
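To make the two objectives concrete, a candidate view can be scored by the number of unknown voxels it would observe (exploration) plus the number of surface voxels it would observe within viewing-angle and distance limits (coverage). The sketch below is an illustrative scoring function under assumed voxel sets, limits, and weighting, not the paper's exact formulation, and it omits the occlusion checks a real system would need.

```python
import numpy as np

def view_utility(view_pos, view_dir, unknown, surface, normals,
                 max_dist=5.0, max_angle_deg=60.0, w_cov=1.0):
    """Score a candidate view; view_dir is a unit vector."""
    explore = 0
    for p in unknown:                      # exploration term
        d = np.asarray(p) - view_pos
        if np.linalg.norm(d) <= max_dist and np.dot(d, view_dir) > 0:
            explore += 1
    cover = 0
    cos_lim = np.cos(np.radians(max_angle_deg))
    for p, n in zip(surface, normals):     # coverage term
        d = view_pos - np.asarray(p)
        r = np.linalg.norm(d)
        if r <= max_dist and np.dot(d / r, n) >= cos_lim:
            cover += 1
    return explore + w_cov * cover
```

A planner searching over a state lattice would evaluate such a utility for each candidate path endpoint and pick the best feasible one.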