Lorenz Meier
ETH Zurich
Publications
Featured research published by Lorenz Meier.
Intelligent Robots and Systems | 2012
Friedrich Fraundorfer; Lionel Heng; Dominik Honegger; Gim Hee Lee; Lorenz Meier; Petri Tanskanen; Marc Pollefeys
In this paper, we describe our autonomous vision-based quadrotor MAV system which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with both the Vector Field Histogram+ (VFH+) algorithm for local navigation, and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following which could optionally be selected as the substitute exploration algorithm in sparse environments where the frontier-based exploration under-performs. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by the VFH+ and frontier-based exploration in dense environments, and the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station. These images are input to a large-scale visual SLAM process running off-board on the ground station. SLAM is carried out with pose-graph optimization and loop closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry. Optical flow data is provided by a customized downward-looking camera integrated with a microcontroller while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
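To make the frontier-based exploration step concrete, here is a minimal 2D sketch of frontier detection on an occupancy grid (the paper maintains a 3D occupancy map on-board). The cell encoding (-1 unknown, 0 free, 1 occupied) and the `find_frontiers` helper are assumptions for illustration, not the system's actual code.

```python
# Minimal sketch of frontier detection on a 2D occupancy grid.
# Assumed cell encoding: -1 = unknown, 0 = free, 1 = occupied.
import numpy as np

def find_frontiers(grid):
    """Return free cells that border at least one unknown cell."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

# Toy example: a 5x5 map with an explored free corner and one obstacle.
grid = -np.ones((5, 5), dtype=int)
grid[:3, :3] = 0
grid[2, 2] = 1
print(find_frontiers(grid))   # free cells adjacent to unknown space
```

The exploration planner would then steer the MAV towards the nearest or largest cluster of frontier cells until no frontiers remain.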
Autonomous Robots | 2012
Lorenz Meier; Petri Tanskanen; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Marc Pollefeys
We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
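As a rough illustration of what the hardware timestamping buys: once images and IMU samples carry timestamps from the same clock, each frame can be paired with the IMU measurement closest in time. The rates, timestamps, and `nearest_imu` helper below are invented for this sketch and are not the system's implementation.

```python
# Sketch: associate each hardware-timestamped image with the IMU sample
# closest in time, assuming both streams share the same clock (microseconds).
import bisect

def nearest_imu(image_stamp_us, imu_stamps_us):
    """Return the index of the IMU sample closest to the image timestamp."""
    i = bisect.bisect_left(imu_stamps_us, image_stamp_us)
    if i == 0:
        return 0
    if i == len(imu_stamps_us):
        return len(imu_stamps_us) - 1
    before, after = imu_stamps_us[i - 1], imu_stamps_us[i]
    return i if after - image_stamp_us < image_stamp_us - before else i - 1

imu_stamps = [0, 5000, 10000, 15000]      # 200 Hz IMU samples, in microseconds
print(nearest_imu(12600, imu_stamps))     # -> 3 (the 15000 us sample is closest)
```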
International Conference on Robotics and Automation | 2011
Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys
We provide a novel hardware and software system for micro air vehicles (MAVs) that allows high-speed, low-latency onboard image processing. It uses up to four cameras in parallel on a miniature rotary-wing platform. The MAV navigates based on onboard-processed computer vision in GPS-denied indoor and outdoor environments. It can process images and inertial measurement information from multiple cameras in parallel for multiple purposes (localization, pattern recognition, obstacle avoidance) by distributing the images on a central, low-latency image hub. Furthermore, the system can utilize low-bandwidth radio links for communication and is designed and optimized to scale to swarm use. Experimental results show successful flight with a range of onboard computer vision algorithms, including localization, obstacle avoidance, and pattern recognition.
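A minimal sketch of the central image hub idea, assuming a simple callback-based publish/subscribe interface: cameras publish frames under a name, and consumers such as localization or obstacle avoidance subscribe to the streams they need. The `ImageHub` class and camera names are illustrative only, not the platform's actual API.

```python
# Sketch of a central image hub: cameras publish frames, consumers subscribe
# by camera name and receive every frame via a callback.
from collections import defaultdict

class ImageHub:
    def __init__(self):
        self._subscribers = defaultdict(list)   # camera name -> callbacks

    def subscribe(self, camera, callback):
        self._subscribers[camera].append(callback)

    def publish(self, camera, frame):
        # Deliver the frame to every consumer registered for this camera.
        for callback in self._subscribers[camera]:
            callback(frame)

hub = ImageHub()
hub.subscribe("front_stereo", lambda f: print("localization got", f))
hub.subscribe("front_stereo", lambda f: print("obstacle avoidance got", f))
hub.publish("front_stereo", "frame_0001")
```

On the real platform this fan-out happens onboard with low latency; the sketch only shows the distribution pattern.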
International Conference on Computer Vision | 2013
Petri Tanskanen; Kalin Kolev; Lorenz Meier; Federico Camposeco; Olivier Saurer; Marc Pollefeys
In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which reduces the processing time to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different sizes and depth variability.
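One way to read the metric-scale estimation from inertial sensors: compare displacements measured by the (up-to-scale) visual tracker against displacements integrated from the IMU, and solve for the single scale factor in a least-squares sense. The segment values and `estimate_scale` helper below are a hedged sketch under that assumption, not the paper's method.

```python
# Sketch: recover metric scale by comparing per-segment distances from the
# up-to-scale visual track with the IMU-integrated metric distances.
import numpy as np

def estimate_scale(visual_disp, metric_disp):
    """Scale s minimizing || s * visual_disp - metric_disp ||^2."""
    return float(np.dot(visual_disp, metric_disp) / np.dot(visual_disp, visual_disp))

visual = np.array([0.10, 0.21, 0.15])   # per-segment visual displacement (unitless)
metric = np.array([0.52, 1.05, 0.73])   # per-segment IMU displacement (metres)
print(estimate_scale(visual, metric))   # ~5.0: multiply the visual map by this factor
```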
International Conference on Robotics and Automation | 2013
Dominik Honegger; Lorenz Meier; Petri Tanskanen; Marc Pollefeys
Robust velocity and position estimation at high update rates is crucial for mobile robot navigation. In recent years, optical flow sensors based on computer mouse hardware chips have been shown to perform well on micro air vehicles. Since they require more light than is present in typical indoor and outdoor low-light conditions, their practical use is limited. We present an open source and open hardware design of an optical flow sensor based on a machine vision CMOS image sensor for indoor and outdoor applications with very high light sensitivity. Optical flow is estimated on an ARM Cortex M4 microcontroller in real time at a 250 Hz update rate. Angular rate compensation with a gyroscope and distance scaling using an ultrasonic sensor are performed onboard. The system is designed for further extension and adaptation and is demonstrated in flight on a micro air vehicle.
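The abstract hints at the conversion from pixel flow to metric velocity: divide by focal length and frame time to get an angular flow rate, subtract the gyroscope-measured rotation, and scale by the ultrasonic distance to the ground. The numbers and the `flow_to_velocity` helper below are assumptions for a one-axis sketch, not the sensor's firmware.

```python
# Sketch: turn raw pixel flow into metric velocity along one axis:
# remove rotation-induced flow with the gyro, then scale by sonar distance.

def flow_to_velocity(flow_px, dt, focal_px, gyro_rad_s, distance_m):
    """Metric velocity along one axis from pixel flow over one frame."""
    flow_rad_s = flow_px / focal_px / dt      # angular flow rate (rad/s)
    flow_rad_s -= gyro_rad_s                  # compensate vehicle rotation
    return flow_rad_s * distance_m            # scale by distance to the ground

# 3 px of flow in 4 ms at 1.5 m altitude, with a small rotation rate.
print(flow_to_velocity(flow_px=3.0, dt=0.004, focal_px=500.0,
                       gyro_rad_s=0.1, distance_m=1.5))   # ≈ 2.1 m/s
```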
IEEE Robotics & Automation Magazine | 2014
Davide Scaramuzza; Michael Achtelik; Lefteris Doitsidis; Friedrich Fraundorfer; Elias B. Kosmatopoulos; Agostino Martinelli; Markus W. Achtelik; Margarita Chli; Savvas A. Chatzichristofis; Laurent Kneip; Daniel Gurdan; Lionel Heng; Gim Hee Lee; Simon Lynen; Lorenz Meier; Marc Pollefeys; Alessandro Renzaglia; Roland Siegwart; Jan Stumpf; Petri Tanskanen; Chiara Troiani; Stephan Weiss
Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they can further be realized at small scale, they can also be used in narrow outdoor and indoor environments, where they represent only a limited risk for people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors, and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges at all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled micro aerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved, from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.
International Conference on Robotics and Automation | 2015
Lorenz Meier; Dominik Honegger; Marc Pollefeys
We present a novel, deeply embedded robotics middleware and programming environment. It uses a multithreaded, publish-subscribe design pattern and provides a Unix-like software interface for microcontroller applications. We improve over the state of the art in deeply embedded open source systems by providing a modular and standards-oriented platform. Our system architecture is centered around a publish-subscribe object request broker on top of a POSIX application programming interface. This allows common Unix knowledge and experience to be reused, including a bash-like shell. We demonstrate with a vertical takeoff and landing (VTOL) use case that the system modularity is well suited for novel and experimental vehicle platforms. We also show how the system architecture allows a direct interface to ROS and how individual processes can be run either as native ROS nodes on Linux or as nodes on the microcontroller, maximizing interoperability. Our microcontroller-based execution environment has substantially lower latency and better hardware connectivity than a typical robotics Linux system and is therefore well suited for fast, high-rate control tasks.
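A tiny sketch of the publish/subscribe pattern described above, with "latest value" topics that subscribers poll without blocking; the `Broker` class, method names, and the topic name are invented for illustration and do not mirror the actual middleware API.

```python
# Sketch of a topic-based publish/subscribe broker with latest-value semantics:
# publishers overwrite the topic, subscribers poll for updates.
class Broker:
    def __init__(self):
        self._topics = {}       # topic name -> last published message
        self._generation = {}   # topic name -> update count

    def publish(self, topic, msg):
        self._topics[topic] = msg
        self._generation[topic] = self._generation.get(topic, 0) + 1

    def subscribe(self, topic):
        last_seen = 0
        def poll():
            """Return (updated, latest_message) without blocking."""
            nonlocal last_seen
            gen = self._generation.get(topic, 0)
            updated = gen > last_seen
            last_seen = gen
            return updated, self._topics.get(topic)
        return poll

broker = Broker()
poll_attitude = broker.subscribe("vehicle_attitude")
broker.publish("vehicle_attitude", {"roll": 0.01, "pitch": -0.02})
print(poll_attitude())   # (True, {...}); a second poll would report no new update
```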
International Conference on Robotics and Automation | 2011
Lionel Heng; Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys
We present a novel stereo-based obstacle avoidance system on a vision-guided micro air vehicle (MAV) that is capable of fully autonomous maneuvers in unknown and dynamic environments. All algorithms run exclusively on the vehicle's on-board computer, at high frequencies that allow the MAV to react quickly to obstacles appearing in its flight trajectory. Our MAV platform is a quadrotor aircraft equipped with an inertial measurement unit and two stereo rigs. An obstacle mapping algorithm processes stereo images, producing a 3D map representation of the environment; at the same time, a dynamic anytime path planner plans a collision-free path to a goal point.
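To illustrate the first step of such an obstacle mapping pipeline under standard pinhole-stereo assumptions, a disparity measurement can be back-projected into a 3D point (Z = f·b/d) before being inserted into the map. The intrinsics and the `disparity_to_point` helper below are placeholders, not the system's calibration or code.

```python
# Sketch: back-project a pixel with stereo disparity into a 3D point in the
# left camera frame, using the pinhole model Z = fx * baseline / disparity.
import numpy as np

def disparity_to_point(u, v, d, fx, fy, cx, cy, baseline):
    """3D point from pixel (u, v) and disparity d (all in pixels, baseline in metres)."""
    z = fx * baseline / d
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A pixel 40 px right of the principal point with 25 px disparity, 12 cm baseline.
print(disparity_to_point(360.0, 240.0, 25.0,
                         fx=320.0, fy=320.0, cx=320.0, cy=240.0,
                         baseline=0.12))   # ≈ [0.19, 0.0, 1.54] metres
```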
Journal of Field Robotics | 2014
Lionel Heng; Dominik Honegger; Gim Hee Lee; Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys
Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult by the MAV's limited payload, which only allows for one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems such that these algorithms enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration under-performs. During the exploration, data is transmitted to the ground station, which runs large-scale visual SLAM. We estimate the MAV's state with inertial data from an IMU together with metric velocity measurements from a custom-built optical flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results, which, to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.
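As a hedged, one-axis sketch of the velocity part of this state estimation, an inertially propagated velocity can be corrected towards the metric velocity reported by the optical flow sensor with a simple complementary-filter update; the gain, rates, and `fuse_velocity` helper are assumptions, not the MAV's actual estimator.

```python
# Sketch: blend IMU-propagated velocity with the optical-flow velocity
# measurement using a fixed complementary-filter gain (one axis only).
def fuse_velocity(v_prev, accel, dt, v_flow, gain=0.05):
    """Propagate velocity with the accelerometer, then correct it with flow."""
    v_pred = v_prev + accel * dt              # inertial prediction
    return v_pred + gain * (v_flow - v_pred)  # pull towards the flow measurement

v = 0.0
for _ in range(100):                          # 100 steps of 10 ms (~1 s of hover,
    v = fuse_velocity(v, accel=0.0, dt=0.01,  #  zero measured acceleration)
                      v_flow=0.4)             # flow sensor reads 0.4 m/s
print(round(v, 3))                            # converges towards the 0.4 m/s flow reading
```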
Journal of Vision | 2002
Lorenz Meier; Matteo Carandini
Perception of an oriented pattern is impaired in the presence of a superimposed orthogonal mask. This masking effect most likely arises in visual cortex, where neuronal responses are suppressed by masks having a broad range of orientations. Response suppression is commonly ascribed to lateral inhibition between cortical neurons. Recent physiological results, however, have cast doubt on this view: powerful suppression has been observed with masks drifting too rapidly to elicit much of a response in cortex. We show here that the same is true for perceptual masking. From contrast discrimination thresholds, we estimated the cortical response to drifting patterns of various frequencies, and found it greatly reduced above 15-20 Hz. In the same subjects, we measured the strength of masking by the same patterns and found it equally strong for masks drifting slowly (2.7 Hz) as for masks drifting rapidly (27-38 Hz). Fast gratings thus cause strong masking while eliciting weak cortical responses. Our results might be explained by inhibition from cortical neurons that respond to unusually high frequencies, and yet do not make their signals fully available for perceptual judgments. A more parsimonious explanation, however, is that masking does not involve lateral inhibition from cortex. Masking might operate in retina or thalamus, which respond to much higher frequencies than cortex. Masking might also be due to thalamic signals to cortex, perhaps through depression at thalamocortical synapses.