Publication


Featured research published by Laurent Kneip.


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2011

A novel parametrization of the perspective-three-point problem for a direct computation of absolute camera position and orientation

Laurent Kneip; Davide Scaramuzza; Roland Siegwart

The Perspective-Three-Point (P3P) problem aims at determining the position and orientation of the camera in the world reference frame from three 2D-3D point correspondences. This problem is known to yield up to four solutions that can then be disambiguated using a fourth point. All existing solutions first solve for the position of the points in the camera reference frame, and then compute the position and orientation of the camera in the world frame as the transformation that aligns the two point sets. In contrast, in this paper we propose a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate derivation of the points in the camera frame. This is made possible by introducing intermediate camera and world reference frames, and expressing their relative position and orientation using only two parameters. The projection of a world point into the parametrized camera pose then leads to two conditions and finally a quartic equation for finding up to four solutions for the parameter pair. A subsequent back-substitution directly leads to the corresponding camera poses with respect to the world reference frame. We show that the proposed algorithm offers accuracy and precision comparable to a popular, standard, state-of-the-art approach but at much lower computational cost (15 times faster). Furthermore, it provides improved numerical stability and is less affected by degenerate configurations of the selected world points. The superior computational efficiency is particularly suitable for any RANSAC outlier-rejection step, which is always recommended before applying PnP or non-linear optimization of the final solution.
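
As the abstract recommends, a fast P3P solver is typically embedded in a RANSAC loop for outlier rejection. The following minimal Python sketch illustrates that pattern using OpenCV's built-in P3P solver as a stand-in (OpenCV's implementation is not necessarily this paper's method); all inputs are synthetic placeholders.

```python
# Minimal sketch (not the paper's reference code): a P3P solver inside
# RANSAC, as recommended in the abstract. Inputs are synthetic.
import numpy as np
import cv2

object_points = np.random.rand(20, 3).astype(np.float32)       # world frame
image_points = (np.random.rand(20, 2) * 640).astype(np.float32)  # pixels
K = np.array([[500, 0, 320],
              [0, 500, 240],
              [0,   0,   1]], dtype=np.float32)                 # intrinsics

# RANSAC repeatedly samples minimal sets (3 points for P3P plus 1 to
# disambiguate the up-to-four solutions) and keeps the hypothesis with
# the largest inlier set.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, None,
    flags=cv2.SOLVEPNP_P3P,
    reprojectionError=2.0, iterationsCount=100)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation world -> camera
    print("camera position in world frame:", (-R.T @ tvec).ravel())
```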


Journal of Field Robotics | 2013

Monocular Vision for Long-term Micro Aerial Vehicle State Estimation: A Compendium

Stephan Weiss; Markus W. Achtelik; Simon Lynen; Michael Achtelik; Laurent Kneip; Margarita Chli; Roland Siegwart

The recent technological advances in Micro Aerial Vehicles (MAVs) have triggered great interest in the robotics community, as their deployability in missions of surveillance and reconnaissance has now become a realistic prospect. The state of the art, however, still lacks solutions that can work for a long duration in large, unknown, and GPS-denied environments. Here, we present our visual pipeline and MAV state-estimation framework, which uses feeds from a monocular camera and an Inertial Measurement Unit (IMU) to achieve real-time and onboard autonomous flight in general and realistic scenarios. The challenge lies in dealing with the power and weight restrictions onboard a MAV while providing the robustness necessary in real and long-term missions. This article provides a concise summary of our work on achieving the first onboard vision-based power-on-and-go system for autonomous MAV flights. We discuss our insights on the lessons learned throughout the different stages of this research, from the conception of the idea to the thorough theoretical analysis of the proposed framework and, finally, the real-world implementation and deployment. Looking into the onboard estimation of monocular visual odometry, the sensor fusion strategy, the state estimation and self-calibration of the system, and finally some implementation issues, the reader is guided through the different modules comprising our framework. The validity and power of this framework are illustrated via a comprehensive set of experiments in a large outdoor mission, demonstrating successful operation over flights with trajectories of more than 360 m and altitude changes of 70 m.
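
As an illustration of the loosely coupled fusion idea described above, here is a deliberately simplified 1-D Kalman-filter toy in Python with made-up noise parameters: the IMU propagates the state at high rate, and slower visual-odometry fixes correct it. The actual framework additionally estimates attitude, IMU biases, visual scale, and inter-sensor calibration.

```python
# Toy sketch of loosely coupled visual-inertial fusion (illustrative
# only; all noise values are assumptions, not the paper's parameters).
import numpy as np

dt = 0.005                                 # 200 Hz IMU
F = np.array([[1.0, dt], [0.0, 1.0]])      # [position, velocity] model
B = np.array([0.5 * dt**2, dt])            # accelerometer input model
Q = 1e-6 * np.eye(2)                       # process noise (assumed)
H = np.array([[1.0, 0.0]])                 # vision observes position only
Rm = np.array([[1e-2]])                    # vision noise (assumed)

x, P = np.zeros(2), np.eye(2)

def imu_predict(x, P, accel):
    """High-rate propagation with one accelerometer sample."""
    x = F @ x + B * accel
    return x, F @ P @ F.T + Q

def vision_update(x, P, z):
    """Correction with a (slower) visual-odometry position estimate."""
    S = H @ P @ H.T + Rm                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    return x, (np.eye(2) - K @ H) @ P

accel = 0.1                                # constant true acceleration
for k in range(200):
    x, P = imu_predict(x, P, accel)
    if (k + 1) % 20 == 0:                  # vision at 1/20 the IMU rate
        t = (k + 1) * dt
        z = np.array([0.5 * accel * t**2]) # synthetic position fix
        x, P = vision_update(x, P, z)
print("estimated position/velocity:", x)
```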


IEEE Robotics & Automation Magazine | 2014

Vision-Controlled Micro Flying Robots: From System Design to Autonomous Navigation and Mapping in GPS-Denied Environments

Davide Scaramuzza; Michael Achtelik; Lefteris Doitsidis; Friedrich Fraundorfer; Elias B. Kosmatopoulos; Agostino Martinelli; Markus W. Achtelik; Margarita Chli; Savvas A. Chatzichristofis; Laurent Kneip; Daniel Gurdan; Lionel Heng; Gim Hee Lee; Simon Lynen; Lorenz Meier; Marc Pollefeys; Alessandro Renzaglia; Roland Siegwart; Jan Stumpf; Petri Tanskanen; Chiara Troiani; Stephan Weiss

Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they can further be realized at small scale, they can also be used in narrow outdoor and indoor environments, posing only a limited risk to people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors, and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.


IEEE International Conference on Robotics and Automation (ICRA) | 2009

Characterization of the compact Hokuyo URG-04LX 2D laser range scanner

Laurent Kneip; Fabien Tache; Gilles Caprari; Roland Siegwart

This paper presents a detailed characterization of the Hokuyo URG-04LX 2D laser range finder. While the sensor specifications only provide a rough estimate of the sensor accuracy, the present work analyzes issues such as time drift effects and dependencies on distance, target properties (color, brightness, and material), as well as incidence angle. Since the sensor is intended to be used for measurements of a tube-like environment on an inspection robot, the characterization is extended by investigating the influence of the sensor orientation and the dependency on lighting conditions. The sensor characteristics are compared to those of the Sick LMS 200, which is commonly used in robotic applications when size and weight are not critical constraints. The results show that the sensor accuracy strongly depends on the target properties (color, brightness, material) and that it is consequently difficult to establish a calibration model. The paper also identifies cases in which the sensor returns faulty measurements, mainly when the surface has low reflectivity (dark surfaces, foam) or at high incidence angles on shiny surfaces. On the other hand, the repeatability of the sensor appears to be competitive with that of the LMS 200.
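
A hypothetical sketch of the kind of per-condition statistics such a characterization produces; the function and numbers below are illustrative placeholders, not the paper's data:

```python
# Hypothetical sketch of the characterization methodology: gather many
# range readings per test condition (distance, target color/material,
# incidence angle) and summarize accuracy and repeatability.
import numpy as np

def characterize(readings_mm, true_distance_mm):
    """Summarize one test condition, e.g. a matte white target at 1 m."""
    r = np.asarray(readings_mm, dtype=float)
    return {
        "mean_error_mm": float(r.mean() - true_distance_mm),  # accuracy
        "std_mm": float(r.std(ddof=1)),                       # repeatability
    }

# Synthetic example: 1000 samples with ~5 mm noise at a 1000 mm target.
samples = 1000 + 5.0 * np.random.randn(1000)
print(characterize(samples, true_distance_mm=1000))
```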


British Machine Vision Conference (BMVC) | 2011

Robust Real-Time Visual Odometry with a Single Camera and an IMU

Laurent Kneip; Margarita Chli; Roland Siegwart

The increasing demand for real-time high-precision Visual Odometry systems as part of navigation and localization tasks has recently been driving research towards more versatile and scalable solutions. In this paper, we present a novel framework for combining the merits of inertial and visual data from a monocular camera to accumulate estimates of local motion incrementally and reliably reconstruct the trajectory traversed. We demonstrate the robustness and efficiency of our methodology in a scenario with challenging camera dynamics, and present a comprehensive evaluation against ground-truth data.


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2013

Collaborative monocular SLAM with multiple Micro Aerial Vehicles

Christian Forster; Simon Lynen; Laurent Kneip; Davide Scaramuzza

This paper presents a framework for collaborative localization and mapping with multiple Micro Aerial Vehicles (MAVs) in unknown environments. Each MAV estimates its motion individually using an onboard, monocular visual odometry algorithm. The system of MAVs acts as a distributed preprocessor that streams only features of selected keyframes and relative-pose estimates to a centralized ground station. The ground station creates an individual map for each MAV and merges them together whenever it detects overlaps. This allows the MAVs to express their position in a common, global coordinate frame. The key to real-time performance is the design of data-structures and processes that allow multiple threads to concurrently read and modify the same map. The presented framework is tested in both indoor and outdoor environments with up to three MAVs. To the best of our knowledge, this is the first work on real-time collaborative monocular SLAM, which has also been applied to MAVs.
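
The architecture described above can be summarized in a short Python skeleton. This is a hypothetical sketch (all names are invented): each MAV streams compact keyframe messages, and the ground station keeps one map per MAV, merging maps when place recognition detects an overlap.

```python
# Hypothetical skeleton of the distributed-preprocessor architecture
# (names and structure are illustrative, not the paper's code).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KeyframeMsg:
    mav_id: int
    features: np.ndarray   # descriptors of selected keyframe features
    rel_pose: np.ndarray   # 4x4 relative pose since the last keyframe

@dataclass
class GroundStation:
    maps: dict = field(default_factory=dict)  # mav_id -> list of keyframes

    def on_keyframe(self, msg: KeyframeMsg):
        self.maps.setdefault(msg.mav_id, []).append(msg)
        for other_id in list(self.maps):      # copy: merge() mutates dict
            if other_id != msg.mav_id and self.detect_overlap(msg, other_id):
                self.merge(msg.mav_id, other_id)
                break

    def detect_overlap(self, msg, other_id):
        # Stand-in for appearance-based place recognition.
        return False

    def merge(self, a, b):
        # Align map b into map a's frame so both MAVs share one global
        # coordinate frame (the alignment itself is omitted here).
        self.maps[a].extend(self.maps.pop(b))
```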


Journal of Intelligent and Robotic Systems | 2011

Intuitive 3D Maps for MAV Terrain Exploration and Obstacle Avoidance

Stephan Weiss; Markus W. Achtelik; Laurent Kneip; Davide Scaramuzza; Roland Siegwart

Recent developments have shown that Micro Aerial Vehicles (MAVs) are now capable of autonomously taking off at one point and landing at another using only a single camera as exteroceptive sensor. During the flight and landing phases, however, the MAV and its user have little knowledge of the surrounding terrain and potential obstacles. In this paper we present a new solution for real-time dense 3D terrain reconstruction. This can be used for efficient unmanned MAV terrain exploration and yields a solid base for standard autonomous obstacle avoidance algorithms and path planners. Our approach is based on a textured 3D mesh built on sparse 3D point features of the scene. We use the same feature points to localize and control the vehicle in 3D space as we do for building the 3D terrain reconstruction mesh. This enables us to reconstruct the terrain without significant additional cost and thus in real time. Experiments show that the MAV is easily guided through an unknown, GPS-denied environment. Obstacles are recognized in the iteratively built 3D terrain reconstruction and can thus be avoided.
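
An illustrative Python sketch of the core idea: reuse the sparse 3D map points already maintained for localization as mesh vertices, here via a 2D Delaunay triangulation over the ground plane. This is an assumption-laden reconstruction, not the paper's code.

```python
# Sketch: terrain mesh directly from sparse SLAM landmarks.
import numpy as np
from scipy.spatial import Delaunay

points3d = np.random.rand(200, 3)   # sparse map points (x, y, z); synthetic
tri = Delaunay(points3d[:, :2])     # triangulate over the x-y ground plane
mesh_faces = tri.simplices          # each row: indices of 3 mesh vertices
# Each face can then be textured from a keyframe image; because the
# vertices are the existing map points, the mesh is nearly free to build.
print(f"{len(points3d)} landmarks -> {len(mesh_faces)} mesh faces")
```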


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2012

Visual-inertial SLAM for a small helicopter in large outdoor environments

Markus W. Achtelik; Simon Lynen; Stephan Weiss; Laurent Kneip; Margarita Chli; Roland Siegwart

In this video, we present our latest results towards fully autonomous flights with a small helicopter. Using a monocular camera as the only exteroceptive sensor, we fuse inertial measurements to achieve a self-calibrating power-on-and-go system, able to perform autonomous flights in previously unknown, large, outdoor spaces. Our framework achieves Simultaneous Localization And Mapping (SLAM) with previously unseen robustness in onboard aerial navigation for small platforms with natural restrictions on weight and computational power. We demonstrate successful operation in flights with altitudes between 0.2 and 70 m, trajectories of 350 m length, as well as dynamic maneuvers with track speeds of 2 m/s. All flights shown are performed autonomously using vision in the loop, with only high-level waypoints given as directions.


IEEE International Conference on Robotics and Automation (ICRA) | 2011

Closed-form solution for absolute scale velocity determination combining inertial measurements and a single feature correspondence

Laurent Kneip; Agostino Martinelli; Stephan Weiss; Davide Scaramuzza; Roland Siegwart

This paper presents a closed-form solution for metric velocity estimation of a single camera using inertial measurements. It combines accelerometer and attitude measurements with feature observations in order to compute both the distance to the feature and the speed of the camera in the camera frame. Notably, we show that this is possible using just three consecutive camera positions and a single feature correspondence. Our approach represents a compact linear and multi-rate solution for estimating information complementary to regular essential-matrix computation, namely the scale of the problem. The algorithm is thoroughly validated on simulated and real data, and the conditions required for good-quality results are identified.
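
A hedged reconstruction of why a compact linear solution is plausible, inferred from the abstract rather than taken from the paper:

```latex
% Reconstruction from the abstract (not the paper's own derivation).
% p_k: camera position at time t_k (with t_1 = 0); v: unknown velocity;
% s_k: double-integrated accelerometer signal, known once the IMU
% attitude is used to express accelerations in a common frame;
% f_k: known bearing of the feature; \lambda_k: unknown feature depth.
\begin{align*}
  p_k &= p_1 + v\,t_k + s_k, & k &= 2, 3,\\
  X   &= p_k + \lambda_k f_k & &\text{(same feature in all three views)}\\
  \Rightarrow\quad
  \lambda_1 f_1 - \lambda_k f_k - v\,t_k &= s_k, & k &= 2, 3.
\end{align*}
% The two vector equations give 6 scalar constraints for the 6 unknowns
% (\lambda_1, \lambda_2, \lambda_3 and the 3 components of v): a small
% linear system, solvable in closed form for the metric velocity.
```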


European Conference on Computer Vision (ECCV) | 2014

UPnP: An Optimal O(n) Solution to the Absolute Pose Problem with Universal Applicability

Laurent Kneip; Hongdong Li; Yongduek Seo

A large number of absolute pose algorithms have been presented in the literature. Common performance criteria are computational complexity, geometric optimality, global optimality, structural degeneracies, and the number of solutions. The ability to handle minimal sets of correspondences, resulting solution multiplicity, and generalized cameras are further desirable properties. This paper presents the first PnP solution that unifies all the above desirable properties within a single algorithm. We compare our result to state-of-the-art minimal, non-minimal, central, and non-central PnP algorithms, and demonstrate universal applicability, competitive noise resilience, and superior computational efficiency. Our algorithm is called Unified PnP (UPnP).

Collaboration


Dive into Laurent Kneip's collaborations.

Top Co-Authors

Hongdong Li (Australian National University)

Lars Petersson (Australian National University)