Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Roland Siegwart is active.

Publications


Featured research published by Roland Siegwart.


International Conference on Computer Vision | 2011

BRISK: Binary Robust Invariant Scalable Keypoints

Stefan Leutenegger; Margarita Chli; Roland Siegwart

Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms, which exhibit great performance under a variety of image transformations, with SURF in particular considered the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high-quality performance on par with state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in some cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.
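
OpenCV ships an implementation of BRISK, so the detect-describe-match pipeline described above can be exercised in a few lines. A minimal sketch, assuming two grayscale images whose file names are placeholders:

import cv2

# Load two views of the same scene (placeholder file names).
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

brisk = cv2.BRISK_create()                       # default threshold and octaves
kp1, desc1 = brisk.detectAndCompute(img1, None)  # 512-bit binary descriptors
kp2, desc2 = brisk.detectAndCompute(img2, None)

# Binary descriptors are matched by Hamming distance, which is what makes
# BRISK matching so much cheaper than SIFT/SURF's floating-point distances.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches")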


Intelligent Robots and Systems | 2007

Full control of a quadrotor

Samir Bouabdallah; Roland Siegwart

The research on autonomous miniature flying robots has intensified considerably thanks to the recent growth of civil and military interest in unmanned aerial vehicles (UAVs). This paper summarizes the final results of the modeling and control parts of the OS4 project, which focused on the design and control of a quadrotor. It introduces a simulation model which takes into account the variation of the aerodynamical coefficients due to vehicle motion. The control parameters found with this model are successfully used on the helicopter without re-tuning. The last part of this paper describes the control approach (integral backstepping) and the scheme we propose for full control of quadrotors (attitude, altitude and position). Finally, the results of autonomous take-off, hover, landing and collision avoidance are presented.
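
To make the integral backstepping idea concrete, here is a minimal sketch for the altitude axis alone, assuming simplified near-hover dynamics z'' = U/m - g; the gains, mass, and setpoint are illustrative, not the paper's values:

m, g, dt = 0.65, 9.81, 0.002          # mass [kg], gravity [m/s^2], step [s]
c1, c2, lam = 3.0, 4.0, 1.0           # backstepping gains and integral gain
z, zdot, chi = 0.0, 0.0, 0.0          # altitude, climb rate, integral of error
z_d = 1.0                             # desired altitude [m]

for _ in range(5000):                 # simulate 10 s
    e1 = z_d - z                      # position tracking error
    e2 = c1 * e1 + lam * chi - zdot   # deviation from the virtual velocity
    # Integral backstepping law for a double integrator (setpoint regulation):
    u = (1.0 + lam - c1**2) * e1 + (c1 + c2) * e2 - c1 * lam * chi
    U1 = m * (g + u)                  # total thrust command
    zdot += (U1 / m - g) * dt         # integrate the simplified dynamics
    z += zdot * dt
    chi += e1 * dt

print(f"altitude after 10 s: {z:.3f} m")  # settles near z_d

The integral term chi is what lets the controller reject steady disturbances (e.g. battery sag or payload changes) without re-tuning, which is the practical point of the integral backstepping scheme.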


Science | 2007

Social Integration of Robots into Groups of Cockroaches to Control Self-Organized Choices

José Halloy; Grégory Sempo; Gilles Caprari; Colette Rivault; Masoud Asadpour; Fabien Tâche; Imen Saïd; Virginie Durier; Stéphane Canonge; Jean-Marc Amé; Claire Detrain; Nikolaus Correll; Alcherio Martinoli; Francesco Mondada; Roland Siegwart; Jean-Louis Deneubourg

Collective behavior based on self-organization has been shown in group-living animals from insects to vertebrates. These findings have stimulated engineers to investigate approaches for the coordination of autonomous multirobot systems based on self-organization. In this experimental study, we show collective decision-making by mixed groups of cockroaches and socially integrated autonomous robots, leading to shared shelter selection. Individuals, natural or artificial, are perceived as equivalent, and the collective decision emerges from nonlinear feedbacks based on local interactions. Even when in the minority, robots can modulate the collective decision-making process and produce a global pattern not observed in their absence. These results demonstrate the possibility of using intelligent autonomous devices to study and control self-organized behavioral patterns in group-living animals.


International Conference on Robotics and Automation | 2010

Vision based MAV navigation in unknown and unstructured environments

Michael Blösch; Stephan Weiss; Davide Scaramuzza; Roland Siegwart

Within the research on Micro Aerial Vehicles (MAVs), the field of flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision-based approach, where the vehicle is localized using a downward-looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera while simultaneously building an incremental map of the surrounding region. Based on this pose estimate, an LQG/LTR-based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers such as take-off, hovering, setpoint following, and landing possible. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons) using a single camera as its only exteroceptive sensor.
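
The paper's controller is an LQG/LTR design; as a simplified stand-in, the sketch below computes a plain LQR state-feedback gain for one translational axis, modeled as a double integrator driven by a small commanded tilt angle (all weights are illustrative):

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],      # state: [position, velocity]
              [0.0, 0.0]])
B = np.array([[0.0],
              [9.81]])         # small-angle tilt -> lateral acceleration g*theta
Q = np.diag([10.0, 1.0])       # penalize position and velocity errors
R = np.array([[5.0]])          # penalize aggressive attitude commands

P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)        # u = -K @ x stabilizes the setpoint
print("LQR gain:", K)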


Journal of Field Robotics | 2011

Monocular-SLAM–based navigation for autonomous micro helicopters in GPS-denied environments

Stephan Weiss; Davide Scaramuzza; Roland Siegwart

Autonomous micro aerial vehicles (MAVs) will soon play a major role in tasks such as search and rescue, environment monitoring, surveillance, and inspection. They allow us to easily access environments that neither humans nor other vehicles can reach, reducing the risk for both people and the environment. For the above applications, however, it is a requirement that the vehicle is able to navigate without using GPS, relying on a preexisting map, or making specific assumptions about the environment. This allows operations in unstructured, unknown, and GPS-denied environments. We present a novel solution for the task of autonomous navigation of a micro helicopter through a completely unknown environment using solely a single camera and inertial sensors onboard. Many existing solutions suffer from drift in the xy plane or from dependency on a clean GPS signal. The novelty of the approach presented here is to use a monocular simultaneous localization and mapping (SLAM) framework to stabilize the vehicle in six degrees of freedom, thereby overcoming both the drift and the GPS dependency. The pose estimated by the visual SLAM algorithm is used in a linear optimal controller that allows us to perform all basic maneuvers such as hovering, setpoint and trajectory following, vertical takeoff, and landing. All calculations, including SLAM and control, run in real time and online while the helicopter is flying; no offline processing or preprocessing is done. We show real experiments demonstrating that the vehicle can fly autonomously in an unknown and unstructured environment. To the best of our knowledge, this work describes the first aerial vehicle that uses onboard monocular vision as its main sensor to navigate through an unknown, GPS-denied environment independently of any external artificial aids.


Autonomous Robots | 2013

Comparing ICP variants on real-world data sets

François Pomerleau; Francis Colas; Roland Siegwart; Stéphane Magnenat

Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because ICP has many variants whose performances depend on the environment and the sensor, hundreds of variations have been published. However, no comparison frameworks are available, leading to an arduous selection of an appropriate variant for particular experimental conditions. The first contribution of this paper is a protocol that allows for a comparison between ICP variants, taking into account a broad range of inputs. The second contribution is an open-source ICP library, which is fast enough to be usable in multiple real-world applications, while being modular enough to ease comparison of multiple solutions. This paper presents two examples of these field applications. The last contribution is the comparison of two baseline ICP variants using data sets that cover a rich variety of environments. Besides demonstrating the need for improved ICP methods for natural, unstructured and information-deprived environments, these baseline variants also provide a solid basis to which novel solutions can be compared. The combination of our protocol, software, and baseline results demonstrates convincingly how open-source software can push forward the research in mapping and navigation.
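
The open-source library the paper describes is libpointmatcher; independent of it, the core point-to-point ICP iteration is short enough to sketch directly in NumPy (closest-point association followed by a closed-form SVD alignment step):

import numpy as np
from scipy.spatial import cKDTree

def icp(src, ref, iters=30):
    """Align src (Nx3) to ref (Mx3); return a 4x4 homogeneous transform."""
    T = np.eye(4)
    tree = cKDTree(ref)                       # for closest-point queries
    pts = src.copy()
    for _ in range(iters):
        _, idx = tree.query(pts)              # associate closest points
        matched = ref[idx]
        mu_p, mu_m = pts.mean(0), matched.mean(0)
        H = (pts - mu_p).T @ (matched - mu_m) # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                        # optimal rotation (Kabsch)
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_p
        pts = pts @ R.T + t                   # apply the incremental transform
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T

Real variants differ precisely in the pieces this sketch fixes, namely the matching strategy, the outlier filters, and the error metric, which is what makes a comparison protocol necessary.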


International Conference on Computer Vision Systems | 2006

A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion

Davide Scaramuzza; Agostino Martinelli; Roland Siegwart

In this paper, we present a flexible new technique for single viewpoint omnidirectional camera calibration. The proposed method only requires the camera to observe a planar pattern shown at a few different orientations. Either the camera or the planar pattern can be freely moved. No a priori knowledge of the motion is required, nor a specific model of the omnidirectional sensor. The only assumption is that the image projection function can be described by a Taylor series expansion whose coefficients are estimated by solving a two-step least-squares linear minimization problem. To test the proposed technique, we calibrated a panoramic camera having a field of view greater than 200° in the vertical direction, and we obtained very good results. To investigate the accuracy of the calibration, we also used the estimated omni-camera model in a structure from motion experiment. We obtained a 3D metric reconstruction of a scene from two highly distorted omnidirectional images by using image correspondences only. Compared with classical techniques, which rely on a specific parametric model of the omnidirectional camera, the proposed procedure is independent of the sensor, easy to use, and flexible.
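
The heart of the method is the polynomial imaging model: a pixel at (u, v), expressed relative to the distortion center, back-projects to the ray (u, v, f(rho)) with f a Taylor polynomial in rho = sqrt(u^2 + v^2). A minimal sketch with hypothetical coefficients (calibration would estimate them from the planar-pattern views):

import numpy as np

coeffs = [-180.0, 0.0, 1.2e-3, 0.0, 6.0e-9]   # a0..a4, placeholder values

def backproject(u, v):
    """Return the unit viewing direction for sensor-plane point (u, v)."""
    rho = np.hypot(u, v)
    f = sum(a * rho**k for k, a in enumerate(coeffs))
    ray = np.array([u, v, f])
    return ray / np.linalg.norm(ray)

print(backproject(120.0, -45.0))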


The International Journal of Robotics Research | 2015

Keyframe-based visual-inertial odometry using nonlinear optimization

Stefan Leutenegger; Simon Lynen; Michael Bosse; Roland Siegwart; Paul Timothy Furgale

Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while still being tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter; this competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
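
Schematically, the cost being minimized combines weighted visual and inertial residuals (notation paraphrased from the paper): reprojection errors e_r over cameras i, keyframes k, and visible landmarks j, plus inertial error terms e_s linking consecutive keyframes,

J(\mathbf{x}) = \sum_{i}\sum_{k}\sum_{j \in \mathcal{J}(i,k)} {\mathbf{e}_r^{i,j,k}}^{\top} \mathbf{W}_r^{i,j,k} \mathbf{e}_r^{i,j,k} + \sum_{k=1}^{K-1} {\mathbf{e}_s^{k}}^{\top} \mathbf{W}_s^{k} \mathbf{e}_s^{k}

where the W matrices are the information (inverse covariance) matrices of the residuals, and marginalization keeps the keyframe window bounded so the problem stays sparse and real-time.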


International Conference on Robotics and Automation | 2012

Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments

Stephan Weiss; Markus W. Achtelik; Simon Lynen; Margarita Chli; Roland Siegwart

The combination of visual and inertial sensors has proved very popular in robot navigation and, in particular, Micro Aerial Vehicle (MAV) navigation, due to the flexibility in weight, power consumption and cost it offers. At the same time, coping with the large latency between inertial and visual measurements and processing images in real time impose great research challenges. Most modern MAV navigation systems avoid tackling this explicitly by employing a ground station for off-board processing. In this paper, we propose a navigation algorithm for MAVs equipped with a single camera and an Inertial Measurement Unit (IMU) which is able to run onboard and in real time. The main focus here is on the proposed speed-estimation module, which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real time. The module is then used both during initialization and as a fall-back solution at tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended such that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by sole reliance on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, while we also show operation at 40 Hz on an onboard 1.6 GHz Atom computer.
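
As a heavily simplified one-dimensional illustration of the fusion idea (not the paper's full state vector), the EKF below predicts speed from IMU acceleration and corrects it with an un-scaled visual speed measurement, jointly estimating the metric speed v and the visual scale factor lam; all noise levels and the simulated data are made up:

import numpy as np

dt = 0.01
x = np.array([0.0, 1.0])              # state: [v, lam], initial scale guess 1.0
P = np.diag([1.0, 0.5])               # state covariance
Q = np.diag([0.05, 1e-6]) * dt        # process noise (scale is near-constant)
R_meas = 0.02                         # visual speed measurement variance

rng = np.random.default_rng(0)
v_true, lam_true = 0.0, 0.5
for k in range(2000):
    a = np.sin(0.01 * k)              # simulated IMU acceleration
    v_true += a * dt
    # Prediction: integrate IMU acceleration; the scale does not move.
    x[0] += a * dt
    P = P + Q                         # state-transition Jacobian is identity
    # Update: the camera observes speed only up to scale, z = lam * v.
    z = lam_true * v_true + rng.normal(0.0, np.sqrt(R_meas))
    H = np.array([x[1], x[0]])        # Jacobian of h(x) = lam * v
    S = H @ P @ H + R_meas
    K = P @ H / S
    x = x + K * (z - x[1] * x[0])
    P = (np.eye(2) - np.outer(K, H)) @ P

print(f"speed {x[0]:.3f} (true {v_true:.3f}), scale {x[1]:.3f} (true {lam_true})")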


IEEE Transactions on Robotics | 2008

Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles

Davide Scaramuzza; Roland Siegwart

In this paper, we describe a real-time algorithm for computing the ego-motion of a vehicle relative to the road. The algorithm uses as input only those images provided by a single omnidirectional camera mounted on the roof of the vehicle. The front ends of the system are two different trackers. The first one is a homography-based tracker that detects and matches robust scale-invariant features that most likely belong to the ground plane. The second one uses an appearance-based approach and gives high-resolution estimates of the rotation of the vehicle. This planar pose estimation method has been successfully applied to videos from an automotive platform. We give an example of camera trajectory estimated purely from omnidirectional images over a distance of 400 m. For performance evaluation, the estimated path is superimposed onto a satellite image. In the end, we use image mosaicing to obtain a textured 2-D reconstruction of the estimated path.
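
The appearance-based rotation estimate exploits the fact that, in an unwrapped (panoramic) omnidirectional image, a yaw rotation of the vehicle appears as a horizontal column shift. A minimal sketch recovering that shift by circular cross-correlation of column-intensity profiles (inputs are assumed to be unwrapped grayscale frames; the sign convention depends on the unwrapping direction):

import numpy as np

def yaw_shift_deg(pano_prev, pano_curr):
    """Estimate the yaw change between two unwrapped HxW panoramic frames."""
    p = pano_prev.mean(axis=0)        # column intensity profile, length W
    c = pano_curr.mean(axis=0)
    p, c = p - p.mean(), c - c.mean()
    # Circular cross-correlation computed via the FFT correlation theorem.
    corr = np.fft.ifft(np.fft.fft(p) * np.conj(np.fft.fft(c))).real
    shift = int(np.argmax(corr))
    w = p.size
    if shift > w // 2:
        shift -= w                    # wrap to a signed column shift
    return 360.0 * shift / w          # columns -> degrees of yaw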

Collaboration


Dive into Roland Siegwart's collaborations.

Top Co-Authors

Nicola Tomatis

École Polytechnique Fédérale de Lausanne


Gilles Caprari

École Polytechnique Fédérale de Lausanne
