Publication


Featured research published by Sebastian A. Scherer.


Journal of Intelligent and Robotic Systems | 2013

An Onboard Monocular Vision System for Autonomous Takeoff, Hovering and Landing of a Micro Aerial Vehicle

Shaowu Yang; Sebastian A. Scherer; Andreas Zell

In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad, which consists of the letter “H” surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. The 5 DOF pose is then estimated from the elliptic projection of the circle using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom, the yaw angle of the MAV, is estimated from the ellipse fitted to the letter “H”. The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs for autonomous flights of a quadrotor.
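
As an illustration of the detection front-end described above, the following is a minimal sketch (not the authors' implementation) that binarizes an image, fits ellipses to the pad's circle and to the letter “H” with OpenCV, and reads the yaw (up to the 180° symmetry of “H”) from the letter ellipse's orientation; the selection heuristics, thresholds and synthetic test image are assumptions.

    # Illustrative sketch only: detect the circle-and-"H" pad and fit ellipses (OpenCV 4.x API).
    import cv2
    import numpy as np

    def detect_landing_pad(gray):
        """Return (circle_ellipse, h_ellipse) as cv2.fitEllipse results, or None."""
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 5]
        if not ellipses:
            return None
        # Heuristic: the pad's circle is the largest ellipse in the image.
        circle = max(ellipses, key=lambda e: e[1][0] * e[1][1])
        cx, cy = circle[0]
        # Heuristic: the letter "H" is clearly smaller and roughly concentric with the circle.
        letters = [e for e in ellipses
                   if e[1][0] * e[1][1] < 0.5 * circle[1][0] * circle[1][1]
                   and np.hypot(e[0][0] - cx, e[0][1] - cy) < 0.3 * circle[1][0]]
        if not letters:
            return None
        letter_h = max(letters, key=lambda e: e[1][0] * e[1][1])
        return circle, letter_h

    if __name__ == "__main__":
        # Synthetic test image: black ring with an "H" inside on a white background.
        img = np.full((480, 640), 255, np.uint8)
        cv2.circle(img, (320, 240), 100, 0, 8)
        cv2.putText(img, "H", (275, 295), cv2.FONT_HERSHEY_SIMPLEX, 4, 0, 12)
        pad = detect_landing_pad(img)
        if pad is not None:
            circle, letter_h = pad
            # The yaw of the MAV (modulo the symmetry of "H") follows from letter_h[2].
            print("pad centre (px):", circle[0], "H orientation (deg):", letter_h[2])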


International Conference on Robotics and Automation | 2012

Using depth in visual simultaneous localisation and mapping

Sebastian A. Scherer; Daniel Dube; Andreas Zell

We present a method of utilizing depth information as provided by RGBD sensors for robust real-time visual simultaneous localisation and mapping (SLAM), by augmenting monocular visual SLAM to take depth data into account. Our implementation is based on the freely available software “Parallel Tracking and Mapping” (PTAM) by Georg Klein. Our modifications allow PTAM to be used as a 6D visual SLAM system even without any additional information from odometry or an inertial measurement unit.
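
The core benefit of adding depth is metric scale. A minimal sketch of this idea, using a generic pinhole back-projection with made-up intrinsics (not the paper's PTAM integration):

    import numpy as np

    def backproject(u, v, depth, fx, fy, cx, cy):
        """Back-project pixel (u, v) with a metric depth reading into the camera frame.
        Unlike pure monocular triangulation, the RGBD depth fixes the scale directly."""
        z = float(depth)
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    # Example with made-up intrinsics (not values from the paper):
    print(backproject(400.0, 300.0, depth=1.25, fx=525.0, fy=525.0, cx=319.5, cy=239.5))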


Journal of Intelligent and Robotic Systems | 2014

Autonomous Landing of MAVs on an Arbitrarily Textured Landing Site Using Onboard Monocular Vision

Shaowu Yang; Sebastian A. Scherer; Konstantin Schauwecker; Andreas Zell

This paper presents a novel solution for micro aerial vehicles (MAVs) to autonomously search for and land on an arbitrary landing site using real-time monocular vision. The autonomous MAV is provided with only a single reference image of the landing site, of unknown size, before initiating this task. To search for such landing sites, we extend a well-known monocular visual SLAM algorithm that enables autonomous navigation of the MAV in unknown environments. Furthermore, a multi-scale ORB feature based method is implemented and integrated into the SLAM framework for landing site detection. We use a RANSAC-based method to locate the landing site within the map of the SLAM system, taking advantage of the map points associated with the detected landing site. We demonstrate the efficiency of the presented vision system in autonomous flights, both indoors and in challenging outdoor environments.
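
The ORB-plus-RANSAC detection step could look roughly like the following OpenCV sketch; it is a generic illustration under assumed parameters, not the authors' integration into the SLAM framework.

    import cv2
    import numpy as np

    def locate_site(reference_img, frame_img, min_matches=15):
        """Match ORB features between the reference image of the site and the current
        frame, estimate a homography with RANSAC, and return the site's corner
        positions in the frame, or None if it is not found."""
        orb = cv2.ORB_create(nfeatures=1000)   # ORB detects over an image pyramid,
                                               # which gives the multi-scale behaviour
        kp_r, des_r = orb.detectAndCompute(reference_img, None)
        kp_f, des_f = orb.detectAndCompute(frame_img, None)
        if des_r is None or des_f is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_r, des_f)
        if len(matches) < min_matches:
            return None
        src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        h, w = reference_img.shape[:2]
        corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(corners, H)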


AMS | 2012

Markerless Visual Control of a Quad-Rotor Micro Aerial Vehicle by Means of On-Board Stereo Processing

Konstantin Schauwecker; Nan Rosemary Ke; Sebastian A. Scherer; Andreas Zell

We present a quad-rotor micro aerial vehicle (MAV) that is capable of flying and navigating autonomously in an unknown environment. The only sensory inputs used by the MAV are the imagery from two cameras in a stereo configuration and data from an inertial measurement unit. We apply a fast sparse stereo matching algorithm in combination with a visual odometry method based on PTAM to estimate the current MAV pose, which we require for autonomous control. All processing is performed on a single-board computer on board the MAV. To our knowledge, this is the first MAV that uses stereo vision for navigation and does not rely on visual markers or off-board processing. In a flight experiment, the MAV was capable of hovering autonomously, and it was able to estimate its current position at a rate of 29 Hz and with an average error of only 2.8 cm.
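
For the stereo part, metric depth follows from disparity, focal length and baseline; a minimal sketch with placeholder numbers (not the MAV's actual calibration):

    import numpy as np

    def stereo_depth(disparity_px, focal_px, baseline_m):
        """Depth of a matched feature from its stereo disparity: Z = f * B / d."""
        d = np.asarray(disparity_px, dtype=float)
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

    # Illustrative numbers only: a 10-pixel disparity with f = 450 px and an
    # 11 cm baseline corresponds to roughly 4.95 m depth.
    print(stereo_depth(10.0, focal_px=450.0, baseline_m=0.11))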


Intelligent Robots and Systems | 2013

Efficient onboard RGBD-SLAM for autonomous MAVs

Sebastian A. Scherer; Andreas Zell

We present a computationally inexpensive RGBD-SLAM solution tailored to the application on autonomous MAVs, which enables our MAV to fly in an unknown environment and create a map of its surroundings completely autonomously, with all computations running on its onboard computer. We achieve this by implementing efficient methods both for tracking the current camera location with respect to a previously seen, heavily processed RGBD image (keyframe) and for relative registration of a set of keyframes using bundle adjustment with depth constraints as a front-end for pose graph optimization. We demonstrate the accuracy and efficiency of our system on a public benchmark dataset and show that the proposed method enables our quadrotor to fly autonomously.
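
Bundle adjustment with depth constraints can be pictured through its per-observation residual: a 2-D reprojection error extended by a weighted depth term. The sketch below is a generic formulation under assumed notation and weighting, not the paper's exact cost function.

    import numpy as np

    def point_residual(p_world, R_cw, t_cw, obs_uv, obs_depth, fx, fy, cx, cy, w_depth=1.0):
        """Residual of one map point in one keyframe: 2-D reprojection error plus a
        weighted depth error, the kind of constraint added when depth measurements
        are included in bundle adjustment (generic sketch)."""
        p_cam = R_cw @ p_world + t_cw              # point in the keyframe's camera frame
        u = fx * p_cam[0] / p_cam[2] + cx
        v = fy * p_cam[1] / p_cam[2] + cy
        return np.array([u - obs_uv[0],
                         v - obs_uv[1],
                         w_depth * (p_cam[2] - obs_depth)])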


Robotics and Biomimetics | 2013

Multi-class fruit classification using RGB-D data for indoor robots

Lixing Jiang; Artur Koch; Sebastian A. Scherer; Andreas Zell

In this paper we present an effective and robust system to classify fruits under varying pose and lighting conditions, tailored for an object recognition system on a mobile platform. To this end, we present results on the effectiveness of our underlying segmentation method, which uses RGB as well as depth cues, for the specific technical setup of our robot. A combination of RGB low-level visual feature descriptors and 3D geometric properties is used to retrieve complementary object information for the classification task. The unified approach is validated using two multi-class RGB-D fruit categorization datasets. Experimental results compare different feature sets and classification methods and highlight the effectiveness of the proposed features using a Random Forest classifier.
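
A rough sketch of such a feature-plus-Random-Forest pipeline with scikit-learn follows; the colour histogram and depth statistics are generic stand-ins for the paper's descriptors, and the training data below are random placeholders, not the paper's datasets.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def features(rgb_patch, depth_patch):
        """Concatenate a coarse colour histogram with simple geometric depth statistics."""
        hist, _ = np.histogramdd(rgb_patch.reshape(-1, 3),
                                 bins=(4, 4, 4), range=((0, 256),) * 3)
        hist = hist.ravel() / max(rgb_patch.shape[0] * rgb_patch.shape[1], 1)
        d = depth_patch[np.isfinite(depth_patch)]
        geom = np.array([d.mean(), d.std(), d.max() - d.min()]) if d.size else np.zeros(3)
        return np.concatenate([hist, geom])

    # Hypothetical training run on random patches (placeholders for segmented fruit regions):
    rng = np.random.default_rng(0)
    X = np.stack([features(rng.integers(0, 256, (32, 32, 3)), rng.uniform(0.3, 1.0, (32, 32)))
                  for _ in range(20)])
    y = rng.integers(0, 3, 20)          # three hypothetical fruit classes
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)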


International Conference on Unmanned Aircraft Systems | 2013

Onboard monocular vision for landing of an MAV on a landing site specified by a single reference image

Shaowu Yang; Sebastian A. Scherer; Konstantin Schauwecker; Andreas Zell

This paper presents a real-time monocular vision solution for MAVs to autonomously search for and land on an arbitrary landing site. The autonomous MAV is provided with only a single reference image of the landing site, of unknown size, before initiating this task. To search for such landing sites, we extend a well-known visual SLAM algorithm that enables autonomous navigation of the MAV in unknown environments. A multi-scale ORB feature based method is implemented and integrated into the SLAM framework for landing site detection. We use a RANSAC-based method to locate the landing site within the map of the SLAM system, taking advantage of the map points associated with the detected landing site. We demonstrate the efficiency of the presented vision system in autonomous flight and compare its accuracy with ground truth data provided by an external tracking system.
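
The map-based localisation of the site can be pictured as a RANSAC fit over the map points associated with the detected site; the numpy sketch below fits a plane with made-up thresholds and is only an illustration of that kind of step.

    import numpy as np

    def ransac_plane(points, iters=200, inlier_dist=0.03, seed=0):
        """Fit a plane n . x + d = 0 to 3-D map points with RANSAC.
        Returns (normal, d, inlier_mask) for the sample with most inliers."""
        rng = np.random.default_rng(seed)
        best_n, best_d, best_mask = None, None, np.zeros(len(points), dtype=bool)
        for _ in range(iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            if np.linalg.norm(n) < 1e-9:
                continue                                  # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            d = -n @ sample[0]
            mask = np.abs(points @ n + d) < inlier_dist
            if mask.sum() > best_mask.sum():
                best_n, best_d, best_mask = n, d, mask
        return best_n, best_d, best_mask

    # The landing-site position in the map could then be taken as the centroid of the
    # inlier map points, e.g. points[best_mask].mean(axis=0), with the plane normal
    # giving the site's orientation.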


Advances in Intelligent Systems and Computing | 2016

Robust onboard visual SLAM for autonomous MAVs

Shaowu Yang; Sebastian A. Scherer; Andreas Zell

This paper presents a visual simultaneous localization and mapping (SLAM) system consisting of a robust visual odometry and an efficient back-end with loop-closure detection and pose-graph optimization (PGO). Robustness of the visual odometry is achieved by utilizing two cameras mounted on a micro aerial vehicle (MAV), pointing in different directions with no overlap in their respective fields of view. The theory behind this dual-camera visual odometry can easily be extended to applications with multiple cameras. The back-end of the SLAM system maintains a keyframe-based global map, which is used for loop-closure detection. An adaptive-window PGO method is proposed to refine keyframe poses of the global map and thus correct the pose drift that is inherent in the visual odometry. The position of each map point is then refined implicitly, owing to its representation relative to its source keyframe. We demonstrate the efficiency of the proposed visual SLAM algorithm for applications onboard MAVs in experiments with both autonomous and manual flights. The pose tracking results are compared with ground truth data provided by an external tracking system.
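
The relative map-point representation can be sketched as follows: each point is stored in its source keyframe's coordinates and only re-expressed in the world frame on demand, so a PGO update to the keyframe pose refines the point implicitly. The 4x4 homogeneous-transform convention and the numbers are a toy example, not the paper's data structures.

    import numpy as np

    def to_world(T_world_kf, p_kf):
        """Re-express a point stored in its source keyframe's frame in the world frame."""
        return T_world_kf[:3, :3] @ p_kf + T_world_kf[:3, 3]

    p_kf = np.array([0.2, 0.0, 1.5])                 # stored relative to the keyframe
    T_before = np.eye(4)                             # keyframe pose before PGO
    T_after = np.eye(4)
    T_after[:3, 3] = [0.05, -0.02, 0.0]              # hypothetical PGO correction
    # Same stored coordinates, refined world position after the keyframe pose update:
    print(to_world(T_before, p_kf), to_world(T_after, p_kf))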


European Conference on Mobile Robots | 2013

Loop closure detection using depth images

Sebastian A. Scherer; Alina Kloss; Andreas Zell

We investigate whether loop closure detection using depth images is feasible with currently available depth features. To this end, we collected a benchmark dataset consisting of 15 log files with several loops in various environments, implemented a modular and easily extensible loop closure detector, and used it to evaluate the adequacy of state-of-the-art depth features on our benchmark dataset. To allow for a fair comparison, we determined the best values for the sometimes large number of user-chosen parameters using a large-scale grid search. Since our benchmark dataset contains both depth and RGB images, we can compare the performance achieved using depth features with that achieved using intensity image features.
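
The evaluation machinery can be sketched as a simple descriptor-distance detector whose user-chosen parameters are tuned by exhaustive grid search. The parameters, descriptors and ground-truth loops below are placeholders; descriptor extraction itself is out of scope here.

    import itertools
    import numpy as np

    def detect_loops(descriptors, dist_thresh, min_gap):
        """Flag frame pairs (i, j) whose depth-feature descriptors are closer than
        dist_thresh, ignoring temporally adjacent frames (j - i < min_gap)."""
        loops = set()
        for i, j in itertools.combinations(range(len(descriptors)), 2):
            if j - i >= min_gap and np.linalg.norm(descriptors[i] - descriptors[j]) < dist_thresh:
                loops.add((i, j))
        return loops

    def grid_search(descriptors, true_loops, thresholds, gaps):
        """Exhaustively evaluate parameter combinations and keep the best F1 score."""
        best = (0.0, None)
        for t, g in itertools.product(thresholds, gaps):
            found = detect_loops(descriptors, t, g)
            tp = len(found & true_loops)
            precision = tp / len(found) if found else 0.0
            recall = tp / len(true_loops) if true_loops else 0.0
            f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
            if f1 > best[0]:
                best = (f1, (t, g))
        return best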


International Conference on Unmanned Aircraft Systems | 2015

DCTAM: Drift-corrected tracking and mapping for autonomous micro aerial vehicles

Sebastian A. Scherer; Shaowu Yang; Andreas Zell

Visual odometry, especially using only a forward-looking camera, can be challenging: it is doomed to fail from time to time and will inevitably drift in the long run. We accept this fact and present methods to cope with and correct these effects for an autonomous MAV using an RGBD camera as its main sensor. We propose correcting drift and failure in visual odometry by combining its pose estimates with information about efficiently detected ground planes in the short term, and by running a full SLAM back-end that incorporates loop closures and ground plane measurements in pose graph optimization. We show that the system presented here achieves accurate results on several instances of the TUM RGB-D benchmark dataset while being computationally efficient enough to enable autonomous flight.
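
One ingredient, the ground-plane correction, can be sketched as recovering the camera's height and tilt from a plane n · x + d = 0 detected in the depth image. The camera-frame convention (+y nominally pointing down for a forward-looking camera) and the toy numbers are assumptions, not details from the paper.

    import numpy as np

    def ground_plane_correction(n_cam, d_cam):
        """Ground plane n . x + d = 0 detected in the camera frame (unit normal).
        Returns the camera's height above the plane and its tilt away from level,
        assuming the camera's +y axis nominally points down."""
        n = np.asarray(n_cam, float) / np.linalg.norm(n_cam)
        height = abs(d_cam)                                # |d| with a unit normal
        tilt = float(np.arccos(np.clip(abs(n[1]), 0.0, 1.0)))  # angle between n and the y axis
        return height, tilt

    # Toy numbers: a level, forward-looking camera 1.2 m above the ground
    # (height 1.2 m, zero tilt).
    print(ground_plane_correction([0.0, -1.0, 0.0], 1.2))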

Collaboration


Dive into Sebastian A. Scherer's collaborations.

Top Co-Authors

Andreas Zell, University of Tübingen
Shaowu Yang, National University of Defense Technology
Alina Kloss, University of Tübingen
Artur Koch, University of Tübingen
Daniel Dube, University of Tübingen
Lixing Jiang, University of Tübingen