Network


Latest external collaborations at the country level. Click a dot to dive into the details.

Hotspot


Dive into the research topics where Sebastian Scherer is active.

Publication


Featured research published by Sebastian Scherer.


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2015

VoxNet: A 3D Convolutional Neural Network for real-time object recognition

Daniel Maturana; Sebastian Scherer

Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.
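As a rough illustration of the volumetric input that VoxNet consumes (not the paper's implementation; the function name, grid size, and voxel resolution here are illustrative assumptions), a point cloud can be binned into a binary occupancy grid like so:

```python
import numpy as np

def voxelize(points, origin, voxel_size=0.1, grid_shape=(32, 32, 32)):
    """Bin 3D points into a binary occupancy grid (1 = occupied voxel)."""
    grid = np.zeros(grid_shape, dtype=np.uint8)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Keep only points that fall inside the grid bounds.
    mask = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx = idx[mask]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Two points land inside the 3.2 m cube; the third is out of range.
pts = np.array([[0.05, 0.05, 0.05], [1.55, 0.25, 0.35], [9.0, 9.0, 9.0]])
g = voxelize(pts, origin=np.zeros(3))
```

A grid like this (or the occupancy-probability variant the paper evaluates) is then fed directly to the 3D CNN as a dense tensor.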


The International Journal of Robotics Research | 2008

Flying Fast and Low Among Obstacles: Methodology and Experiments

Sebastian Scherer; Sanjiv Singh; Lyle Chamberlain; Mike Elgersma

Safe autonomous flight is essential for widespread acceptance of aircraft that must fly close to the ground. We have developed a method of collision avoidance that can be used in three dimensions in much the same way as autonomous ground vehicles that navigate over unexplored terrain. Safe navigation is accomplished by a combination of online environmental sensing, path planning and collision avoidance. Here we outline our methodology and report results with an autonomous helicopter that operates at low elevations in uncharted environments, some of which are densely populated with obstacles such as buildings, trees and wires. We have recently completed over 700 successful runs in which the helicopter traveled between coarsely specified waypoints separated by hundreds of meters, at speeds of up to 10 m/s at elevations of 5-11 m above ground level. The helicopter safely avoids large objects such as buildings and trees but also wires as thin as 6 mm. We believe this represents the first time an air vehicle has traveled this fast so close to obstacles. The collision avoidance method learns to avoid obstacles by observing the performance of a human operator.


IEEE International Conference on Robotics and Automation (ICRA) | 2007

Flying Fast and Low Among Obstacles

Sebastian Scherer; Sanjiv Singh; Lyle Chamberlain; Srikanth Saripalli

Safe autonomous flight is essential for widespread acceptance of aircraft that must fly close to the ground. We have developed a method of collision avoidance that can be used in three dimensions in much the same way as autonomous ground vehicles that navigate over unexplored terrain. Safe navigation is accomplished by a combination of online environmental sensing, path planning and collision avoidance. Here we report results with an autonomous helicopter that operates at low elevations in uncharted environments, some of which are densely populated with obstacles such as buildings, trees and wires. We have recently completed over 1000 successful runs in which the helicopter traveled between coarsely specified waypoints separated by hundreds of meters, at speeds of up to 10 m/s at elevations of 5-10 m above ground level. The helicopter safely avoids large objects such as buildings and trees but also wires as thin as 6 mm. We believe this represents the first time an air vehicle has traveled this fast so close to obstacles. Here we focus on the collision avoidance method that learns to avoid obstacles by observing the performance of a human operator.


IEEE International Conference on Robotics and Automation (ICRA) | 2015

3D Convolutional Neural Networks for landing zone detection from LiDAR

Daniel Maturana; Sebastian Scherer

We present a system for the detection of small and potentially obscured obstacles in vegetated terrain. The key novelty of this system is the coupling of a volumetric occupancy map with a 3D Convolutional Neural Network (CNN), which to the best of our knowledge has not been previously done. This architecture allows us to train an extremely efficient and highly accurate system for detection tasks from raw occupancy data. We apply this method to the problem of detecting safe landing zones for autonomous helicopters from LiDAR point clouds. Current methods for this problem rely on heuristic rules and use simple geometric features. These heuristics break down in the presence of low vegetation, as they do not distinguish between vegetation that may be landed on and solid objects that should be avoided. We evaluate the system with a combination of real and synthetic range data. We show our system outperforms various benchmarks, including a system integrating various hand-crafted point cloud features from the literature.


IEEE International Conference on Robotics and Automation (ICRA) | 2013

First results in detecting and avoiding frontal obstacles from a monocular camera for micro unmanned aerial vehicles

Tomoyuki Mori; Sebastian Scherer

Obstacle avoidance is desirable for lightweight micro aerial vehicles but is a challenging problem, since payload constraints permit only monocular cameras and obstacles cannot be directly observed. Depth can, however, be inferred from various cues in the image. Prior work has examined optical flow and perspective cues, but these methods cannot handle frontal obstacles well. In this paper we examine the problem of detecting obstacles directly in front of the vehicle. We developed a method to detect relative size changes of image patches that works even in the absence of optical flow. The method uses SURF feature matches in combination with template matching to compare relative obstacle sizes across different image spacings. We present results from our algorithm in autonomous flight tests on a small quadrotor. We are able to detect obstacles with a frame-to-frame enlargement of 120% with high confidence, and we confirmed our algorithm in 20 successful flight experiments. In future work, we will improve the control algorithms to avoid more complicated obstacle configurations.
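The core geometric idea, estimating how much an object has grown between frames from matched keypoints, can be sketched as follows. This is a simplified illustration, not the paper's SURF-plus-template-matching pipeline; the function name and the use of mean pairwise keypoint distance as the scale estimate are assumptions for the demo:

```python
import numpy as np

def expansion_ratio(kps_prev, kps_curr):
    """Estimate frame-to-frame enlargement of an approaching object from
    matched keypoints: the ratio of mean pairwise keypoint distances in
    the current frame versus the previous frame."""
    def mean_pairwise(p):
        d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
        return d[np.triu_indices(len(p), k=1)].mean()
    return mean_pairwise(kps_curr) / mean_pairwise(kps_prev)

# Three matched keypoints on a frontal obstacle; the obstacle appears
# 20% larger in the current frame as the vehicle approaches.
prev = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
curr = prev * 1.2
ratio = expansion_ratio(prev, curr)
```

A ratio persistently above 1 across frames signals a looming frontal obstacle even when lateral optical flow is near zero.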


Robotics and Autonomous Systems | 2012

Autonomous landing at unprepared sites by a full-scale helicopter

Sebastian Scherer; Lyle Chamberlain; Sanjiv Singh

Helicopters are valuable since they can land at unprepared sites; however, current unmanned helicopters are unable to select or validate landing zones (LZs) and approach paths. For operation in unknown terrain it is necessary to assess the safety of an LZ. In this paper, we describe a lidar-based perception system that enables a full-scale autonomous helicopter to identify and land in previously unmapped terrain with no human input. We describe the problem, real-time algorithms, perception hardware, and results. Our approach has extended the state of the art in terrain assessment by incorporating not only plane fitting, but also factors such as terrain/skid interaction, rotor and tail clearance, wind direction, clear approach/abort paths, and ground paths. In results from urban and natural environments we were able to successfully classify LZs from point cloud maps. We also present results from 8 successful landing experiments with varying ground clutter and approach directions. The helicopter selected its own landing site and approach, and then proceeded to land. To our knowledge, these experiments were the first demonstration of a full-scale autonomous helicopter that selected its own landing zones and landed.
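One ingredient the abstract names explicitly, plane fitting with a residual roughness measure, can be sketched in a few lines. This is a minimal least-squares version for illustration only; the paper's terrain assessment combines it with skid interaction, clearance, and approach-path checks that are not modeled here:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c over an (N, 3) point patch.
    Returns the coefficients (a, b, c) and an RMS roughness: the residual
    out-of-plane scatter along z, a simple flatness cue for an LZ."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return coeffs, np.sqrt(np.mean(residuals ** 2))

# A perfectly flat 5x5 patch tilted 10% in x: z = 0.1*x + 0.5.
xy = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
pts = np.c_[xy, 0.1 * xy[:, 0] + 0.5]
coeffs, rough = fit_plane(pts)
```

The recovered slope coefficients bound the landing attitude, and a high roughness value flags clutter or vegetation within the patch.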


IEEE International Conference on Robotics and Automation (ICRA) | 2013

Sparse Tangential Network (SPARTAN): Motion planning for micro aerial vehicles

Hugh Cover; Sanjiban Choudhury; Sebastian Scherer; Sanjiv Singh

Micro aerial vehicles operating outdoors must be able to maneuver through both dense vegetation and across empty fields. Existing approaches do not exploit the nature of such an environment. We have designed an algorithm which plans rapidly through free space and is efficiently guided around obstacles. In this paper we present SPARTAN (Sparse Tangential Network) as an approach to create a sparsely connected graph across a tangential surface around obstacles. We find that SPARTAN can navigate a vehicle autonomously through an outdoor environment producing plans 172 times faster than the state of the art (RRT*). As a result SPARTAN can reliably deliver safe plans, with low latency, using the limited computational resources of a lightweight aerial vehicle.
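The notion of a graph connected along tangential surfaces around obstacles can be illustrated in 2D with a single circular obstacle. This toy geometry is an assumption for illustration, not SPARTAN's 3D construction; the function name is made up:

```python
import math

def tangent_points(px, py, cx, cy, r):
    """Tangent points on a circular obstacle (center (cx, cy), radius r)
    as seen from an external point (px, py). These are the points where
    a sparse tangential graph would attach its edges."""
    dx, dy = cx - px, cy - py
    d = math.hypot(dx, dy)
    if d <= r:
        return []  # point inside or on the obstacle: no tangents exist
    alpha = math.acos(r / d)            # angle between center ray and tangent radius
    base = math.atan2(py - cy, px - cx)  # direction from center back to the point
    return [(cx + r * math.cos(base + s * alpha),
             cy + r * math.sin(base + s * alpha)) for s in (+1, -1)]

# Vehicle at the origin, obstacle of radius 5 m centered 10 m ahead.
pts = tangent_points(0.0, 0.0, 10.0, 0.0, 5.0)
```

Connecting only such tangent points (rather than densely sampling free space, as sampling-based planners do) is what keeps the graph sparse and the planning fast in mixed open/cluttered terrain.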


Journal of Field Robotics | 2006

Learning Obstacle Avoidance Parameters from Operator Behavior

Bradley Hamner; Sanjiv Singh; Sebastian Scherer

This paper concerns an outdoor mobile robot that learns to avoid collisions by observing a human driver operate a vehicle equipped with sensors that continuously produce a map of the local environment. We have implemented steering control that models human behavior in trying to avoid obstacles while trying to follow a desired path. Here we present the formulation for this control system and its independent parameters and then show how these parameters can be automatically estimated by observing a human driver. We also present results from operation on an autonomous robot as well as in simulation, and compare the results from our method to another commonly used learning method. We find that the proposed method generalizes well and is capable of learning from a small number of samples.
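The idea of recovering a steering controller's parameters from logged operator behavior reduces, in the simplest linear case, to a regression. The model below (a steering command as a weighted sum of a path-following error and an obstacle-repulsion term) and its feature names are illustrative assumptions, not the paper's controller:

```python
import numpy as np

# Hypothetical linear steering model:
#   command = k_path * path_error + k_obs * obstacle_force
# Given logged features and the operator's commands, recover the gains
# by least squares.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 2))     # [path_error, obstacle_force] per timestep
true_gains = np.array([0.8, -1.5])       # ground truth for this noise-free demo
commands = features @ true_gains         # operator's logged steering commands
est_gains, *_ = np.linalg.lstsq(features, commands, rcond=None)
```

With real driving logs the fit is noisy and the controller nonlinear, which is where the comparison against other learning methods in the paper comes in, but the estimation principle is the same.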


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2011

Perception for a river mapping robot

Andrew Chambers; Supreeth Achar; Stephen Nuske; Joern Rehder; Bernd Kitt; Lyle Chamberlain; Justin Haines; Sebastian Scherer; Sanjiv Singh

Rivers with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores with the specific intent of determining river width and canopy height. A complication in such riverine environments is that only intermittent GPS may be available depending on the thickness of the surrounding canopy. We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free operation, and create a three dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results along a 2 km loop of river using a surrogate system.


IEEE International Conference on Robotics and Automation (ICRA) | 2013

Infrastructure-free shipdeck tracking for autonomous landing

Sankalp Arora; Sezal Jain; Sebastian Scherer; Stephen Nuske; Lyle Chamberlain; Sanjiv Singh

Shipdeck landing is one of the most challenging tasks for a rotorcraft. Current autonomous rotorcraft use shipdeck-mounted transponders to measure the relative pose of the vehicle to the landing pad. This tracking system is not only expensive but renders an unequipped ship unlandable. We address the challenge of tracking a shipdeck without additional infrastructure on the deck. We present two methods, based on video and lidar, that are able to track the shipdeck starting at a considerable distance from the ship. This redundant sensor design gives us two independent tracking systems. We show the results of the tracking algorithms in three different environments: field-testing results on actual helicopter flights, simulation with a moving shipdeck for lidar-based tracking, and the laboratory using an occluded, moving scaled model of a landing deck for camera-based tracking. The complementary modalities allow shipdeck tracking under varying conditions.

Collaboration


Dive into Sebastian Scherer's collaboration.

Top Co-Authors (all Carnegie Mellon University)

Sanjiv Singh
Lyle Chamberlain
Sankalp Arora
Daniel Maturana
Stephen Nuske
Shichao Yang
Sezal Jain
Andrew Chambers
Hugh Cover