Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Roland Brockers is active.

Publication


Featured research published by Roland Brockers.


international conference on robotics and automation | 2014

Stereo vision-based obstacle avoidance for micro air vehicles using disparity space

Larry H. Matthies; Roland Brockers; Yoshiaki Kuwata; Stephan Weiss

We address obstacle avoidance for outdoor flight of micro air vehicles. The highly textured nature of outdoor scenes enables camera-based perception, which will scale to very small size, weight, and power with very wide, two-axis field of regard. In this paper, we use forward-looking stereo cameras for obstacle detection and a downward-looking camera as an input to state estimation. For obstacle representation, we use image space with the stereo disparity map itself. We show that a C-space-like obstacle expansion can be done with this representation and that collision checking can be done by projecting candidate 3-D trajectories into image space and performing a z-buffer-like operation with the disparity map. This approach is very efficient in memory and computing time. We do motion planning and trajectory generation with an adaptation of a closed-loop RRT planner to quadrotor dynamics and full 3D search. We validate the performance of the system with Monte Carlo simulations in virtual worlds and flight tests of a real quadrotor through a grove of trees. The approach is designed to support scalability to high speed flight and has numerous possible generalizations to use other polar or hybrid polar/Cartesian representations and to fuse data from additional sensors, such as peripheral optical flow or radar.
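
The z-buffer-like collision check is simple to sketch. The following is a minimal illustration, not the authors' flight code: the camera intrinsics, stereo baseline, and trajectory format are assumptions, and the C-space-like obstacle expansion is presumed to have already been applied to the disparity map.

```python
import numpy as np

def trajectory_collides(traj_points_cam, disparity_map, fx, fy, cx, cy, baseline):
    """Check candidate 3-D trajectory points (camera frame, meters) against
    a stereo disparity map, z-buffer style: a point is in collision if the
    scene disparity at its pixel is at least the point's own disparity,
    i.e., the scene surface lies at or in front of the point on that ray."""
    h, w = disparity_map.shape
    for X, Y, Z in traj_points_cam:
        if Z <= 0:
            continue                        # behind the camera
        u = int(round(fx * X / Z + cx))     # project into image space
        v = int(round(fy * Y / Z + cy))
        if not (0 <= u < w and 0 <= v < h):
            continue                        # outside the field of view
        if disparity_map[v, u] >= fx * baseline / Z:
            return True                     # obstacle at or closer than the point
    return False
```

Because each check is a projection plus one array lookup, whole batches of candidate trajectories can be tested without ever building a 3-D world model, which is the source of the memory and compute savings the abstract describes.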


computer vision and pattern recognition | 2014

Towards Autonomous Navigation of Miniature UAV

Roland Brockers; Martin Hummenberger; Stephan Weiss; Larry H. Matthies

Micro air vehicles such as miniature rotorcraft require high-precision, fast localization updates for their control, but cannot carry large payloads. Therefore, only small, lightweight sensors and processing units can be deployed on such platforms, favoring vision-based solutions that use lightweight cameras and run on small embedded computing platforms. In this paper, we propose a navigation framework that provides a small quadrotor UAV with accurate state estimation for high-speed control, including 6DoF pose and sensor self-calibration. Our method allows very fast deployment without prior calibration procedures, literally rendering the vehicle a throw-and-go platform. Additionally, we demonstrate hazard-avoiding autonomous landing to showcase a high-level navigation capability that relies on the low-level pose estimation results and is executed on the same embedded platform. We explain our hardware-specific implementation on a 12 g processing unit and show real-world end-to-end results.
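
The abstract does not spell out the filter design, but the state of a self-calibrating visual-inertial estimator of this kind can be sketched as below; the field names and layout are illustrative, not taken from the paper.

```python
import numpy as np
from dataclasses import dataclass, field

def vec3():
    return field(default_factory=lambda: np.zeros(3))

def unit_quat():
    return field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))

@dataclass
class SelfCalibratingState:
    """Illustrative state of a self-calibrating visual-inertial filter.
    Besides pose and velocity, the IMU biases, the metric scale of the
    visual map, and the camera-to-IMU extrinsics are estimated online,
    which is what removes the need for prior calibration procedures."""
    p_w_i: np.ndarray = vec3()       # IMU position in the world frame
    v_w_i: np.ndarray = vec3()       # IMU velocity in the world frame
    q_w_i: np.ndarray = unit_quat()  # attitude quaternion (world to IMU)
    b_g: np.ndarray = vec3()         # gyroscope bias
    b_a: np.ndarray = vec3()         # accelerometer bias
    scale: float = 1.0               # visual map scale factor
    p_i_c: np.ndarray = vec3()       # camera position in the IMU frame
    q_i_c: np.ndarray = unit_quat()  # camera-to-IMU rotation
```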


international conference on robotics and automation | 2011

Feature and pose constrained visual Aided Inertial Navigation for computationally constrained aerial vehicles

Brian C. Williams; Nicolas Hudson; Brent E. Tweddle; Roland Brockers; Larry H. Matthies

A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic, computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low-performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence that includes fast traversal, where visual features enter and exit the field of view quickly, as well as hover and ingress maneuvers, where drift-free navigation is achieved with respect to the environment.
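
The pose-augmentation step at the heart of such filters can be sketched with stochastic cloning. This is a generic illustration, not the FPC-EKF itself, and it omits the feature and vertical-edge states.

```python
import numpy as np

def clone_current_pose(x, P, pose_idx, pose_dim=6):
    """Append a copy of the current vehicle pose to the filter state so
    that later feature measurements can constrain the motion between poses.
    x : state vector, with the current pose at x[pose_idx:pose_idx+pose_dim]
    P : covariance matrix of x
    Returns the augmented (x, P) with consistent cross-covariances."""
    n = x.size
    # Jacobian of the augmented state w.r.t. the old one: identity rows,
    # plus a second copy of the rows that select the current pose.
    J = np.vstack([np.eye(n), np.eye(n)[pose_idx:pose_idx + pose_dim]])
    return J @ x, J @ P @ J.T
```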


intelligent robots and systems | 2013

4DoF drift free navigation using inertial cues and optical flow

Stephan Weiss; Roland Brockers; Larry H. Matthies

In this paper, we describe a novel approach to fusing optical flow with inertial cues (3D acceleration and 3D angular velocities) in order to navigate a Micro Aerial Vehicle (MAV) drift-free in 4DoF and in metric velocity. Our approach requires only two consecutive images with a minimum of three feature matches. It requires neither a (point) map nor any type of feature history. Thus, it is an inherently fail-safe approach that is immune to map and feature-track failures. With these minimal requirements, we show in real experiments that the system is able to navigate drift-free in all angles including yaw, in one metric position axis, and in 3D metric velocity. Furthermore, it is a power-on-and-go system able to self-calibrate online the inertial biases, the visual scale, and the full 6DoF extrinsic transformation parameters between camera and IMU.
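
Why a handful of matches suffices is easy to see once the rotation is taken from the gyroscopes: the epipolar constraint then becomes linear in the translation direction. A minimal sketch follows; it is not the paper's estimator, just the underlying geometry.

```python
import numpy as np

def translation_direction(p1, p2, R):
    """Translation direction from optical flow with known rotation.
    p1, p2 : (N, 3) normalized image rays in the previous/current frame
    R      : 3x3 rotation from frame 1 to frame 2 (e.g., integrated gyro)
    With R known, the epipolar constraint p2^T [t]x (R p1) = 0 turns into
    one linear equation (p2 x R p1)^T t = 0 per match, so a few matches
    pin down t up to scale; metric scale then comes from the IMU."""
    A = np.cross(p2, (R @ p1.T).T)   # one constraint row per match
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                       # null-space direction of A
    return t / np.linalg.norm(t)
```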


Proceedings of SPIE | 2012

Fully self-contained vision-aided navigation and landing of a micro air vehicle independent from external sensor inputs

Roland Brockers; Sara Susca; David Q. Zhu; Larry H. Matthies

Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points, without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down-looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a rooftop. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman-filter-based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data, correcting for high-latency SLAM inputs and increasing the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent of any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
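
The fusion idea, high-rate IMU prediction corrected by low-rate vision fixes, can be illustrated with a plain linear Kalman filter on position and velocity. This sketch ignores attitude, the latency compensation, and the sonar scale recovery, and all noise values are assumptions.

```python
import numpy as np

class VisionImuFuser:
    """Loosely coupled fusion sketch: propagate position and velocity with
    world-frame IMU acceleration at high rate, correct with low-rate visual
    position estimates. State x = [p, v]; noise values are illustrative."""

    def __init__(self, vision_var=0.05, accel_var=0.5):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.R = vision_var * np.eye(3)   # vision position noise
        self.accel_var = accel_var        # accelerometer noise

    def predict(self, acc_world, dt):
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)        # p += v * dt
        G = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
        self.x = F @ self.x + G @ acc_world
        self.P = F @ self.P @ F.T + self.accel_var * (G @ G.T)

    def update_vision(self, p_meas):
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (p_meas - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P
```

Running predict() at IMU rate and update_vision() whenever a SLAM pose arrives yields a high-rate, smoothed position estimate for the controller, which is the role the filter plays in the system described above.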


computer analysis of images and patterns | 2009

Cooperative Stereo Matching with Color-Based Adaptive Local Support

Roland Brockers

Color processing imposes a new constraint on stereo vision algorithms: the assumption of constant color on object surfaces, used to align local correlation windows with object boundaries, has significantly improved the accuracy of recent window-based stereo algorithms. While several algorithms have been presented that work with adaptive correlation windows defined by color similarity, only a few approaches use color-based grouping to optimize initially computed traditional matching scores. This paper introduces the concept of color-dependent adaptive support weights to the definition of local support areas in cooperative stereo methods, improving the accuracy of depth estimation at object borders.
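
The kind of color-dependent weight this line of work builds on can be sketched as below. The Lab color space and the gamma values are assumptions in the general style of adaptive-support-weight stereo, not the paper's exact formulation.

```python
import numpy as np

def support_weights(patch_lab, gamma_c=7.0, gamma_s=12.0):
    """Color-dependent adaptive support weights for a correlation window:
    pixels that are similar in color to the window center and spatially
    close to it get high weight, so the effective support area hugs the
    surface the center pixel lies on instead of straddling a depth edge.
    patch_lab : (H, W, 3) window in CIELab color space
    Returns an (H, W) weight map; gamma values are illustrative."""
    h, w, _ = patch_lab.shape
    center = patch_lab[h // 2, w // 2]
    dc = np.linalg.norm(patch_lab - center, axis=2)   # color distance
    yy, xx = np.mgrid[:h, :w]
    ds = np.hypot(yy - h // 2, xx - w // 2)           # spatial distance
    return np.exp(-dc / gamma_c - ds / gamma_s)
```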


Proceedings of SPIE | 2011

Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision.

Roland Brockers; Patrick Bouffard; Jeremy Ma; Larry H. Matthies; Claire J. Tomlin

Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
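
The planar-homography step can be reproduced with standard tools. Below is a sketch using OpenCV; the camera matrix K and the feature tracking are assumed given, and this is not the flight code.

```python
import cv2
import numpy as np

def plane_pose_candidates(pts1, pts2, K):
    """Estimate relative pose and plane normal from feature tracks on a
    planar surface (a rooftop, or the wall around a window) by homography
    decomposition.
    pts1, pts2 : (N, 2) float32 matched pixel coordinates in two frames
    K          : 3x3 camera intrinsic matrix"""
    H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 2.0)
    n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # The decomposition yields up to four (R, t, n) candidates; a system
    # like the one described must disambiguate, e.g. by requiring the
    # plane normal to face the camera and the tracked points to lie in
    # front of both views.
    return list(zip(Rs, ts, normals))
```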


robotics science and systems | 2014

Vision-based Landing Site Evaluation and Trajectory Generation Toward Rooftop Landing

Vishnu R. Desaraju; Nathan Michael; Martin Humenberger; Roland Brockers; Stephan Weiss; Larry H. Matthies

Autonomous landing is an essential function for micro air vehicles (MAVs) for many scenarios. We pursue an active perception strategy that enables MAVs with limited onboard sensing and processing capabilities to concurrently assess feasible rooftop landing sites with a vision-based perception system while generating trajectories that balance continued landing site assessment and the requirement to provide visual monitoring of an interest point. The contributions of the work are twofold: (1) a perception system that employs a dense motion stereo approach that determines the 3D model of the captured scene without the need of geo-referenced images, scene geometry constraints, or external navigation aids; and (2) an online trajectory generation approach that balances the need to concurrently explore available rooftop vantages of an interest point while ensuring confidence in the landing site suitability by considering the impact of landing site uncertainty as assessed by the perception system. Simulation and experimental evaluation of the performance of the perception and trajectory generation methodologies are analyzed independently and jointly in order to establish the efficacy of the proposed approach.
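
One ingredient of such a landing-site assessment can be sketched directly: fit a plane to the 3-D points of a candidate patch, then score its slope and roughness. The thresholds below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def landing_patch_feasible(points, max_slope_deg=10.0, max_rms_m=0.05):
    """Assess one candidate landing patch from a motion-stereo point cloud:
    fit a plane by SVD, then check its slope against gravity and the RMS
    roughness of the points about the plane.
    points : (N, 3) patch points in a gravity-aligned (z-up) frame"""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                       # direction of least variance
    if normal[2] < 0:
        normal = -normal                  # orient the normal upward
    slope_deg = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
    return slope_deg <= max_slope_deg and rms <= max_rms_m
```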


international conference on robotics and automation | 2016

Self-calibrating multi-sensor fusion with probabilistic measurement validation for seamless sensor switching on a UAV

Karol Hausman; Stephan Weiss; Roland Brockers; Larry H. Matthies; Gaurav S. Sukhatme

Fusing data from multiple sensors on board a mobile platform can significantly augment its state estimation abilities and enable autonomous traversal of different domains by adapting to changing signal availability. However, due to the need for accurate calibration and initialization of the sensor ensemble, as well as coping with erroneous measurements that are acquired at different rates with various delays, multi-sensor fusion still remains a challenge. In this paper, we introduce a novel multi-sensor fusion approach for agile aerial vehicles that allows for measurement validation and seamless switching between sensors based on statistical signal quality analysis. Moreover, it is capable of self-initialization of its extrinsic sensor states. These initialized states are maintained in the framework such that the system can continuously self-calibrate. We implement this framework on board a small aerial vehicle and demonstrate the effectiveness of the above capabilities on real data. As an example, we fuse GPS data, ultra-wideband (UWB) range measurements, visual pose estimates, and IMU data. Our experiments demonstrate that our system is able to seamlessly filter and switch between different sensor modalities at run time.
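
The measurement-validation idea can be illustrated with a standard innovation gate: a measurement is accepted only if its squared Mahalanobis distance falls inside a chi-square bound, and a sensor whose measurements keep failing the gate becomes a candidate for switching out. The gate probability below is an assumption.

```python
import numpy as np
from scipy.stats import chi2

def validate_measurement(z, z_pred, S, prob=0.997):
    """Statistical measurement validation before a filter update.
    z, z_pred : measurement and its filter prediction
    S         : innovation covariance
    Returns True if the normalized innovation lies inside the gate."""
    nu = z - z_pred
    d2 = nu @ np.linalg.solve(S, nu)        # squared Mahalanobis distance
    return d2 <= chi2.ppf(prob, df=z.size)  # chi-square validation gate
```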


Proceedings of SPIE | 2014

Micro air vehicle autonomous obstacle avoidance from stereo-vision

Roland Brockers; Yoshiaki Kuwata; Stephan Weiss; Lawrence Matthies

We introduce a new approach to on-board autonomous obstacle avoidance for micro air vehicles flying outdoors in close proximity to structure. Our approach uses inverse-range, polar-perspective stereo disparity maps for obstacle detection and representation, and deploys a closed-loop RRT planner that considers flight dynamics for trajectory generation. While motion planning is executed in 3D space, we reduce collision checking to a fast z-buffer-like operation in disparity space, which allows for a significant speed-up compared to full 3D methods. Evaluations in simulation illustrate the robustness of our approach, while real-world flights under tree canopy demonstrate its potential.
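
The expansion that lets the planner treat the vehicle as a point can be sketched as below: because apparent size scales with disparity, each obstacle pixel is dilated over a radius proportional to its own disparity. This is a brute-force illustration for clarity, not the paper's implementation.

```python
import numpy as np

def expand_obstacles(disparity, baseline, vehicle_radius):
    """C-space-like obstacle expansion directly in disparity space.
    A vehicle of radius R at depth Z subtends r_px = fx * R / Z pixels;
    with Z = fx * baseline / d this becomes r_px = R * d / baseline, so
    the dilation radius needs only the disparity d itself."""
    h, w = disparity.shape
    out = disparity.copy()
    for v in range(h):
        for u in range(w):
            d = disparity[v, u]
            if d <= 0:
                continue                  # invalid or very distant pixel
            r = int(np.ceil(vehicle_radius * d / baseline))
            v0, v1 = max(0, v - r), min(h, v + r + 1)
            u0, u1 = max(0, u - r), min(w, u + r + 1)
            out[v0:v1, u0:u1] = np.maximum(out[v0:v1, u0:u1], d)
    return out
```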

Collaboration


Dive into Roland Brockers's collaborations.

Top Co-Authors

Anthony T. Fragoso

California Institute of Technology

Martin Humenberger

Austrian Institute of Technology

Cevahir Cigla

California Institute of Technology

Jeremy Ma

Jet Propulsion Laboratory

Nathan Michael

Carnegie Mellon University

Yoshiaki Kuwata

Jet Propulsion Laboratory
