Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shaowu Yang is active.

Publications


Featured research published by Shaowu Yang.


Journal of Intelligent and Robotic Systems | 2013

An Onboard Monocular Vision System for Autonomous Takeoff, Hovering and Landing of a Micro Aerial Vehicle

Shaowu Yang; Sebastian A. Scherer; Andreas Zell

In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad, which consists of the letter “H” surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5-DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom, the yaw angle of the MAV, is estimated from an ellipse fitted to the letter “H”. The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs for autonomous flights of a quadrotor.
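
The ellipse-detection step at the heart of this pipeline can be sketched with standard tools. Below is a minimal illustration, assuming OpenCV, of fitting ellipses to the projected circle and to the letter “H”; the input file name is a placeholder, picking contours by area is a simplifying assumption, and the paper's actual 5-DOF recovery via projective geometry and IMU-based disambiguation is not reproduced.

```python
# Minimal sketch (not the authors' implementation): fit ellipses to the
# projected circle of the landing pad and to the letter "H", and read a
# yaw estimate from the "H" ellipse. The file name is a placeholder, and
# picking contours by area is a simplifying assumption.
import cv2

img = cv2.imread("landing_pad.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
contours = [c for c in contours if len(c) >= 5]   # fitEllipse needs >= 5 points
contours.sort(key=cv2.contourArea, reverse=True)

circle_ellipse = cv2.fitEllipse(contours[0])  # ((cx, cy), (w, h), angle_deg)
h_ellipse = cv2.fitEllipse(contours[1])       # assume next-largest blob is the "H"

# The paper recovers the 5-DOF pose from circle_ellipse via projective
# geometry and resolves the remaining ambiguity with the IMU gravity
# vector; here we only read out the yaw estimate, given by the orientation
# of the "H" ellipse up to the letter's 180-degree symmetry.
yaw_deg = h_ellipse[2]
print("circle center:", circle_ellipse[0], "yaw (deg, mod 180):", yaw_deg)
```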


Journal of Intelligent and Robotic Systems | 2014

Autonomous Landing of MAVs on an Arbitrarily Textured Landing Site Using Onboard Monocular Vision

Shaowu Yang; Sebastian A. Scherer; Konstantin Schauwecker; Andreas Zell

This paper presents a novel solution for micro aerial vehicles (MAVs) to autonomously search for and land on an arbitrary landing site using real-time monocular vision. The autonomous MAV is provided with only a single reference image of the landing site, of unknown size, before initiating this task. We extend a well-known monocular visual SLAM algorithm that enables autonomous navigation of the MAV in unknown environments, in order to search for such landing sites. Furthermore, a multi-scale ORB-feature-based method is implemented and integrated into the SLAM framework for landing site detection. We use a RANSAC-based method to locate the landing site within the map of the SLAM system, taking advantage of those map points associated with the detected landing site. We demonstrate the efficiency of the presented vision system in autonomous flights, both indoors and in a challenging outdoor environment.
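
The detection stage can be pictured with a standard ORB-plus-RANSAC matching pipeline. The sketch below, assuming OpenCV with placeholder file names, matches ORB features between the reference image and a camera frame and locates the landing-site outline with a RANSAC-estimated homography; the integration with the SLAM map described in the paper is not shown.

```python
# Minimal sketch of ORB-based landing-site detection with RANSAC
# (illustrative only; file names are placeholders, and the SLAM-map
# integration described in the paper is not shown).
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # landing-site image
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)     # current camera frame

orb = cv2.ORB_create(nfeatures=1000)    # ORB is multi-scale via its image pyramid
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_ref, des_frm)

src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the homography that
# maps the reference image onto its projection in the frame.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = ref.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
site = cv2.perspectiveTransform(corners, H)    # landing-site outline in frame
print("landing-site corners in frame:\n", site.reshape(-1, 2))
```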


International Conference on Unmanned Aircraft Systems | 2013

A cross-platform comparison of visual marker based approaches for autonomous flight of quadrocopters

Andreas Masselli; Shaowu Yang; Karl Engelbert Wenzel; Andreas Zell

In this paper, we compare three different marker-based approaches for six degrees of freedom (6DOF) pose estimation, which can be used for position and attitude control of micro aerial vehicles (MAVs). All methods are able to achieve real-time pose estimation onboard without the assistance of any external metric sensor. Since these methods can be used in various working environments, we compare their performance by carrying out experiments across two different platforms: an AscTec Hummingbird and a Pixhawk quadrocopter. We evaluate each method's accuracy by using an external tracking system and compare the methods with respect to their operating ranges and processing time. We finally compare each method's performance during autonomous takeoff, hovering and landing of a quadrocopter.
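
All three approaches share the same core computation: recovering a 6DOF pose from known marker geometry and its detected image projection. A generic sketch of that step using OpenCV's solvePnP follows; the intrinsics, marker size and detected corner pixels are made-up placeholder values, and none of the three specific methods from the paper is reproduced.

```python
# Generic marker-based 6DOF pose estimation sketch (not any of the three
# specific methods compared in the paper). Intrinsics, marker size and
# detected corner pixels are made-up placeholder values.
import cv2
import numpy as np

# 3D corners of a square marker of known side length (metres), marker frame.
s = 0.10
object_pts = np.array([[-s/2, -s/2, 0], [ s/2, -s/2, 0],
                       [ s/2,  s/2, 0], [-s/2,  s/2, 0]], dtype=np.float64)

# Corresponding pixel coordinates from a (hypothetical) marker detector.
image_pts = np.array([[310.0, 240.0], [402.0, 238.0],
                      [405.0, 330.0], [308.0, 333.0]], dtype=np.float64)

K = np.array([[525.0, 0.0, 320.0],       # placeholder pinhole intrinsics
              [0.0, 525.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)                        # assume undistorted images

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                # rotation matrix: camera <- marker
print("translation (m):", tvec.ravel())
print("rotation matrix:\n", R)
```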


International Conference on Unmanned Aircraft Systems | 2013

Onboard monocular vision for landing of an MAV on a landing site specified by a single reference image

Shaowu Yang; Sebastian A. Scherer; Konstantin Schauwecker; Andreas Zell

This paper presents a real-time monocular vision solution for MAVs to autonomously search for and land on an arbitrary landing site. The autonomous MAV is provided with only a single reference image of the landing site, of unknown size, before initiating this task. To search for such landing sites, we extend a well-known visual SLAM algorithm that enables autonomous navigation of the MAV in unknown environments. A multi-scale ORB-feature-based method is implemented and integrated into the SLAM framework for landing site detection. We use a RANSAC-based method to locate the landing site within the map of the SLAM system, taking advantage of those map points associated with the detected landing site. We demonstrate the efficiency of the presented vision system in autonomous flight, and compare its accuracy with ground truth data provided by an external tracking system.
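
One way to picture the RANSAC step over map points is a robust plane fit to the 3D points associated with the detected site. The NumPy sketch below is an illustrative stand-in under that assumption, not the paper's actual formulation.

```python
# Illustrative RANSAC plane fit to the 3D map points associated with a
# detected landing site (a stand-in for the paper's method, not a copy).
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=np.random.default_rng(0)):
    """points: (N, 3) SLAM map points. Returns (normal, d, inlier mask)."""
    best_inliers, best_model = None, None
    n_pts = len(points)
    for _ in range(iters):
        sample = points[rng.choice(n_pts, 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                     # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic demo: noisy points on the z = 0 plane plus a few outliers.
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(-1, 1, (100, 2)), rng.normal(0, 0.005, 100)]
pts[:10, 2] += 0.5                          # outliers off the plane
normal, d, inliers = ransac_plane(pts)
print("plane normal:", normal, "inliers:", inliers.sum())
```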


Robotics and Autonomous Systems | 2017

Multi-camera visual SLAM for autonomous navigation of micro aerial vehicles

Shaowu Yang; Sebastian Scherer; Xiaodong Yi; Andreas Zell

In this paper, we present a visual simultaneous localization and mapping (SLAM) system which integrates measurements from multiple cameras to achieve robust pose tracking for autonomous navigation of micro aerial vehicles (MAVs) in unknown complex environments. We analyze the iterative optimizations for pose tracking and map refinement of visual SLAM in multi-camera cases. The analysis ensures the soundness and accuracy of each optimization update. A well-known monocular visual SLAM system is extended to utilize two cameras with non-overlapping fields of view (FOVs) in the final implementation. The resulting visual SLAM system enables autonomous navigation of an MAV in complex scenarios. The theory behind this system can easily be extended to multi-camera configurations when the onboard computational capability allows it. For operations in large-scale environments, we modify the resulting visual SLAM system into a constant-time robust visual odometry. To form a full visual SLAM system, we further implement an efficient back-end for loop closing. The back-end maintains a keyframe-based global map, which is also used for loop-closure detection. An adaptive-window pose-graph optimization method is proposed to refine keyframe poses of the global map and thus correct the pose drift that is inherent in the visual odometry. We demonstrate the efficiency of the proposed visual SLAM system for applications onboard MAVs in experiments with both autonomous and manual flights. The pose tracking results are compared with ground truth data provided by an external tracking system.

Highlights: A SLAM system integrating measurements from multiple cameras for MAVs is proposed. No overlap in the respective fields of view of the multiple cameras is required. Robust pose tracking can be achieved in complex environments. A mathematical analysis of the iterative optimizations in visual SLAM is provided. The efficiency of the proposed visual SLAM system is demonstrated onboard MAVs.
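
The adaptive-window idea can be shown in miniature: optimize only the keyframe poses inside a window around a loop closure while holding earlier poses fixed. The SE(2) toy example below, using SciPy's least_squares, is synthetic throughout (poses, measurements and window choice are invented) and only illustrates the structure of such an optimization, not the paper's implementation.

```python
# Toy adaptive-window pose-graph optimization in SE(2) (illustrative only;
# all poses, measurements and the window choice below are synthetic).
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative(p_i, p_j):
    """Pose of j expressed in the frame of i, for poses (x, y, theta)."""
    dx, dy = p_j[:2] - p_i[:2]
    c, s = np.cos(p_i[2]), np.sin(p_i[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(p_j[2] - p_i[2])])

# Odometry chain of 6 keyframes (1 m forward per edge) plus one loop-closure
# edge measuring the true relative pose between keyframe 0 and keyframe 5.
odom = [(i, i + 1, np.array([1.0, 0.0, 0.0])) for i in range(5)]
loop = [(0, 5, np.array([5.0, 0.0, 0.0]))]
poses = np.array([[1.05 * i, 0.02 * i, 0.01 * i] for i in range(6)])  # drifted

window = [3, 4, 5]  # adaptive window: only these keyframe poses are refined

def residuals(x):
    p = poses.copy()
    p[window] = x.reshape(-1, 3)
    res = []
    for i, j, z in odom + loop:
        res.extend(relative(p[i], p[j]) - z)   # measurement residuals
    return np.array(res)

sol = least_squares(residuals, poses[window].ravel())
poses[window] = sol.x.reshape(-1, 3)
print("refined window poses:\n", poses[window])
```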


International Conference on Robotics and Automation | 2010

Camera parameters auto-adjusting technique for robust robot vision

Huimin Lu; Hui Zhang; Shaowu Yang; Zhiqiang Zheng

Making vision systems work robustly under dynamic lighting conditions is still a challenging research focus in the computer and robot vision community. In this paper, a novel camera parameter auto-adjusting technique based on image entropy is proposed. First, image entropy is defined and its relationship with camera parameters is verified experimentally. Then a method for optimizing the camera parameters based on image entropy is proposed to make robot vision adaptive to different lighting conditions. The algorithm is tested using the omnidirectional vision system in an indoor RoboCup Middle Size League environment and a perspective camera in an ordinary outdoor environment, and the results show that the method is effective and that color constancy can be achieved to some extent.
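
The underlying quantity is the Shannon entropy of the image's gray-level histogram. The sketch below computes it with OpenCV and runs a naive search over candidate exposure values; the camera interface (set_exposure, grab_frame) is hypothetical, and the paper's actual optimization strategy may differ.

```python
# Sketch of entropy-based camera parameter adjustment. The entropy follows
# the standard Shannon definition; the camera interface (set_exposure,
# grab_frame) is hypothetical and stands in for a real driver API.
import cv2
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of the gray-level histogram of an image."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def auto_adjust_exposure(camera, candidates):
    """Naive search: pick the exposure value maximizing image entropy."""
    best_exposure, best_entropy = None, -1.0
    for exposure in candidates:
        camera.set_exposure(exposure)                    # hypothetical API
        gray = cv2.cvtColor(camera.grab_frame(), cv2.COLOR_BGR2GRAY)
        h = image_entropy(gray)
        if h > best_entropy:
            best_exposure, best_entropy = exposure, h
    camera.set_exposure(best_exposure)
    return best_exposure, best_entropy
```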


International Conference on Information and Automation | 2009

Vision-based ball recognition for soccer robots without color classification

Huimin Lu; Hui Zhang; Shaowu Yang; Zhiqiang Zheng

Recognizing the FIFA ball without color classification is a significant ability for RoboCup Middle Size League soccer robots, allowing them to compete without the constraints of the current color-coded environment. This paper describes a novel method to recognize the ball using an omnidirectional vision system and a perspective camera without color classification. First, the imaging characteristics of the omnidirectional vision system are analyzed, leading to the conclusion that a ball on the field is imaged approximately as an ellipse. An arbitrary FIFA ball can then be recognized by detecting the ellipse it projects, and a ball-speed estimation algorithm is integrated to track the ball. To make up for the deficiencies of omnidirectional vision, a perspective camera system is added to recognize an arbitrary ball near and in front of the robot using a Sobel filter and the Hough transform. The experimental results show that the method can recognize an arbitrary FIFA ball effectively and in real time, even in cluttered environments.
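
For the perspective-camera part, edge-based (rather than color-based) ball detection can be approximated with OpenCV's circular Hough transform, whose gradient stage internally uses Sobel derivatives. The file name and parameter values below are placeholders, and the paper's actual detector is not reproduced.

```python
# Sketch of color-independent ball detection with an edge-based circular
# Hough transform (OpenCV). File name and parameters are placeholders;
# HOUGH_GRADIENT computes its own Sobel-based gradients internally.
import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
gray = cv2.medianBlur(gray, 5)                         # suppress noise edges

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                           param1=120,   # Canny high threshold (edges)
                           param2=40,    # accumulator threshold (votes)
                           minRadius=8, maxRadius=80)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"ball candidate at ({x}, {y}), radius {r}px")
```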


Journal of Intelligent and Robotic Systems | 2014

A Cross-Platform Comparison of Visual Marker Based Approaches for Autonomous Flight of Quadrocopters

Andreas Masselli; Shaowu Yang; Karl Engelbert Wenzel; Andreas Zell

In this paper, we compare three different marker-based approaches for six degrees of freedom (6DOF) pose estimation, which can be used for position and attitude control of micro aerial vehicles (MAVs). All methods are able to achieve real-time pose estimation onboard without the assistance of any external metric sensor. Since these methods can be used in various working environments, we compare their performance by carrying out experiments across two different platforms: an AscTec Hummingbird and a Pixhawk quadrocopter. We evaluate each method's accuracy by using an external tracking system and compare the methods with respect to their operating ranges and processing time. We also compare each method's performance during autonomous takeoff, hovering and landing of a quadrocopter. Finally, we show how the methods perform in an outdoor environment. This paper is an extended version of the one with the same title published at the ICUAS Conference 2013.
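
The accuracy evaluation against an external tracking system reduces to comparing time-aligned estimated and ground-truth positions. A minimal RMSE computation is sketched below with synthetic placeholder trajectories; the paper's exact error metric is not specified here.

```python
# Minimal sketch of trajectory accuracy evaluation against an external
# tracking system: RMSE of position error over time-aligned samples.
# The trajectories below are synthetic placeholders.
import numpy as np

estimated = np.array([[0.00, 0.00, 1.00],     # (x, y, z) per timestamp
                      [0.11, 0.01, 1.02],
                      [0.19, 0.02, 0.99]])
ground_truth = np.array([[0.00, 0.00, 1.00],
                         [0.10, 0.00, 1.00],
                         [0.20, 0.00, 1.00]])

errors = np.linalg.norm(estimated - ground_truth, axis=1)  # per-sample error (m)
rmse = float(np.sqrt(np.mean(errors ** 2)))
print(f"position RMSE: {rmse:.3f} m, max error: {errors.max():.3f} m")
```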


Advances in Intelligent Systems and Computing | 2016

Robust onboard visual SLAM for autonomous MAVs

Shaowu Yang; Sebastian A. Scherer; Andreas Zell

This paper presents a visual simultaneous localization and mapping (SLAM) system consisting of a robust visual odometry and an efficient back-end with loop-closure detection and pose-graph optimization (PGO). Robustness of the visual odometry is achieved by utilizing dual cameras pointing in different directions, with no overlap in their respective fields of view, mounted on a micro aerial vehicle (MAV). The theory behind this dual-camera visual odometry can easily be extended to applications with multiple cameras. The back-end of the SLAM system maintains a keyframe-based global map, which is used for loop-closure detection. An adaptive-window PGO method is proposed to refine keyframe poses of the global map and thus correct the pose drift that is inherent in the visual odometry. The position of each map point is then refined implicitly, owing to its representation relative to its source keyframe. We demonstrate the efficiency of the proposed visual SLAM algorithm for applications onboard MAVs in experiments with both autonomous and manual flights. The pose tracking results are compared with ground truth data provided by an external tracking system.
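
The implicit refinement of map points follows directly from storing each point in the coordinate frame of its source keyframe: when PGO moves the keyframe, the point's world position is recomputed from unchanged relative coordinates. The small NumPy sketch below, with synthetic poses, makes this concrete.

```python
# Sketch of the relative map-point representation: a point stored in its
# source keyframe's frame moves implicitly when PGO updates that keyframe.
# Poses here are synthetic 4x4 homogeneous transforms (world <- keyframe).
import numpy as np

def se3(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

p_kf = np.array([0.5, 0.2, 2.0, 1.0])       # point in keyframe coords (homog.)

T_before = se3(np.eye(3), np.array([1.0, 0.0, 0.0]))   # keyframe pose before PGO
c, s = np.cos(0.05), np.sin(0.05)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
T_after = se3(Rz, np.array([0.9, 0.05, 0.0]))          # refined pose after PGO

print("world position before PGO:", (T_before @ p_kf)[:3])
print("world position after  PGO:", (T_after @ p_kf)[:3])   # updated for free
```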


Robotics and Biomimetics | 2015

Visual SLAM using multiple RGB-D cameras

Shaowu Yang; Xiaodong Yi; Zhiyuan Wang; Yanzhen Wang; Xuejun Yang

In this paper, we present a solution to visual simultaneous localization and mapping (SLAM) using multiple RGB-D cameras. In the SLAM system, we integrate visual and depth measurements from these RGB-D cameras to achieve more robust pose tracking and more detailed environmental mapping in unknown environments. We present a mathematical analysis of the iterative optimizations for pose tracking and map refinement of an RGB-D SLAM system in multi-camera cases. The resulting SLAM system allows configurations of multiple RGB-D cameras with non-overlapping fields of view (FOVs). Furthermore, we provide a SLAM-based semi-automatic method for extrinsic calibration among such cameras. Finally, experiments in complex indoor scenarios demonstrate the efficiency of the proposed visual SLAM algorithm.
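
The extrinsic between two rigidly mounted cameras can be recovered from time-synchronized per-camera pose estimates expressed in one common map frame, since inv(T_w_c1) @ T_w_c2 should be constant over time. The NumPy sketch below averages these per-frame estimates (chordal mean for rotation); it is an illustrative simplification of the paper's semi-automatic procedure, not a copy of it.

```python
# Sketch of recovering the camera-to-camera extrinsic from per-frame pose
# estimates of two rigidly mounted cameras expressed in one common map
# frame (illustrative simplification of the paper's procedure).
import numpy as np

def inv_se3(T):
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = R.T, -R.T @ t
    return Ti

def mean_extrinsic(poses_cam1, poses_cam2):
    """poses_cam*: lists of 4x4 world<-camera transforms, time-synchronized."""
    rels = [inv_se3(T1) @ T2 for T1, T2 in zip(poses_cam1, poses_cam2)]
    t_mean = np.mean([T[:3, 3] for T in rels], axis=0)
    # Chordal mean of rotations: sum the matrices, project back onto SO(3).
    R_sum = np.sum([T[:3, :3] for T in rels], axis=0)
    U, _, Vt = np.linalg.svd(R_sum)
    R_mean = U @ np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R_mean, t_mean
    return T   # camera1 <- camera2 extrinsic
```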

Collaboration


Dive into Shaowu Yang's collaborations.

Top Co-Authors

Andreas Zell | University of Tübingen
Xiaodong Yi | National University of Defense Technology
Hui Zhang | National University of Defense Technology
Zhiqiang Zheng | National University of Defense Technology
Huimin Lu | National University of Defense Technology
Xuejun Yang | National University of Defense Technology
Fu Li | National University of Defense Technology
Yanzhen Wang | National University of Defense Technology
Zhiyuan Wang | National University of Defense Technology