Junhao Xiao
National University of Defense Technology
Publication
Featured research published by Junhao Xiao.
Robot Soccer World Cup | 2009
Huimin Lu; Hui Zhang; Junhao Xiao; Fei Liu; Zhiqiang Zheng
Recognizing an arbitrary standard FIFA ball is a significant ability for soccer robots, allowing them to play without the constraints of the current color-coded environment. This paper describes a novel method to recognize an arbitrary FIFA ball based on an omnidirectional vision system. First, the omnidirectional vision system and its distance-map calibration are introduced, and by analyzing the imaging characteristics it is derived that a ball on the field is imaged approximately as an ellipse. The arbitrary FIFA ball is then detected by an image processing algorithm that searches for the ellipse imaged by the ball according to this derivation. In practice, a simple but effective ball tracking algorithm is also used to reduce the running time needed once the ball has been recognized globally. Experimental results show that the method can recognize an arbitrary FIFA ball effectively and in real time.
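The core of the approach, searching an edge image for the ellipse the ball projects to, can be sketched as a coverage score over candidate ellipses. This is a minimal illustration only, not the paper's actual algorithm; edge extraction is assumed to be done already, and the function names and the axis-aligned parameterization are ours:

```python
import math

def ellipse_coverage(edge_points, cx, cy, a, b, tol=1, samples=72):
    """Score a candidate ellipse (center cx, cy; semi-axes a, b) against
    detected edge pixels: the fraction of sampled perimeter points that
    lie within `tol` pixels of some edge pixel."""
    edges = set((round(x), round(y)) for x, y in edge_points)
    hits = 0
    for k in range(samples):
        t = 2.0 * math.pi * k / samples
        px, py = cx + a * math.cos(t), cy + b * math.sin(t)
        # accept this perimeter sample if an edge pixel falls in the window
        if any((round(px) + dx, round(py) + dy) in edges
               for dx in range(-tol, tol + 1)
               for dy in range(-tol, tol + 1)):
            hits += 1
    return hits / samples

def detect_ball(edge_points, candidates, threshold=0.8):
    """Return the best-scoring candidate ellipse, or None if even the
    best one covers too little of the edge evidence."""
    best = max(candidates, key=lambda c: ellipse_coverage(edge_points, *c))
    return best if ellipse_coverage(edge_points, *best) >= threshold else None
```

In practice the candidate set would come from the derived image-to-field geometry rather than brute-force enumeration.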
Industrial Robot: An International Journal | 2016
Dan Xiong; Junhao Xiao; Huimin Lu; Zhiwen Zeng; Qinghua Yu; Kaihong Huang; Xiaodong Yi; Zhiqiang Zheng
Purpose – The purpose of this paper is to design intelligent robots operating in dynamic environments such as the RoboCup Middle-Size League (MSL). In the RoboCup MSL, two teams of five autonomous robots play on an 18 m × 12 m field. Equipped with sensors and on-board computers, each robot should be able to perceive the environment, make decisions and control itself to play the soccer game autonomously. Design/methodology/approach – This paper presents the design of our soccer robots participating in RoboCup MSL. The mechanical platform, electrical architecture and software framework are discussed separately. The mechanical platform is designed modularly, so easy maintainability is achieved; the electronic architecture is built on industrial standards using a PC-based control technique, which results in high robustness and reliability during the intensive and fierce MSL games; the software is developed upon the open-source Robot Operating System (ROS); thus, the advantages of ROS such as modularity, portab...
Robotics and Biomimetics | 2015
Weijia Yao; Wei Dai; Junhao Xiao; Huimin Lu; Zhiqiang Zheng
Originating from the Robot World Cup Middle Size League (RoboCup MSL), this paper discusses the design and implementation of a robot soccer simulation system based on ROS and Gazebo. Its aim is to test multi-robot collaboration algorithms. After building the Gazebo models, a model plugin is written to realize the robots' basic motions, including omnidirectional locomotion, ball dribbling and ball kicking. To integrate the model plugin with the real robot code, ROS nodes related to low-level controllers are replaced with the model plugin. In addition, nodes related to behavior control and global information processing are adapted into corresponding model plugins as well. Finally, multiple robot models are spawned into a simulation world and collaborate with each other. We have already used the simulation system to successfully test and debug the multi-robot collaboration algorithms for MSL. Furthermore, after minor modifications, it can also be applied to research on other multi-robot systems. The simulation system is therefore sufficient for simulating multi-robot collaboration.
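The omnidirectional locomotion realized by the model plugin ultimately maps a desired body velocity to individual wheel speeds. A minimal sketch of the standard kinematic model follows, assuming for illustration a symmetric multi-wheel base; the function name and wheel layout are ours, not the paper's:

```python
import math

def omni_wheel_speeds(vx, vy, omega, wheel_angles, radius):
    """Map a desired body velocity (vx, vy) in m/s and rotation rate
    omega in rad/s to wheel rim speeds for an omnidirectional base
    whose wheels sit at `wheel_angles` (rad) on a circle of `radius` (m).
    Standard model: s_i = -sin(a_i)*vx + cos(a_i)*vy + radius*omega."""
    return [-math.sin(a) * vx + math.cos(a) * vy + radius * omega
            for a in wheel_angles]
```

A Gazebo model plugin would apply these speeds to the wheel joints each simulation step; the same function can drive either the simulated or, in principle, the real low-level controller.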
Computer Vision and Image Understanding | 2017
Qinghua Yu; Jie Liang; Junhao Xiao; Huimin Lu; Zhiqiang Zheng
RGB-D cameras have been attracting increasing research on solving traditional problems in computer vision and robotics. Among existing local features, most are proposed for the color channel or the depth channel separately, while little attention has been paid to designing new composite features based on physical characteristics. In this work, we propose a novel perspective invariant feature transform (PIFT) for RGB-D images. We integrate the color and depth information, making full use of the intrinsic characteristics of the two types of information to enhance robustness and adaptability to large spatial variations of local appearance. The depth information is used to project the feature patch onto its tangent plane to make it consistent across different views. It also helps to filter out the "fake keypoints" which are unstable in 3D space. Binary descriptors are then generated in the feature patches using a color coding method. Experiments on publicly available RGB-D datasets show that the proposed method achieves the best precision and the second-best recall rate compared against state-of-the-art local features when applied to feature matching with large spatial variations.
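The binary-descriptor-plus-Hamming-matching pipeline that PIFT builds on can be illustrated with a BRIEF-style sketch. This is a generic illustration of the descriptor idea, not the paper's color-coding scheme; all names and the pair-comparison test are assumptions:

```python
def binary_descriptor(patch, pairs):
    """Build a binary descriptor from an intensity patch (2D list) by
    comparing intensities at pre-sampled pixel pairs, BRIEF-style.
    Returns an int whose bit i encodes the outcome of pair i."""
    desc = 0
    for i, ((r1, c1), (r2, c2)) in enumerate(pairs):
        if patch[r1][c1] < patch[r2][c2]:
            desc |= 1 << i
    return desc

def hamming(d1, d2):
    """Number of differing bits between two descriptors."""
    return bin(d1 ^ d2).count("1")

def match(desc, candidates, max_dist):
    """Index of the nearest candidate by Hamming distance,
    or None if even the nearest exceeds max_dist."""
    best = min(range(len(candidates)),
               key=lambda i: hamming(desc, candidates[i]))
    return best if hamming(desc, candidates[best]) <= max_dist else None
```

In PIFT the patch would first be reprojected onto its tangent plane using depth, so that the same pair layout sees a view-consistent patch.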
Robot Soccer World Cup | 2014
Huimin Lu; Qinghua Yu; Dan Xiong; Junhao Xiao; Zhiqiang Zheng
Effective object motion estimation is significant for improving the performance of soccer robots in the RoboCup Middle Size League. In this paper, a hybrid vision system is constructed by combining omnidirectional vision and stereo vision for ball recognition and motion estimation in three-dimensional (3D) space. When the ball is located on the ground field, a novel algorithm based on RANSAC and Kalman filtering is proposed to estimate the ball velocity using the omnidirectional vision. When the ball is kicked up into the air, an iterative, coarse-to-fine method is proposed to fit the moving trace of the ball with a parabolic curve and predict the touchdown point in 3D space using the stereo vision. Experimental results show that the robot can effectively estimate ball motion in 3D space using the hybrid vision system and the proposed algorithms; furthermore, the advantages of the 360° field of view of the omnidirectional vision and the high object localization accuracy of the stereo vision in 3D space can be combined.
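The inner step of the airborne case, fitting the ball's height trace with a parabola and solving for touchdown, can be sketched with a plain least-squares fit. This is a simplified illustration (no iteration, no coarse-to-fine refinement, and the 1D height-vs-time formulation is an assumption), not the paper's full 3D procedure:

```python
def fit_parabola(ts, zs):
    """Least-squares fit of z(t) = a*t^2 + b*t + c via the 3x3 normal
    equations, solved with Gaussian elimination (pure Python)."""
    n = len(ts)
    S = lambda p: sum(t ** p for t in ts)            # sums of t^p
    Sz = lambda p: sum(z * t ** p for t, z in zip(ts, zs))
    M = [[S(4), S(3), S(2)], [S(3), S(2), S(1)], [S(2), S(1), n]]
    v = [Sz(2), Sz(1), Sz(0)]
    for i in range(3):                               # forward elimination
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
            v[j] -= f * v[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                              # back substitution
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x  # a, b, c

def touchdown_time(a, b, c):
    """Later root of a*t^2 + b*t + c = 0, i.e. when the ball returns to z=0."""
    disc = (b * b - 4 * a * c) ** 0.5
    return max((-b + disc) / (2 * a), (-b - disc) / (2 * a))
```

With stereo-triangulated (t, z) samples, the fitted coefficients give the touchdown time, and the ball's horizontal motion model gives the touchdown point.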
International Conference on Computer Vision Systems | 2017
Xieyuanli Chen; Huimin Lu; Junhao Xiao; Hui Zhang; Pan Wang
Remarkable performance has been achieved using state-of-the-art monocular Simultaneous Localization and Mapping (SLAM) algorithms. However, tracking failure is still a challenging problem during the monocular SLAM process, and it seems almost inevitable when carrying out long-term SLAM in large-scale environments. In this paper, we propose an active loop closure based relocalization system, which enables monocular SLAM to detect and recover from tracking failures automatically, even in previously unvisited areas where no keyframe exists. We test our system by extensive experiments, including the popular KITTI dataset and our own dataset acquired by a hand-held camera in outdoor large-scale and indoor small-scale real-world environments, where deliberate shakes and interruptions were introduced. The experimental results show that the shortest recovery time (within 5 ms) and the longest success distance (up to 46 m) were achieved compared to other relocalization systems. Furthermore, our system is more robust than others, as it can be used in different kinds of situations, i.e., tracking failures caused by blur, sudden motion and occlusion. Besides robots and autonomous vehicles, our system can also be employed in other applications, such as mobile phones and drones.
Studies in Computational Intelligence | 2017
Junhao Xiao; Dan Xiong; Weijia Yao; Qinghua Yu; Huimin Lu; Zhiqiang Zheng
This chapter presents the lessons learned while constructing the software system and simulation environment for our RoboCup Middle Size League (MSL) robots. The software is built on ROS, so the advantages of ROS such as modularity, portability and expansibility are inherited. The tools provided by ROS, such as RVIZ, rosbag and rqt_graph, just to name a few, improve the efficiency of development. Furthermore, the standard communication mechanisms (topic and service) and software organization method (package and meta-package) provide an opportunity for sharing code within the RoboCup MSL community, which is fundamental to forming hybrid teams. As is well known, evaluating new multi-robot collaboration algorithms on real robots is expensive; this can instead be done in a proper simulation environment. Ideally, the ROS-based software should also be usable to control the simulated robots. As a result, the open-source simulator Gazebo is selected, since it offers a convenient interface with ROS. A Gazebo-based simulation environment is thus constructed to visualize the robots and simulate their motions. The simulation has also been used to evaluate new multi-robot collaboration algorithms for our NuBot RoboCup MSL robot team.
International Journal of Advanced Robotic Systems | 2018
Hui Zhang; Xieyuanli Chen; Huimin Lu; Junhao Xiao
In this article, we propose a distributed and collaborative monocular simultaneous localization and mapping system for multi-robot systems in large-scale environments, where monocular vision is the only exteroceptive sensor. Each robot estimates its pose and reconstructs the environment simultaneously using the same monocular simultaneous localization and mapping algorithm. Meanwhile, the robots share their incremental mapping results by streaming keyframes through robot operating system messages and the wireless network. Subsequently, each robot in the group can obtain the global map with high efficiency. To build the collaborative simultaneous localization and mapping architecture, two novel approaches are proposed. One is a robust relocalization method based on active loop closure, and the other is a vision-based method for multi-robot relative pose estimation and map merging. The former solves the problem of tracking failures when robots carry out long-term monocular simultaneous localization and mapping in large-scale environments, while the latter uses appearance-based place recognition to determine multi-robot relative poses and builds the large-scale global map by merging each robot's local map. Both the KITTI dataset and our own dataset acquired by a handheld camera are used to evaluate the proposed system. Experimental results show that the proposed distributed multi-robot collaborative monocular simultaneous localization and mapping system can be used in both indoor small-scale and outdoor large-scale environments.
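Once the relative pose between two robots is known, map merging reduces to a rigid-body transform plus concatenation. A minimal 2D landmark sketch follows; the real system merges keyframe-based monocular maps, and the function name, SE(2) simplification and de-duplication rule here are all illustrative assumptions:

```python
import math

def merge_maps(map_a, map_b, rel_pose):
    """Merge two 2D landmark maps given robot B's pose (x, y, theta)
    expressed in robot A's frame: transform B's landmarks into A's
    frame, then concatenate, skipping near-duplicate points."""
    x, y, th = rel_pose
    c, s = math.cos(th), math.sin(th)
    merged = list(map_a)
    for px, py in map_b:
        q = (x + c * px - s * py, y + s * px + c * py)  # B frame -> A frame
        if all(abs(q[0] - mx) > 1e-6 or abs(q[1] - my) > 1e-6
               for mx, my in merged):
            merged.append(q)
    return merged
```

In the paper, the relative pose itself comes from appearance-based place recognition between the robots' keyframes rather than being given a priori.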
International Symposium on Safety, Security, and Rescue Robotics | 2017
Xieyuanli Chen; Hui Zhang; Huimin Lu; Junhao Xiao; Qihang Qiu; Yi Li
In this paper, we propose a monocular SLAM system for robotic urban search and rescue (USAR), based on which most USAR tasks (e.g. localization, mapping, exploration and object recognition) can be fulfilled by rescue robots with only a single camera. The proposed system can be a promising basis for implementing fully autonomous rescue robots. However, the feature-based map built by the monocular SLAM is difficult for the operator to understand and use. We therefore combine the monocular SLAM with a 2D LIDAR SLAM to realize a 2D mapping and 6D localization SLAM system, which can not only obtain the real scale of the environment and make the map more user-friendly, but also solve the problem that the robot pose cannot be tracked by the 2D LIDAR SLAM when the robot climbs stairs and ramps. We test our system using a real rescue robot in simulated disaster environments. The experimental results show that good performance can be achieved using the proposed system in USAR. The system has also been successfully applied and tested in RoboCup Rescue Robot League (RRL) competitions, where our rescue robot team entered the top 5 and won the Best in Class Small Robot Mobility award at RoboCup RRL 2016 in Leipzig, Germany, as well as the championships of the 2016 and 2017 RoboCup China Open RRL competitions.
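One way to picture the 2D/6D combination is a pose-source switch: trust the 2D LIDAR pose on flat ground, and fall back to the projected 6D monocular pose when the robot tilts on stairs or ramps. This toy decision rule is purely illustrative; the threshold and switching criterion are our assumptions, not the paper's fusion method:

```python
def select_pose(lidar_pose_2d, mono_pose_6d, tilt_limit=0.15):
    """Pick a pose source for the 2D map. `lidar_pose_2d` is (x, y, yaw)
    from the 2D LIDAR SLAM; `mono_pose_6d` is (x, y, z, roll, pitch, yaw)
    from monocular SLAM. Beyond `tilt_limit` rad of roll or pitch the
    planar LIDAR assumption breaks, so use the projected 6D pose."""
    x, y, z, roll, pitch, yaw = mono_pose_6d
    if abs(pitch) > tilt_limit or abs(roll) > tilt_limit:
        return (x, y, yaw)  # monocular pose projected onto the ground plane
    return lidar_pose_2d
```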
International Journal of Advanced Robotic Systems | 2017
Junhao Xiao; Huimin Lu; Lilian Zhang; Jianhua Zhang
This article reports our research results on an autonomous forklift, with the focus on pallet recognition and localization using an RGB-D camera. This is a fundamental issue for unmanned storehouses, as it enables the forklift to insert its forks into the pallet's slots for loading and unloading packages. In particular, a pallet recognition and localization approach is presented. The range image is first segmented into planar patches based on a region growing algorithm. Then, the segments are filtered heuristically according to the storehouse environment. Afterward, a template matching method is utilized to recognize pallets in the remaining segments, based on calculating the degree of similarity at each location while sliding the templates over the segment. Once a pallet has been recognized, its pose is calculated straightforwardly. The article has three main contributions: a low-cost RGB-D camera is employed for pallet recognition and localization, where only depth information is utilized; using the proposed method, multiple kinds of pallets can be used at the same time, which provides flexibility for the storehouse; and the method is easily extensible, allowing the storehouse to adopt new pallets easily.
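The region growing step can be illustrated with a toy flood-fill over a depth grid. This is a simplified sketch with an assumed depth-difference criterion and threshold, whereas the paper grows planar patches; the function name is ours:

```python
def region_grow(depth, seed, max_step=0.02):
    """Segment a smooth patch in a depth image (2D list of metres) by
    flood-filling from `seed`, admitting 4-neighbours whose depth
    differs from the current pixel by at most `max_step` metres."""
    rows, cols = len(depth), len(depth[0])
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region:
            continue
        region.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(depth[nr][nc] - depth[r][c]) <= max_step):
                stack.append((nr, nc))
    return region
```

The resulting segments would then be filtered by size and orientation heuristics before the sliding-template similarity check.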