Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaoguang Zhao is active.

Publication


Featured research published by Xiaoguang Zhao.


International Journal of Advanced Robotic Systems | 2017

Remember like humans: Visual tracking with cognitive psychological memory model

Ning An; Shiying Sun; Xiaoguang Zhao; Zeng-Guang Hou

Visual tracking is a challenging computer vision task due to significant changes in the observed appearance of the target. By contrast, the tracking task is relatively easy for humans. In this article, we propose a tracker inspired by the cognitive psychological memory mechanism, which decomposes the tracking task into a sensory memory register, a short-term memory tracker, and a long-term memory tracker, as humans do. The sensory memory register captures information with three-dimensional perception; the short-term memory tracker builds a highly plastic observation model via memory rehearsal; the long-term memory tracker builds a highly stable observation model via memory encoding and retrieval. With these cooperating models, the tracker can easily handle various tracking scenarios. In addition, an appearance-shape learning method is proposed to update the two-dimensional appearance model and three-dimensional shape model appropriately. Extensive experimental results on a large-scale benchmark data set demonstrate that the proposed method outperforms state-of-the-art two-dimensional and three-dimensional trackers in terms of efficiency, accuracy, and robustness.


International Conference on Mechatronics and Automation | 2017

Sequential learning for multimodal 3D human activity recognition with Long-Short Term Memory

Kang Li; Xiaoguang Zhao; Jiang Bian; Min Tan

The capability to recognize human activities is essential to human-robot interaction for an intelligent robot. Traditional methods generally rely on hand-crafted features, which are often not robust or accurate enough. In this paper, we present a feature self-learning mechanism for human activity recognition that uses a three-layer Long Short-Term Memory (LSTM) network to model the long-term contextual information of temporal skeleton sequences, where activities are represented by the trajectories of skeleton joints. Moreover, we apply dropout and L2 regularization to the output of the three-layer LSTM to avoid overfitting and obtain a better representation for feature modeling. Experimental results on the publicly available UTD multimodal human activity dataset demonstrate the effectiveness of the proposed recognition method.
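The two regularizers mentioned above are standard and easy to state concretely. The following NumPy sketch shows inverted dropout applied to a layer's output and an L2 penalty added to the loss; the rates and shapes are illustrative, not values from the paper:

```python
import numpy as np

def dropout(x, rate, rng, train=True):
    """Inverted dropout: zero a fraction `rate` of units, rescale the rest
    so the expected activation is unchanged at test time."""
    if not train or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def l2_penalty(weights, lam):
    """L2 regularization term added to the loss: lam * sum of squared weights."""
    return lam * sum(float(np.sum(w * w)) for w in weights)

rng = np.random.default_rng(0)
h = np.ones((4, 8))                       # e.g. the last LSTM layer's output
h_drop = dropout(h, rate=0.5, rng=rng)    # surviving units are scaled to 2.0
penalty = l2_penalty([np.full((3, 3), 2.0)], lam=0.01)   # 0.01 * 9 * 4 = 0.36
```

In training, `penalty` is added to the classification loss so that large weights are discouraged, while dropout prevents co-adaptation of units.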


International Journal of Advanced Robotic Systems | 2018

A PCA–CCA network for RGB-D object recognition

Shiying Sun; Ning An; Xiaoguang Zhao; Min Tan

Object recognition is one of the essential issues in computer vision and robotics. Recently, deep learning methods have achieved excellent performance in red-green-blue (RGB) object recognition. However, the introduction of depth information presents a new challenge: How can we exploit this RGB-D data to characterize an object more adequately? In this article, we propose a principal component analysis–canonical correlation analysis network for RGB-D object recognition. In this new method, two stages of cascaded filter layers are constructed and followed by binary hashing and block histograms. In the first layer, the network separately learns principal component analysis filters for RGB and depth. Then, in the second layer, canonical correlation analysis filters are learned jointly using the two modalities. In this way, the network accounts for the distinct characteristics of the RGB and depth modalities as well as the correlation between them. Experimental results on the most widely used RGB-D object data set show that the proposed method achieves accuracy comparable to state-of-the-art methods. Moreover, our method has a simpler structure and is efficient even without graphics processing unit acceleration.
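The first-layer step described above, learning PCA filters per modality, amounts to taking the leading eigenvectors of the covariance of mean-removed image patches. A minimal NumPy sketch of that single stage (the joint CCA stage, hashing, and histograms are omitted; patch size and filter count are illustrative):

```python
import numpy as np

def learn_pca_filters(images, patch=5, n_filters=4):
    """Learn convolutional filters as the leading principal components of
    mean-removed patches, as in a PCANet-style first stage."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())          # remove the patch mean
    X = np.stack(patches)                             # (num_patches, patch*patch)
    # Eigenvectors of the patch covariance, largest eigenvalues first
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_filters]
    return vecs[:, order].T.reshape(n_filters, patch, patch)

rng = np.random.default_rng(1)
imgs = [rng.random((12, 12)) for _ in range(3)]       # stand-in grayscale images
filters = learn_pca_filters(imgs)                     # (4, 5, 5) filter bank
```

The learned filters are orthonormal by construction; each one is then convolved with the input to produce a response map for the next stage.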


International Conference on Mechatronics and Automation | 2017

RGB-D object recognition based on RGBD-PCANet learning

Shiying Sun; Xiaoguang Zhao; Ning An; Min Tan

In this paper, a simple deep learning method named RGBD-PCANet is proposed for effective object recognition. The proposed method extends the original PCANet to RGB-D images. First, the RGB and depth images are preprocessed to meet the requirements of the network's input layer. Second, features of the RGB-D images are extracted by the two-stage RGBD-PCANet, which consists of cascaded PCA, binary hashing, and block-wise histograms. Finally, an SVM is used as the classifier. We evaluate the proposed method on the popular Washington RGB-D Object dataset. Extensive experiments demonstrate that the proposed RGBD-PCANet achieves performance comparable to state-of-the-art CNN-based methods, with low runtimes even without GPU acceleration.
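The binary hashing and block-wise histogram steps named above can be made concrete: filter responses are binarized, packed into one integer code per pixel, and histogrammed over local blocks. A small NumPy sketch with illustrative sizes (not the paper's configuration):

```python
import numpy as np

def binary_hash(responses):
    """Binarize filter responses (positive -> 1) and pack the bits into one
    integer code per pixel, as in PCANet-style hashing."""
    bits = (np.stack(responses) > 0).astype(np.int64)      # (n_maps, H, W)
    weights = 2 ** np.arange(len(responses))[:, None, None]
    return (bits * weights).sum(axis=0)                    # codes in [0, 2^n)

def block_histograms(codes, n_bins, block=4):
    """Histogram the hash codes over non-overlapping blocks and concatenate
    the block histograms into one feature vector."""
    H, W = codes.shape
    feats = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            blk = codes[i:i + block, j:j + block]
            feats.append(np.bincount(blk.ravel(), minlength=n_bins))
    return np.concatenate(feats)

rng = np.random.default_rng(2)
maps = [rng.standard_normal((8, 8)) for _ in range(3)]     # 3 filter responses
codes = binary_hash(maps)                                  # codes in [0, 8)
feat = block_histograms(codes, n_bins=8)                   # 4 blocks x 8 bins
```

The resulting histogram vector is what would be fed to the SVM classifier.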


International Conference on Pattern Recognition | 2016

Online RGB-D tracking via detection-learning-segmentation

Ning An; Xiaoguang Zhao; Zeng-Guang Hou

In this paper, we address the problem of online RGB-D tracking where the target object undergoes significant appearance changes. To sufficiently exploit the color and depth cues, we propose a novel RGB-D tracking framework (DLS) that simultaneously builds the target's 2D appearance model and 3D distribution model. The framework decomposes the tracking task into detection, learning, and segmentation. The detection and segmentation components locate the target collaboratively by using the two target models. An adaptive depth histogram is proposed in the segmentation component to efficiently locate the target in depth frames. The learning component estimates the detection and segmentation errors and updates the target models from the most confident frames by identifying two kinds of distractors: potential failures and occlusions. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favourably against state-of-the-art RGB-D trackers in terms of efficiency, accuracy, and robustness.
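The depth-histogram idea above can be illustrated simply: histogram the depths inside the target box, take the dominant mode, and keep the pixels in that depth layer. This NumPy sketch uses a synthetic depth box and a fixed bin count; the paper's adaptive binning and confidence logic are not reproduced:

```python
import numpy as np

def dominant_depth(depth_roi, n_bins=32):
    """Histogram the depths inside the target box and return the range of
    the dominant mode (the depth layer assumed to contain the target)."""
    d = depth_roi[np.isfinite(depth_roi) & (depth_roi > 0)]   # drop invalid pixels
    hist, edges = np.histogram(d, bins=n_bins)
    k = int(np.argmax(hist))
    return edges[k], edges[k + 1]

def target_mask(depth_roi, lo, hi):
    """Segment the box: keep pixels whose depth falls in the dominant bin."""
    return (depth_roi >= lo) & (depth_roi < hi)

# Synthetic box: a target plane at ~1.0 m filling most of the box,
# in front of background at ~3.0 m
rng = np.random.default_rng(3)
roi = np.full((20, 20), 3.0) + rng.normal(0, 0.01, (20, 20))
roi[2:18, 2:18] = 1.0 + rng.normal(0, 0.01, (16, 16))
lo, hi = dominant_depth(roi)           # bin around the 1.0 m target layer
mask = target_mask(roi, lo, hi)        # True on the target, False on background
```

Because histogramming is linear in the number of pixels, this segmentation step is cheap enough to run every frame.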


International Journal of Automation and Computing | 2004

Seam Tracking and Visual Control for Robotic Arc Welding Based on Structured Light Stereovision

De Xu; Min Tan; Xiaoguang Zhao; Zhiguo Tu


International Journal of Automation and Computing | 2017

PLS-CCA heterogeneous features fusion-based low-resolution human detection method for outdoor video surveillance

Hongkai Chen; Xiaoguang Zhao; Shiying Sun; Min Tan


Archive | 2007

Four-quadrant aligning device of mask transmission system

Min Tan; De Xu; Xiuqing Wang; Xiaoguang Zhao; Yun Liu; Jianhua Wang


IEEE International Conference on Advanced Computational Intelligence | 2018

Deep-learning-based autonomous navigation approach for UAV transmission line inspection

Xiaoguang Zhao; Min Tan; Xiaolong Hui; Jiang Bian


IEEE International Conference on Advanced Computational Intelligence | 2018

Design and implementation of an automatic peach-harvesting robot system

Yongjia Yu; Zengpeng Sun; Xiaoguang Zhao; Jiang Bian; Xiaolong Hui

Collaboration


Dive into Xiaoguang Zhao's collaborations.

Top Co-Authors

Min Tan, Chinese Academy of Sciences
Ning An, Chinese Academy of Sciences
Jiang Bian, Chinese Academy of Sciences
Shiying Sun, Chinese Academy of Sciences
Xiaolong Hui, Chinese Academy of Sciences
Zeng-Guang Hou, Chinese Academy of Sciences
De Xu, Chinese Academy of Sciences
Yongjia Yu, Chinese Academy of Sciences
Hongkai Chen, Chinese Academy of Sciences
Jianhua Wang, Chinese Academy of Sciences