
Publication


Featured research published by Yulong Xu.


IEEE Signal Processing Letters | 2016

Robust Scale Adaptive Kernel Correlation Filter Tracker With Hierarchical Convolutional Features

Yang Li; Yafei Zhang; Yulong Xu; Jiabao Wang; Zhuang Miao

Visual object tracking is challenging due to appearance changes caused by shape deformation, heavy occlusion, background clutter, illumination variation, and camera motion. In this letter, we propose a novel robust algorithm that decomposes tracking into translation and scale estimation. We estimate the translation with five correlation filters over hierarchical convolutional features, whose multilevel correlation response maps collaboratively infer the target location. In parallel, we estimate the scale variation with another correlation filter over histogram-of-oriented-gradients features. Extensive experiments on a large-scale benchmark of 50 challenging sequences show that the proposed algorithm outperforms state-of-the-art methods.
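
As a minimal sketch of the correlation-filter idea behind the translation step (not the authors' five-filter hierarchical implementation; all sizes and names here are illustrative), the peak of an FFT-based correlation response recovers the target's shift:

```python
import numpy as np

def locate_shift(template, search):
    # Cross-correlate via FFT; the peak of the response map gives the
    # cyclic shift of `search` relative to `template`.
    response = np.real(np.fft.ifft2(np.conj(np.fft.fft2(template)) * np.fft.fft2(search)))
    peak = np.unravel_index(np.argmax(response), response.shape)
    return tuple(int(v) for v in peak)

template = np.zeros((32, 32))
template[10, 12] = 1.0                           # a toy "target"
search = np.roll(template, (3, 5), axis=(0, 1))  # target moved by (3, 5)
print(locate_shift(template, search))            # → (3, 5)
```

In the letter, several such response maps from different convolutional layers are combined before the peak is taken.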


IEEE Signal Processing Letters | 2017

MS-RMAC: Multiscale Regional Maximum Activation of Convolutions for Image Retrieval

Yang Li; Yulong Xu; Jiabao Wang; Zhuang Miao; Yafei Zhang

Recent work has demonstrated that image descriptors built from convolutional feature maps provide state-of-the-art performance on image retrieval and classification problems. However, features from a single convolutional layer are not robust to shape deformation, scale variation, and heavy occlusion. In this letter, we present a simple, straightforward approach for extracting multiscale (MS) regional maximum activation of convolutions features from different layers of a convolutional neural network, and we propose aggregating the MS features into a single vector with a parameter-free hedge method for image retrieval. Extensive experiments on three challenging benchmark datasets show that the proposed method outperforms state-of-the-art methods.
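
A rough sketch of regional maximum-activation pooling and multiscale aggregation (the parameter-free hedge weighting from the letter is simplified to a plain sum here; region sizes and names are illustrative):

```python
import numpy as np

def rmac_regions(fmap, region=8, stride=4):
    # Regional maximum activation: max-pool each spatial region of a
    # C x H x W feature map into one C-dim vector, then L2-normalize.
    C, H, W = fmap.shape
    vecs = []
    for y in range(0, H - region + 1, stride):
        for x in range(0, W - region + 1, stride):
            v = fmap[:, y:y + region, x:x + region].max(axis=(1, 2))
            vecs.append(v / (np.linalg.norm(v) + 1e-12))
    return np.stack(vecs)

def ms_descriptor(fmaps):
    # Aggregate regional vectors from several layers/scales into one
    # global descriptor (plain sum here; the letter uses a
    # parameter-free hedge weighting instead).
    agg = sum(rmac_regions(f).sum(axis=0) for f in fmaps)
    return agg / np.linalg.norm(agg)

layers = [np.random.rand(64, 16, 16), np.random.rand(64, 12, 12)]
d = ms_descriptor(layers)   # one 64-dim unit-norm descriptor
```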


IEEE Signal Processing Letters | 2016

Patch-based Scale Calculation for Real-time Visual Tracking

Yulong Xu; Jiabao Wang; Hang Li; Yang Li; Zhuang Miao; Yafei Zhang

Robust scale calculation is a challenging problem in visual tracking, and most existing trackers fail to handle large scale variations in complex videos. To address this, we propose a robust and efficient scale calculation method in a tracking-by-detection framework: it divides the target into four patches and computes the scale factor by finding the maximum response position of each patch via a kernelized correlation filter with color-attribute features. In particular, we employ weighting coefficients to remove abnormal matching points, and we transform the desired training output of the conventional classifier to resolve the location-ambiguity problem. Experiments on several challenging color sequences with scale variations from a recent benchmark evaluation show that our method outperforms state-of-the-art trackers while operating in real time.
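
The geometric core of patch-based scale estimation can be sketched as follows: once each patch's peak position is known, the scale factor follows from how the inter-patch distances change. The median here stands in for the paper's weighting-coefficient outlier handling and is an assumption, not the authors' exact formula:

```python
import numpy as np

def scale_from_patches(prev_centers, new_centers):
    # Estimate the scale factor from the change in pairwise distances
    # between patch centers; the median discards abnormal matches.
    prev = np.asarray(prev_centers, dtype=float)
    new = np.asarray(new_centers, dtype=float)
    ratios = []
    for i in range(len(prev)):
        for j in range(i + 1, len(prev)):
            d0 = np.linalg.norm(prev[i] - prev[j])
            if d0 > 0:
                ratios.append(np.linalg.norm(new[i] - new[j]) / d0)
    return float(np.median(ratios))

# Four patch centers before and after the target grows by 1.5x.
prev = [(0, 0), (0, 10), (10, 0), (10, 10)]
new = [(0, 0), (0, 15), (15, 0), (15, 15)]
print(scale_from_patches(prev, new))   # → 1.5
```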


Intelligent Data Engineering and Automated Learning | 2016

Very Deep Neural Network for Handwritten Digit Recognition

Yang Li; Hang Li; Yulong Xu; Jiabao Wang; Yafei Zhang

Handwritten digit recognition is an important but challenging task, and building an efficient artificial neural network architecture that matches human performance on it remains difficult. In this paper, we propose a new very deep neural network architecture for handwritten digit recognition. Notably, we do not depart from the classical convolutional neural network architecture, but push it to the limit by substantially increasing its depth. Through careful design, we propose two different basic building blocks and increase the depth of the network while keeping the computational budget constant. On the very competitive MNIST handwriting benchmark, our method achieves the best error rate ever reported on the original dataset (\(0.47\,\% \pm 0.05\,\%\)), without data distortion or model combination, demonstrating the superiority of our approach.
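
One way to see how depth can grow at roughly constant budget (a generic VGG-style argument; the paper's actual building blocks are not spelled out in this abstract) is to compare parameter counts of one large convolution against a deeper stack of small ones:

```python
def conv_params(c_in, c_out, k):
    # Parameter count of one k x k convolution layer (bias ignored).
    return c_in * c_out * k * k

# A single 5x5 convolution vs a deeper stack of two 3x3 convolutions
# with the same channel width: twice the depth, fewer parameters.
shallow = conv_params(64, 64, 5)    # 102400 parameters
deep = 2 * conv_params(64, 64, 3)   # 73728 parameters
print(deep < shallow)               # → True
```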


International Conference on Wireless Communications and Signal Processing | 2016

Deep feature hash codes framework for content-based image retrieval

Yang Li; Yulong Xu; Zhuang Miao; Hang Li; Jiabao Wang; Yafei Zhang

For large-scale image retrieval, high-dimensional features make the retrieval system inefficient. In this paper, we propose a deep-feature hash-code framework for content-based image retrieval. In this framework, we first extract image features with a pre-trained convolutional neural network, then apply different hashing methods to obtain binary features, and finally use the best binary encoding to build a content-based image retrieval system. The experimental results demonstrate that reducing the feature dimension not only preserves retrieval precision but in some cases improves it: 256-bit binary features can surpass the traditional 256-dimensional (4096-bit) features. With 16 times fewer feature bits, storage shrinks 16-fold and retrieval efficiency increases greatly. Our method therefore improves both the speed and the precision of content-based image retrieval.
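
A toy sketch of the binary-encoding and Hamming-ranking stages (a simple mean-threshold hash stands in for the hashing methods compared in the paper; all names and data are illustrative):

```python
import numpy as np

def binarize(features):
    # Toy hash: bit is 1 where a feature exceeds its mean over the dataset.
    return (features > features.mean(axis=0)).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    # Rank database items by Hamming distance to the query code.
    return np.argsort((query_code ^ db_codes).sum(axis=1), kind="stable")

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # toy features
codes = binarize(X)
print(hamming_rank(codes[3], codes))   # item 3 itself ranks first
```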


International Conference on Image Vision and Computing | 2017

Euclidean output layer for discriminative feature extraction

Jiabao Wang; Yang Li; Zhuang Miao; Yulong Xu; Gang Tao

Recently, visual features extracted by convolutional neural networks (CNNs) have been widely used in computer vision. Most state-of-the-art CNNs adopt a convolutional layer to map high-dimensional features to the number of output classes, but this is not well suited to feature-similarity comparison. In this paper we therefore propose a new layer, the Euclidean output layer, for extracting discriminative features. Furthermore, we use the joint supervision of the center loss and the softmax loss to construct a discriminative feature-learning network. Experiments show that our network compacts the distribution of the learned features and achieves better performance on open-set identification problems.
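
The Euclidean output layer can be sketched as replacing inner-product logits with negative squared distances to per-class centers (a plausible reading of the abstract; the exact formulation is in the paper, and the centers and features below are made up):

```python
import numpy as np

def euclidean_logits(features, centers):
    # Use the negative squared Euclidean distance to each class center
    # as the logit, in place of an inner-product output layer.
    diff = features[:, None, :] - centers[None, :, :]   # (N, K, D)
    return -np.sum(diff ** 2, axis=2)                   # (N, K)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

centers = np.array([[0., 0.], [5., 5.]])        # two class centers
feats = np.array([[4.5, 5.2], [0.1, -0.2]])
probs = softmax(euclidean_logits(feats, centers))
print(probs.argmax(axis=1))   # → [1 0]
```

The nearest center wins, so feature-space proximity and classification agree by construction.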


International Conference on Big Data | 2017

DeepSAR-Net: Deep convolutional neural networks for SAR target recognition

Yang Li; Jiabao Wang; Yulong Xu; Hang Li; Zhuang Miao; Yafei Zhang

In this paper, we propose a new deep artificial neural network architecture for synthetic aperture radar (SAR) target recognition, named DeepSAR-Net. Unlike most existing methods, it learns discriminative features directly from the training data rather than requiring pre-specification or pre-selection by a human designer. Furthermore, our method is adaptable: it can learn to recognize new targets without requiring large amounts of time or data. Experimental results on the MSTAR dataset show that DeepSAR-Net yields a dramatic improvement in SAR automatic target recognition compared with state-of-the-art techniques.


International Conference on Artificial Intelligence | 2017

Does ResNet Learn Good General Purpose Features?

Yang Li; Yafei Zhang; Yulong Xu; Zhuang Miao; Hang Li

ResNet, with hundreds or even thousands of layers, has become the most successful image recognition model in the computer vision community. However, it is unclear whether this deeper architecture transfers better than traditional models. In this paper, we systematically investigate the well-known ResNet features on different computer vision tasks without fine-tuning. The experimental results show that ResNet does not learn good general-purpose features for image retrieval and visual object tracking: GoogLeNet high-level features and VGGNet middle-level features achieve better transfer performance. Choosing a ResNet model for other computer vision tasks without fine-tuning is therefore not a good idea.


International Conference on Wireless Communications and Signal Processing | 2016

Dual Channel Gradient feature for person re-identification

Jiabao Wang; Yang Li; Hang Li; Yulong Xu; Zhuang Miao; Gengning Zhang

Recently, several effective features have been proposed for person re-identification, such as Weighted Histograms of Overlapping Stripes (WHOS) and Local Maximal Occurrence (LOMO), but new effective features are still needed to improve precision. In this paper, we propose a new Dual Channel Gradient feature, which can be fused with WHOS and LOMO by directly concatenating the normalized histograms. Experimental results show that the fused feature achieves state-of-the-art accuracy on four public datasets with two different metric-learning methods.
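
The fusion step described above, concatenating normalized histograms, can be sketched generically (vector contents and sizes are illustrative, not the actual WHOS/LOMO dimensions):

```python
import numpy as np

def fuse(*features):
    # L2-normalize each feature vector, then concatenate them, so no
    # single feature dominates the fused descriptor by sheer magnitude.
    return np.concatenate([f / (np.linalg.norm(f) + 1e-12) for f in features])

whos_like = np.array([3., 4.])        # stand-in for one histogram
lomo_like = np.array([1., 0., 0.])    # stand-in for another
fused = fuse(whos_like, lomo_like)    # 5-dim fused descriptor
```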


International Conference on Signal Processing | 2016

Scale-adaptive visual tracking with occlusion detection

Yulong Xu; Jiabao Wang; Yang Li; Zhuang Miao; Ming He; Yafei Zhang

Occlusion is a challenging problem in visual object tracking: most state-of-the-art trackers may learn the appearance of the occluder when the target becomes occluded by other objects in the scene. This paper proposes a novel approach that detects occlusion by dividing the target into several patches and computing the peak-to-sidelobe ratio of each patch's response map. Furthermore, our method calculates the scale factor by finding the maximum response position of each patch. Experiments on several challenging benchmark sequences show that our approach outperforms state-of-the-art tracking methods while operating in real time.
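
The peak-to-sidelobe ratio (PSR) used for occlusion detection can be sketched as follows (the sidelobe exclusion window size is an illustrative choice, not taken from the paper):

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    # PSR = (peak - mean(sidelobe)) / std(sidelobe); a low value flags
    # an unreliable (possibly occluded) patch response.
    y, x = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones(response.shape, dtype=bool)
    mask[max(0, y - exclude):y + exclude + 1, max(0, x - exclude):x + exclude + 1] = False
    sidelobe = response[mask]
    return (response[y, x] - sidelobe.mean()) / (sidelobe.std() + 1e-12)

sharp = np.zeros((40, 40)); sharp[20, 20] = 1.0          # confident match
print(peak_to_sidelobe_ratio(sharp) > 100)               # → True
print(peak_to_sidelobe_ratio(np.ones((40, 40))) < 1)     # → True (no peak)
```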

Collaboration


Yulong Xu's top co-authors:

Yang Li (University of Science and Technology)
Jiabao Wang (University of Science and Technology)
Zhuang Miao (University of Science and Technology)
Yafei Zhang (University of Science and Technology)
Hang Li (University of Science and Technology)
Bo Zhou (University of Science and Technology)
Gengning Zhang (University of Science and Technology)
Ming He (University of Science and Technology)
Shan Zou (University of Science and Technology)
Weiguang Xu (University of Science and Technology)