Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jiabao Wang is active.

Publication


Featured research published by Jiabao Wang.


Archive | 2012

A Framework for Moving Target Detection, Recognition and Tracking in UAV Videos

Jiabao Wang; Yafei Zhang; Jianjiang Lu; Weiguang Xu

In this paper, we present a compound framework for moving target detection, recognition, and tracking in UAV videos captured at different altitudes. The framework is built on a "Divide and Merge" idea, expressed as follows. First, we detect small and slowly moving targets using a forward-backward motion history image (MHI). Second, two distinct tracking algorithms, Particle Filter and Mean Shift, are applied to track moving targets in videos captured at different altitudes. The recognition module is then divided into two parts: instance recognition and category recognition. The former re-identifies a target that is occluded by trees or buildings and reappears later, while the latter classifies each detected target into a category using a HoG-based SVM classifier. In addition, recognition-based abnormal target detection and clustering-based abnormal trajectory detection are added to the framework. With this framework, moving targets can be tracked in real time, and an alarm is raised within seconds for a recognized target or an abnormal trajectory.
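The forward-backward MHI is not spelled out in the abstract; purely as an illustration, the sketch below shows how a basic (forward-only) motion history image can flag small moving regions in a video, assuming OpenCV 4+ and a hypothetical input file `uav_video.mp4`. It is not the authors' implementation.

```python
# Minimal sketch of motion-history-image (MHI) based moving-region detection.
import cv2
import numpy as np

MHI_DURATION = 15     # frames a motion trace persists
DIFF_THRESHOLD = 25   # intensity change treated as motion

def update_mhi(mhi, prev_gray, gray):
    """Decay the motion history and stamp newly moving pixels."""
    diff = cv2.absdiff(gray, prev_gray)
    mhi[mhi > 0] -= 1                          # linear decay of old motion
    mhi[diff > DIFF_THRESHOLD] = MHI_DURATION  # refresh where motion occurred
    return mhi

cap = cv2.VideoCapture("uav_video.mp4")        # hypothetical input path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev_gray.shape, dtype=np.int32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = update_mhi(mhi, prev_gray, gray)
    # Candidate moving targets: connected regions of recent motion.
    mask = (mhi > MHI_DURATION // 2).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
    prev_gray = gray
```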


IEEE Signal Processing Letters | 2016

Robust Scale Adaptive Kernel Correlation Filter Tracker With Hierarchical Convolutional Features

Yang Li; Yafei Zhang; Yulong Xu; Jiabao Wang; Zhuang Miao

Visual object tracking is a challenging task due to object appearance changes caused by shape deformation, heavy occlusion, background clutter, illumination variation, and camera motion. In this letter, we propose a novel robust algorithm that decomposes tracking into translation and scale estimation. We estimate the translation using five correlation filters with hierarchical convolutional features, which produce multilevel correlation response maps to collaboratively infer the target location. At the same time, we calculate the scale variation with another correlation filter based on histogram of oriented gradients features. Extensive experimental results on a large-scale benchmark of 50 challenging sequences show that the proposed algorithm achieves outstanding performance against state-of-the-art methods.
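To illustrate how a correlation response map is produced, the following is a minimal single-channel correlation filter (ridge regression in the Fourier domain, MOSSE-style). It only shows the general mechanism; the paper uses kernelized filters on hierarchical CNN feature channels, which this sketch does not reproduce.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Desired response: a Gaussian peak centred on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, label, lam=1e-2):
    """Closed-form filter: W = (Y . conj(X)) / (X . conj(X) + lam)."""
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(label)
    return (Y * np.conj(X)) / (X * np.conj(X) + lam)

def detect(W, patch):
    """Correlation response map; the argmax gives the translation estimate."""
    response = np.real(np.fft.ifft2(W * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return response, (dy, dx)

# Usage with a dummy feature map standing in for one CNN channel.
feat = np.random.rand(64, 64)
W = train_filter(feat, gaussian_label(64, 64))
_, peak = detect(W, np.roll(feat, (3, 5), axis=(0, 1)))
```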


intelligent information technology application | 2009

A Framework of CBIR System Based on Relevance Feedback

Jianjiang Lu; Zhenghui Xie; Ran Li; Yafei Zhang; Jiabao Wang

Content-based image retrieval (CBIR) is an effective approach for obtaining desired images. However, due to the semantic gap between low-level visual features and high-level image concepts, state-of-the-art CBIR systems often fail to achieve satisfactory retrieval performance. In this paper, we propose a novel CBIR system framework. To bridge the semantic gap, a relevance feedback mechanism is built into the system. A richer set of low-level features is included, providing a more complete description of image content. A genetic algorithm with bi-coded chromosomes is used to obtain the optimal features and corresponding optimal weights from users' relevance feedback. With the optimal feature set and weights, the similarity between each image in the original search results and the query image is treated as the main factor in the ranking score.
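As a sketch of the ranking step only: below, images are ranked by a weighted distance over low-level feature vectors, where the per-dimension weights are the quantity the GA-based relevance feedback would learn (the GA itself is not implemented here, and the feature dimensions are illustrative, not the paper's descriptor set).

```python
import numpy as np

def weighted_distance(query, candidate, weights):
    """Weighted Euclidean distance over concatenated feature vectors."""
    diff = query - candidate
    return np.sqrt(np.sum(weights * diff * diff))

def rank_images(query_feat, db_feats, weights):
    """Return database indices sorted from most to least similar."""
    dists = [weighted_distance(query_feat, f, weights) for f in db_feats]
    return np.argsort(dists)

# Usage: 3 database images, 8-dimensional features, uniform initial weights.
db = np.random.rand(3, 8)
query = np.random.rand(8)
weights = np.ones(8) / 8      # relevance feedback would re-learn these
print(rank_images(query, db, weights))
```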


asia-pacific conference on wearable computing systems | 2010

Video Analysis and Trajectory Based Video Annotation System

Yang Li; Yafei Zhang; Jianjiang Lu; Ran Li; Jiabao Wang

In recent years, automatic video analysis has become an important research topic for video management tasks such as video indexing and retrieval. The application domains are disparate, ranging from video surveillance to automatic annotation of sports videos or TV shots. Whatever the application field, most work in video analysis follows one of two main approaches: the first, based on explicit event recognition, focuses on finding high-level, semantic interpretations of video sequences, while the second is based on anomaly detection. In this paper, we deal with the first approach, where the final goal is to label recognized video events. To achieve automated analysis and annotation of events in videos, we have developed a novel video analysis and trajectory based video annotation system called VATAS. The system comprises four main modules: global motion estimation, moving object detection, object tracking, and video annotation. Experimental results demonstrate the validity of the proposed approach.
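As a generic illustration of the trajectory-building idea (not the VATAS modules themselves), the sketch below detects moving blobs with background subtraction and records their centroids frame by frame, assuming a hypothetical input file `input_video.mp4`.

```python
import cv2

cap = cv2.VideoCapture("input_video.mp4")   # hypothetical input path
bg = cv2.createBackgroundSubtractorMOG2()
trajectory = []                              # list of (frame_index, cx, cy)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)   # follow the largest blob
        m = cv2.moments(c)
        if m["m00"] > 0:
            trajectory.append((frame_idx,
                               m["m10"] / m["m00"],
                               m["m01"] / m["m00"]))
    frame_idx += 1
```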


IEEE Signal Processing Letters | 2017

MS-RMAC: Multiscale Regional Maximum Activation of Convolutions for Image Retrieval

Yang Li; Yulong Xu; Jiabao Wang; Zhuang Miao; Yafei Zhang

Recent works have demonstrated that image descriptors produced by convolutional feature maps provide state-of-the-art performance for image retrieval and classification problems. However, features from a single convolutional layer are not robust enough to shape deformation, scale variation, and heavy occlusion. In this letter, we present a simple and straightforward approach for extracting multiscale (MS) regional maximum activation of convolutions features from different layers of a convolutional neural network. We also propose aggregating the MS features into a single vector with a parameter-free hedge method for image retrieval. Extensive experimental results on three challenging benchmark datasets indicate that the proposed method achieves outstanding performance against state-of-the-art methods.
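A minimal sketch of regional maximum-activation pooling over one convolutional feature map is given below, with aggregation across region scales by simple summation of l2-normalised descriptors. The grid-shaped regions and the plain summation are simplifying assumptions; the paper's hedge-based weighting across layers is not reproduced.

```python
import numpy as np

def regional_max_pool(fmap, n_regions):
    """Split a (C, H, W) feature map into an n x n grid and max-pool each
    region into a C-dimensional descriptor."""
    C, H, W = fmap.shape
    descs = []
    for i in range(n_regions):
        for j in range(n_regions):
            r = fmap[:, i * H // n_regions:(i + 1) * H // n_regions,
                        j * W // n_regions:(j + 1) * W // n_regions]
            descs.append(r.reshape(C, -1).max(axis=1))
    return np.stack(descs)          # shape: (n_regions**2, C)

def aggregate(fmap, scales=(1, 2, 3)):
    """Sum l2-normalised regional descriptors over several region scales."""
    vec = np.zeros(fmap.shape[0])
    for s in scales:
        for d in regional_max_pool(fmap, s):
            vec += d / (np.linalg.norm(d) + 1e-12)
    return vec / (np.linalg.norm(vec) + 1e-12)

# Usage with a dummy 256-channel feature map.
print(aggregate(np.random.rand(256, 14, 14)).shape)   # (256,)
```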


international congress on image and signal processing | 2013

Seatbelt detection based on cascade Adaboost classifier

Wei Li; Jianjiang Lu; Yang Li; Yafei Zhang; Jiabao Wang; Hang Li

Vehicle safety is an increasing concern. Whether the driver is wearing a seatbelt and whether the vehicle is speeding are important indicators of vehicle safety. However, manual searching, detection, and recording consume a great deal of manpower and time. This paper proposes a seatbelt detection system based on a cascade Adaboost classifier: it detects the vehicle windows, performs Canny edge detection on the gradient map of the window images, and applies the probabilistic Hough transform to extract the straight line segments of seatbelts. The system thus achieves intelligent seatbelt detection.
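The line-extraction stage can be sketched directly with standard OpenCV calls, as below: Canny edges on a (hypothetical, already cropped) window region followed by a probabilistic Hough transform. The cascade Adaboost window detector is assumed to exist and is not shown; thresholds and the angle filter are illustrative only.

```python
import cv2
import numpy as np

window = cv2.imread("window_crop.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
edges = cv2.Canny(window, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=60, maxLineGap=10)

# Keep roughly diagonal segments as seatbelt candidates.
candidates = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 20 < angle < 70:
            candidates.append((x1, y1, x2, y2))
```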


IEEE Signal Processing Letters | 2016

Patch-based Scale Calculation for Real-time Visual Tracking

Yulong Xu; Jiabao Wang; Hang Li; Yang Li; Zhuang Miao; Yafei Zhang

Robust scale calculation is a challenging problem in visual tracking. Most existing trackers fail to handle large scale variations in complex videos. To address this issue, we propose a robust and efficient scale calculation method in a tracking-by-detection framework, which divides the target into four patches and computes the scale factor by finding the maximum response position of each patch via a kernelized correlation filter with color attributes. In particular, we employ weighting coefficients to remove abnormal matching points and transform the desired training output of the conventional classifier to solve the location ambiguity problem. Experiments are performed on several challenging color sequences with scale variations from a recent benchmark evaluation. The results show that our method outperforms state-of-the-art tracking methods while operating in real time.
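One way to picture the geometry, under the assumption that the four patch centres have already been matched by the correlation filters (not shown), is to compare the spread of the matched centres in the new frame with the spread in the previous one. This is a simplified reading of the idea, not the paper's exact estimator.

```python
import numpy as np

def scale_factor(old_centres, new_centres):
    """Ratio of mean pairwise distances of patch centres (new vs. old)."""
    def spread(pts):
        pts = np.asarray(pts, dtype=float)
        d = [np.linalg.norm(pts[i] - pts[j])
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
        return np.mean(d)
    return spread(new_centres) / spread(old_centres)

# Usage: four quadrant-patch centres that moved roughly 10% further apart.
old = [(10, 10), (30, 10), (10, 30), (30, 30)]
new = [(9, 9), (31, 9), (9, 31), (31, 31)]
print(scale_factor(old, new))   # ~1.1, i.e. the target grew by about 10%
```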


Journal of Computers | 2013

Target Detection and Pedestrian Recognition in Infrared Images

Jiabao Wang; Yafei Zhang; Jianjiang Lu; Yang Li

By improving the local contrast between targets and background in static infrared images, we propose a simple and effective background model for target detection. At the same time, a novel learning algorithm is presented for training a discriminatively trained, part-based model with only positive images for pedestrian recognition. The background models are constructed from the static infrared images by morphological operations. The learning algorithm is based on the ramp loss function, which can filter out false negatives from the collected negative examples; this is a great advantage when training deformable part models with latent variables on a dataset with a large number of noisy examples. Experiments show that our background model achieves high precision in target detection, and that the discriminative part model trained by the proposed learning approach, aided by the target detection, recognizes the targets reliably.
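A minimal sketch of a morphology-based background estimate for a static infrared image is shown below: a large-kernel opening suppresses small bright targets, so the residual image highlights them. The kernel size, thresholding, and file name are illustrative assumptions, not the paper's exact parameters.

```python
import cv2

ir = cv2.imread("infrared_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
background = cv2.morphologyEx(ir, cv2.MORPH_OPEN, kernel)      # background model
residual = cv2.subtract(ir, background)                        # targets stand out

_, target_mask = cv2.threshold(residual, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
detections = [cv2.boundingRect(c) for c in contours]
```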


Archive | 2012

An Unsupervised Framework of Video Event Analysis

Weiguang Xu; Jianjiang Lu; Yafei Zhang; Jiabao Wang

Video event analysis has become one of the most active research topics in computer vision. Previous works have found that models based on statistical grammars outperform other models in many aspects. However, manually defining production rules limits their capability to generalize. In this paper, we adopt Liang's nonparametric HDP-SCFG model and present an unsupervised framework to overcome this limitation. We make three main contributions. 1) To the best of our knowledge, this is the first time a nonparametric grammar (HDP-SCFG) has been transplanted to the area of event analysis in video. 2) We define event primitives and construct their sequences for single-agent and double-agent cases separately, instead of the previous mixed-up manner, which enhances the ability to describe single- and multi-agent events. 3) We present a modified Earley-Stolcke parser (MES), in which an additional variable is attached to each state to accumulate a penalty. The MES enhances robustness to unexpected low-level errors while adding little computational complexity.


intelligent data engineering and automated learning | 2016

Very Deep Neural Network for Handwritten Digit Recognition

Yang Li; Hang Li; Yulong Xu; Jiabao Wang; Yafei Zhang

Handwritten digit recognition is an important but challenging task, and building an efficient artificial neural network architecture that matches human performance on this task remains a difficult problem. In this paper, we propose a new very deep neural network architecture for handwritten digit recognition. Notably, we did not depart from the classical convolutional neural network architecture, but pushed it to the limit by substantially increasing the depth. Through a carefully crafted design, we propose two different basic building blocks and increase the depth of the network while keeping the computational budget constant. On the very competitive MNIST handwriting benchmark, our method achieves the best error rate reported so far on the original dataset (\(0.47\,\% \pm 0.05\,\%\)), without data distortion or model combination, demonstrating the superiority of our work.
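Purely to make the "deeper classical convnet" idea concrete, the following PyTorch sketch stacks 3x3 convolution blocks on MNIST-sized inputs. The block design, channel counts, and depth are assumptions for illustration and do not reproduce the authors' two building blocks or their exact architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """A basic 3x3 convolution block: conv + batch norm + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class DeepDigitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 32), conv_block(32, 32), nn.MaxPool2d(2),    # 28 -> 14
            conv_block(32, 64), conv_block(64, 64), nn.MaxPool2d(2),   # 14 -> 7
            conv_block(64, 128), conv_block(128, 128),
        )
        self.classifier = nn.Linear(128 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: a dummy MNIST-sized batch.
logits = DeepDigitNet()(torch.zeros(8, 1, 28, 28))
print(logits.shape)   # torch.Size([8, 10])
```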

Collaboration


Dive into Jiabao Wang's collaborations.

Top Co-Authors

Yang Li, University of Science and Technology
Yafei Zhang, University of Science and Technology
Zhuang Miao, University of Science and Technology
Yulong Xu, University of Science and Technology
Jianjiang Lu, University of Science and Technology
Hang Li, University of Science and Technology
Weiguang Xu, University of Science and Technology
Xiancai Zhang, University of Science and Technology
Wei Li, University of Science and Technology
Bo Zhou, University of Science and Technology