Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Qingjie Zhao is active.

Publication


Featured research published by Qingjie Zhao.


Pattern Recognition | 2014

Abrupt motion tracking using a visual saliency embedded particle filter

Yingya Su; Qingjie Zhao; Liujun Zhao; Dongbing Gu

Abrupt motion is a significant challenge that commonly causes traditional tracking methods to fail. This paper presents an improved visual saliency model and integrates it into a particle filter tracker to solve this problem. Once the target is lost, our algorithm recovers tracking by detecting the target region among the salient regions obtained from the saliency map of the current frame. In addition, to strengthen the saliency of the target region, the target model is used as prior knowledge to calculate a set of weights that adaptively construct our improved saliency map. Furthermore, we adopt the covariance descriptor as the appearance model to describe the object more accurately. Compared with several other tracking algorithms, the experimental results demonstrate that our method is more robust in dealing with various types of abrupt motion scenarios.
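
The re-detection step described above can be pictured with a short sketch. The snippet below is a minimal, hypothetical illustration in Python/NumPy: it uses the classic spectral-residual saliency of Hou and Zhang as a stand-in for the paper's improved saliency model, a plain grayscale-histogram match instead of the covariance descriptor, and made-up helper names such as redetect_and_reinit; it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label, find_objects

def spectral_residual_saliency(gray):
    """Classic spectral-residual saliency map (Hou & Zhang, 2007)."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - gaussian_filter(log_amp, sigma=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)
    return sal / sal.max()

def redetect_and_reinit(gray, target_hist, patch_hw, n_particles=200):
    """Pick the salient blob whose grayscale histogram best matches the
    target model, then spread particles (x, y) around its centre."""
    sal = spectral_residual_saliency(gray)
    blobs, _ = label(sal > 0.5)                       # candidate salient regions
    best, best_score = None, -np.inf
    for sl in find_objects(blobs):
        hist, _ = np.histogram(gray[sl], bins=16, range=(0, 255), density=True)
        score = -np.abs(hist - target_hist).sum()     # simple histogram match;
        if score > best_score:                        # the paper instead uses a
            best_score, best = score, sl              # covariance descriptor
    if best is None:                                  # nothing salient found
        return None, best_score
    cy = 0.5 * (best[0].start + best[0].stop)
    cx = 0.5 * (best[1].start + best[1].stop)
    particles = np.column_stack([                     # re-spread the particle set
        np.random.normal(cx, patch_hw[1] / 4.0, n_particles),
        np.random.normal(cy, patch_hw[0] / 4.0, n_particles),
    ])
    return particles, best_score
```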


Signal Processing: Image Communication | 2016

Blind image quality assessment by relative gradient statistics and adaboosting neural network

Lixiong Liu; Yi Hua; Qingjie Zhao; Hua Huang; Alan C. Bovik

The image gradient is a commonly computed image feature and a potentially predictive factor for image quality assessment (IQA). Indeed, it has been successfully used for both full-reference and no-reference image quality prediction. However, the gradient orientation has not been deeply explored as a predictive source of information for image quality assessment. Here we seek to amend this by studying the quality relevance of the relative gradient orientation, viz., the gradient orientation relative to the surround. We also deploy a relative gradient magnitude feature which accounts for perceptual masking and utilize an AdaBoosting back-propagation (BP) neural network to map the image features to image quality. The generalization ability of the AdaBoosting BP neural network results in an effective and robust quality prediction model. The new model, called Oriented Gradients Image Quality Assessment (OG-IQA), is shown to deliver highly competitive image quality prediction performance compared with the most popular IQA approaches. Furthermore, we show that OG-IQA has good database independence and low complexity. In summary, OG-IQA extracts a 6-dimensional relative gradient feature vector from the input image, utilizes an AdaBoosting BP neural network to map the image features to image quality, and delivers highly competitive prediction performance at a relatively low time complexity.
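
As a rough illustration of the pipeline (relative-gradient statistics fed to a boosted regressor), the sketch below uses SciPy Sobel gradients and scikit-learn's AdaBoostRegressor as a stand-in for the paper's AdaBoosting BP neural network. The six summary statistics computed here are an assumption for illustration and do not reproduce the exact OG-IQA feature set.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import AdaBoostRegressor

def gradient_features(gray):
    """Six summary statistics of the gradient magnitude and of 'relative'
    gradient magnitude/orientation (value minus a local surround average)."""
    gray = gray.astype(np.float64)
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)
    rel_mag = mag - ndimage.uniform_filter(mag, size=7)
    rel_ori = ori - ndimage.uniform_filter(ori, size=7)  # ignores angle wrap-around
    feats = []
    for m in (mag, rel_mag, rel_ori):
        feats += [m.mean(), m.std()]
    return np.array(feats)

def train_quality_model(images, mos_scores):
    """images: iterable of grayscale arrays; mos_scores: subjective scores."""
    X = np.vstack([gradient_features(img) for img in images])
    return AdaBoostRegressor(n_estimators=200).fit(X, mos_scores)
```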


IEEE Sensors Journal | 2016

Distributed Multi-Target Tracking Based on the K-MTSCF Algorithm in Camera Networks

Yanming Chen; Qingjie Zhao; Zhulin An; Peng Lv; Liujun Zhao

It is a challenging task to develop an effective multi-target tracking algorithm for camera networks due to factors such as spurious measurements, limited fields of view, and the complexity of data association. In real-life environments, the system model for camera networks is usually nonlinear, so a Kalman filter may be inappropriate for modeling such a system. Moreover, traditional joint probabilistic data association (JPDA) is prone to combinatorial explosion when the association probabilities have to be calculated. To solve these problems, a multi-target square-root cubature information weighted consensus filter (MTSCF) combined with a K-best joint probabilistic data association algorithm is proposed in this paper. The proposed K-MTSCF algorithm not only reduces the effect of data association uncertainty stemming from measurement ambiguity and the computational complexity of data association by using K-best JPDA, but also increases tracking accuracy and stability using the MTSCF algorithm. The experimental results demonstrate that the proposed approach performs favorably against state-of-the-art methods in terms of accuracy and stability for tracking multiple targets in camera networks.
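
To make the data-association idea concrete, the following is a much-simplified, single-target probabilistic data association (PDA) weight computation, not the paper's K-best JPDA (which enumerates the K best joint association events across all targets); it only illustrates how ambiguous measurements can be weighted probabilistically against a clutter hypothesis. All parameter values are illustrative assumptions.

```python
import numpy as np

def pda_weights(z_pred, S, measurements, p_detect=0.9, clutter_density=1e-4):
    """z_pred: predicted measurement (m,); S: innovation covariance (m, m);
    measurements: gated measurements, shape (n, m).
    Returns beta_0 (probability that no measurement originated from the
    target) and beta_j for each measurement."""
    S_inv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt(np.linalg.det(2.0 * np.pi * S))
    lik = np.array([norm * np.exp(-0.5 * (z - z_pred) @ S_inv @ (z - z_pred))
                    for z in measurements])
    num_j = p_detect * lik                       # measurement j from the target
    num_0 = (1.0 - p_detect) * clutter_density   # all measurements are clutter
    total = num_0 + num_j.sum()
    return num_0 / total, num_j / total
```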


Sensors | 2015

A Novel Square-Root Cubature Information Weighted Consensus Filter Algorithm for Multi-Target Tracking in Distributed Camera Networks

Yanming Chen; Qingjie Zhao

This paper deals with the problem of multi-target tracking in a distributed camera network using the square-root cubature information filter (SCIF). SCIF is an efficient and robust nonlinear filter for multi-sensor data fusion. In camera networks, multiple cameras are arranged in a dispersed manner to cover a large area, and the target may appear in the blind area due to the limited field of view (FOV). Besides, each camera might receive noisy measurements. To overcome these problems, this paper proposes a novel multi-target square-root cubature information weighted consensus filter (MTSCF), which reduces the effect of clutter or spurious measurements using joint probabilistic data association (JPDA) and proper weights on the information matrix and information vector. The simulation results show that the proposed algorithm can efficiently track multiple targets in camera networks and is obviously better in terms of accuracy and stability than conventional multi-target tracking algorithms.
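
The consensus part of the filter can be sketched independently of the square-root cubature update. The toy example below, written under the assumption that each camera already holds an information matrix and vector from its local measurement update, runs a few rounds of neighbour averaging in information form and then recovers a state estimate at every node; it omits the weighting scheme and the cubature measurement update of the actual MTSCF.

```python
import numpy as np

def consensus_fuse(Y_list, y_list, adjacency, n_iters=20, rate=0.4):
    """Y_list: per-camera information matrices (d, d); y_list: per-camera
    information vectors (d,); adjacency: (N, N) 0/1 neighbour matrix.
    Returns the per-node state estimates after consensus averaging."""
    Y = np.array(Y_list, dtype=float)
    y = np.array(y_list, dtype=float)
    N = len(Y_list)
    for _ in range(n_iters):
        Y_new, y_new = Y.copy(), y.copy()
        for i in range(N):
            for j in np.flatnonzero(adjacency[i]):
                Y_new[i] += rate / N * (Y[j] - Y[i])   # average with neighbours
                y_new[i] += rate / N * (y[j] - y[i])
        Y, y = Y_new, y_new
    # each node recovers (approximately) the same fused state x = Y^-1 y
    return [np.linalg.solve(Y[i], y[i]) for i in range(N)]
```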


2011 IEEE International Workshop on Open-source Software for Scientific Computation | 2011

Object tracking using improved Camshift with SURF method

Jianhong Li; Ji Zhang; Zhenhuan Zhou; Wei Guo; Bo Wang; Qingjie Zhao

Camshift is an effective algorithm for real-time dynamic target tracking, but it relies only on color features and is sensitive to illumination and other environmental factors. When similar colors exist in the background, the traditional Camshift algorithm may fail, i.e., the target is lost. To solve this problem, this paper first proposes an improved Camshift algorithm that reduces the influence of illumination interference. In addition, a method for judging whether the target is lost is also proposed. Once the target is judged to be lost, Speeded Up Robust Features (SURF), which are invariant to scale, rotation, and translation of images, are used to find it again so that the improved Camshift can keep tracking the target continuously. We implemented the method in C++ based on OpenCV. The results show that the proposed method is more robust than the traditional Camshift and gives better tracking performance than some other improved methods.
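
A minimal version of the lose-and-recover loop can be written with stock OpenCV calls, as sketched below. ORB features replace SURF here because SURF requires the non-free opencv-contrib build, and the lost-target test and window re-centering are crude placeholders rather than the paper's improved Camshift and judging method.

```python
import cv2
import numpy as np

def track(video_path, init_window):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = init_window
    roi = frame[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    orb = cv2.ORB_create()                             # ORB in place of SURF
    kp_t, des_t = orb.detectAndCompute(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, window = cv2.CamShift(back, window, term)
        wx, wy, ww, wh = window
        patch = back[wy:wy + wh, wx:wx + ww]
        lost = patch.size == 0 or patch.mean() < 20    # crude "target lost" test
        if lost and des_t is not None:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            kp_f, des_f = orb.detectAndCompute(gray, None)
            if des_f is not None:
                matches = matcher.match(des_t, des_f)
                if matches:
                    pts = np.float32([kp_f[m.trainIdx].pt for m in matches])
                    cx, cy = pts.mean(axis=0)          # re-centre the window
                    window = (int(cx - w / 2), int(cy - h / 2), w, h)
        yield window
```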


International Conference on Multimedia and Expo | 2015

Superpixel tracking via graph-based semi-supervised SVM and supervised saliency detection

Yuxia Wang; Qingjie Zhao

This paper proposes a superpixel tracking method via a graph-based hybrid discriminative-generative appearance model. By utilizing a superpixel-based graph structure as the visual representation, spatial information between superpixels is taken into account. To construct the discriminative appearance model, we propose a graph-based semi-supervised support vector machine (SVM) approach that takes superpixels in the current frame as unlabeled training samples and adjusts the classification result using the spatial information provided by a k-regular graph, making the tracker more robust to appearance variations. The adjusted classification result is further used in graph-based supervised saliency detection to generate a generative appearance model, making the real target more salient. Finally, we incorporate the hybrid appearance model into a particle filter framework. Experimental results on five challenging sequences demonstrate that our tracker is robust in dealing with occlusion and shape deformation.
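
The superpixel-level, graph-based labelling step might be approximated as follows: scikit-image's SLIC provides the superpixels, and scikit-learn's LabelSpreading (a graph-based semi-supervised method) stands in for the paper's graph-based semi-supervised SVM, with superpixels of the new frame treated as unlabeled samples. The mean-colour features and all parameters are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.semi_supervised import LabelSpreading

def superpixel_confidence(frame, labeled_feats, labels, n_segments=300):
    """frame: (H, W, 3) float image in [0, 1]; labeled_feats/labels: mean-colour
    features of superpixels from earlier frames, labelled fg (1) / bg (0).
    Returns a per-pixel foreground-confidence map."""
    seg = slic(frame, n_segments=n_segments, start_label=0)
    n_sp = seg.max() + 1
    feats = np.array([frame[seg == s].mean(axis=0) for s in range(n_sp)])
    X = np.vstack([labeled_feats, feats])
    y = np.concatenate([labels, -np.ones(n_sp, dtype=int)])   # -1 = unlabeled
    model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
    fg_prob = model.label_distributions_[len(labels):, 1]     # P(foreground)
    return fg_prob[seg]          # paint each pixel with its superpixel's score
```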


Cognitive Computation | 2016

A Real-Time Active Pedestrian Tracking System Inspired by the Human Visual System

Yuxia Wang; Qingjie Zhao; Bo Wang; Shixian Wang; Yu Zhang; Wei Guo; Zhiquan Feng

Pedestrian detection and tracking play a significant role in surveillance. Despite the numerous detection and tracking methods proposed in the literature, they fail when the pedestrian is too small to recognize, which is a common case in modern surveillance systems. To deal with such cases, we propose an active pedestrian tracking system inspired by the human visual system. A coarse-to-fine pedestrian detection algorithm is proposed for small-pedestrian detection by combining Gaussian mixture model background subtraction with histogram of oriented gradients detection. In addition, a three-dimensional pan–tilt–zoom control model is presented, which requires no calibration and is more accurate than other control models. To actively track a pedestrian in real time, we utilize an active control algorithm and a tracking–learning–detection tracker. Experimental results demonstrate that our active tracking system is both efficient and effective.
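
The coarse-to-fine detection stage could look roughly like the sketch below: MOG2 background subtraction proposes moving blobs cheaply, small candidates are upsampled, and OpenCV's default HOG people detector verifies them. This is a generic stand-in for the paper's detector, and the PTZ control model is not shown.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame, min_area=400, pad=16):
    fg = bg.apply(frame)
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]        # drop shadows
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = []
    for c in contours:
        if cv2.contourArea(c) < min_area:            # coarse stage: moving blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        roi = frame[y0:y + h + pad, x0:x + w + pad]
        scale = 1.0
        if roi.shape[0] < 128 or roi.shape[1] < 64:  # upsample tiny candidates
            scale = max(128.0 / roi.shape[0], 64.0 / roi.shape[1])
            roi = cv2.resize(roi, None, fx=scale, fy=scale)
        rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))  # fine stage: HOG
        for (rx, ry, rw, rh) in rects:
            detections.append((int(x0 + rx / scale), int(y0 + ry / scale),
                               int(rw / scale), int(rh / scale)))
    return detections
```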


Neurocomputing | 2015

Abrupt motion tracking via nearest neighbor field driven stochastic sampling

Tianfei Zhou; Yao Lu; Feng Lv; Huijun Di; Qingjie Zhao; Jian Zhang

Stochastic sampling based trackers have shown good performance for abrupt motion tracking and have therefore gained popularity in recent years. However, conventional methods tend to use a two-stage sampling paradigm in which the search space needs to be uniformly explored with an inefficient preliminary sampling phase. In this paper, we propose a novel sampling-based method in the Bayesian filtering framework to address this problem. Within the framework, nearest neighbor field estimation is utilized to compute the importance proposal probabilities, which guide the Markov chain search towards promising regions and thus enhance the sampling efficiency; given the motion priors, a smoothing stochastic sampling Monte Carlo algorithm is proposed to approximate the posterior distribution through a smoothing weight-updating scheme. Moreover, to track abrupt and smooth motions simultaneously, we develop an abrupt-motion detection scheme which can discover the presence of abrupt motions during online tracking. Extensive experiments on challenging image sequences demonstrate the effectiveness and robustness of our algorithm in handling abrupt motions.
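
A toy version of proposal-guided sampling is sketched below: a precomputed score map (standing in for the nearest-neighbor-field-derived proposal) occasionally triggers global jumps in an otherwise local Metropolis random walk over image positions. The acceptance rule omits the full Hastings correction for the asymmetric proposal, and the paper's smoothing weight-updating scheme is not reproduced.

```python
import numpy as np

def mh_sample_positions(likelihood, proposal_map, n_samples=500,
                        jump_prob=0.3, sigma=5.0, seed=0):
    """likelihood, proposal_map: (H, W) non-negative score maps."""
    rng = np.random.default_rng(seed)
    H, W = likelihood.shape
    p = proposal_map.ravel() / proposal_map.sum()
    state = np.array([H // 2, W // 2])
    samples = []
    for _ in range(n_samples):
        if rng.random() < jump_prob:                 # global, map-guided jump
            k = rng.choice(H * W, p=p)
            cand = np.array(divmod(k, W))
        else:                                        # local random walk
            cand = state + rng.normal(0.0, sigma, size=2)
            cand = np.clip(np.round(cand), [0, 0], [H - 1, W - 1]).astype(int)
        # simplified acceptance test (Hastings correction for the asymmetric
        # proposal is omitted, so this sampler is only approximate)
        a = likelihood[cand[0], cand[1]] / (likelihood[state[0], state[1]] + 1e-12)
        if rng.random() < min(1.0, a):
            state = cand
        samples.append(state.copy())
    return np.array(samples)
```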


Multimedia Tools and Applications | 2017

An image-based near-duplicate video retrieval and localization using improved Edit distance

Hao Liu; Qingjie Zhao; Hao Wang; Peng Lv; Yanming Chen

The rapid development of social networks in recent years has spurred enormous growth in near-duplicate videos. The existence of huge volumes of near-duplicates creates a rising demand for effective near-duplicate video retrieval techniques for copyright violation detection and search result reranking. In this paper, we propose an image-based algorithm using an improved Edit distance for near-duplicate video retrieval and localization. By regarding video sequences as strings, the Edit distance is used and extended to retrieve and localize near-duplicate videos. First, a bag-of-words (BOW) model is utilized to measure frame similarities, which is robust to spatial transformations. Then, non-near-duplicate videos are filtered out by computing the proposed relative Edit distance similarity (REDS). Next, a dynamic programming algorithm based on a detect-and-refine strategy is proposed to generate the path matrix, which is used to aggregate scores for the video similarity measure and to localize the similar parts. Experiments on the CC_WEB_VIDEO and TREC CBCD 2011 datasets demonstrate the effectiveness and robustness of the proposed method in retrieval and localization tasks.
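
The core retrieval measure can be sketched as an edit distance over frames, where two frames match when their bag-of-words histograms are sufficiently similar. The normalization used for the relative Edit distance similarity below is a plausible guess rather than the paper's exact definition, and the detect-and-refine path-matrix localization is not reproduced.

```python
import numpy as np

def frame_similarity(h1, h2):
    """Histogram-intersection similarity of two L1-normalized BOW histograms."""
    return np.minimum(h1, h2).sum()

def video_edit_distance(query, ref, match_thresh=0.6):
    """query, ref: (n, k) and (m, k) arrays of per-frame BOW histograms."""
    n, m = len(query), len(ref)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1)
    d[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if frame_similarity(query[i - 1], ref[j - 1]) >= match_thresh else 1
            d[i, j] = min(d[i - 1, j] + 1,        # delete a query frame
                          d[i, j - 1] + 1,        # insert a reference frame
                          d[i - 1, j - 1] + cost) # substitute / match
    return d[n, m]

def relative_edit_distance_similarity(query, ref, **kw):
    """1.0 for identical sequences, smaller for more dissimilar ones."""
    return 1.0 - video_edit_distance(query, ref, **kw) / max(len(query), len(ref))
```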


Signal, Image and Video Processing | 2015

Robust object tracking via online Principal Component–Canonical Correlation Analysis (P3CA)

Yuxia Wang; Qingjie Zhao

Effective object representation plays a significant role in object tracking. To fulfill the requirements of tracking robustness and effectiveness, in this paper we propose an adaptive appearance model called Principal Component–Canonical Correlation Analysis (P3CA). P3CA is a compact association of principal component analysis (PCA) and canonical correlation analysis (CCA), which results in robust tracking along with low computation cost. CCA is incorporated into the P3CA appearance model for its effectiveness in handling occlusion, owing to the introduction of a canonical correlation score instead of holistic information to evaluate target goodness. However, CCA is time consuming and often suffers from the Small Sample Size (3S) problem. To address these issues, PCA is incorporated, and we obtain our P3CA subspace by performing CCA on the low-dimensional data gained by projecting the high-dimensional observations onto the PCA subspaces. In addition, to account for appearance variations, we propose a novel online updating algorithm for the P3CA subspace, which updates the PCA and CCA subspaces cooperatively and synchronously. Finally, we incorporate the dynamic P3CA appearance model into the particle filter framework in a probabilistic manner and select the candidate object with the largest weight as the final tracking result. Comparative results on several challenging sequences demonstrate that our tracker performs better than a number of recently proposed state-of-the-art methods in handling partial occlusion and various appearance variations.
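
A loose, batch-mode sketch of the PCA-then-CCA idea is given below using scikit-learn: two aligned sets of vectorized patches are reduced with PCA (sidestepping the small-sample-size problem) before CCA, and the mean canonical correlation serves as a goodness score. The pairing of the two views and all dimensions are hypothetical, and the paper's online, cooperative subspace update is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

def canonical_correlation_score(view_a, view_b, n_pca=20, n_cca=3):
    """view_a, view_b: (n_samples, n_pixels) matrices of vectorized patches,
    with row i of view_a paired with row i of view_b (hypothetical setup,
    e.g. stored target templates vs. aligned candidate patches)."""
    pa = PCA(n_components=n_pca).fit(view_a)
    pb = PCA(n_components=n_pca).fit(view_b)
    A, B = pa.transform(view_a), pb.transform(view_b)   # PCA-reduced data
    cca = CCA(n_components=n_cca).fit(A, B)
    U, V = cca.transform(A, B)
    # mean correlation of the canonical variate pairs as a goodness score
    corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(n_cca)]
    return float(np.mean(corrs))
```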

Collaboration


Dive into Qingjie Zhao's collaborations.

Top Co-Authors

Peng Lv, Beijing Institute of Technology
Liujun Zhao, Beijing Institute of Technology
Yuxia Wang, Beijing Institute of Technology
Hao Liu, Beijing Institute of Technology
Wei Guo, Beijing Institute of Technology
Yanming Chen, Beijing Institute of Technology
Bo Wang, Beijing Institute of Technology
Shahzad Anwar, Beijing Institute of Technology
Weicun Xu, Beijing Institute of Technology