Publications
Featured research published by Zhiqiang Hou.
Optical Engineering | 2017
Zhiqiang Hou; Wangsheng Yu; Yang Xue; Zefenfen Jin; Bo Dai
In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features and has achieved significant success in addressing tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and the rectified linear unit, the tracker has a flexible and adjustable architecture without an offline pretraining process and effectively exploits robust and powerful features through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales according to the target's profile type. During tracking, the tracker adaptively selects the matched tracking network in accordance with the initial target's profile type, preserving the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
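As a concrete illustration of the building block behind this tracker, the sketch below trains a single sparse autoencoder layer with a ReLU encoder and an L1 sparsity penalty on a handful of patches, roughly the kind of online-only training the abstract describes. The layer sizes, penalty weight, and training loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): one sparse autoencoder layer with
# a ReLU encoder and an L1 sparsity penalty, trained online on a few patches.
import torch
import torch.nn as nn

class SparseAELayer(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256, sparsity_weight=1e-3):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)
        self.sparsity_weight = sparsity_weight

    def forward(self, x):
        h = torch.relu(self.encoder(x))   # ReLU yields naturally sparse codes
        return h, self.decoder(h)

    def loss(self, x):
        h, x_rec = self.forward(x)
        rec = ((x - x_rec) ** 2).mean()   # reconstruction error
        sparse = h.abs().mean()           # L1 penalty encourages sparsity
        return rec + self.sparsity_weight * sparse

if __name__ == "__main__":
    layer = SparseAELayer()
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
    patches = torch.rand(16, 1024)        # stand-in for flattened target patches
    for _ in range(50):
        opt.zero_grad()
        loss = layer.loss(patches)
        loss.backward()
        opt.step()
```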
Multimedia Tools and Applications | 2017
Wangsheng Yu; Zhiqiang Hou; Dan Hu; Peng Wang
In this paper, a robust mean shift tracking algorithm based on a refined appearance model (RAM) and an online update strategy is proposed. The main idea is to construct a more accurate appearance model to improve tracking precision and to design an online update strategy that adapts to appearance variation. At the beginning of tracking, the simple mean shift tracking algorithm is applied to the first few frames to collect a set of target templates containing both the foreground and background of the target. During model construction, the simple linear iterative clustering (SLIC) algorithm is used to obtain superpixels of the target templates, and the superpixels are further clustered to distinguish foreground from background. A weight vector is then obtained from this foreground/background classification and used to modify the kernel histogram appearance model. Subsequent frames are processed with the mean shift tracking algorithm using the modified appearance model, and stable tracking results without occlusion are selected to update the appearance model; the model update proceeds in the same way as model construction. Experimental results on several challenging test sequences indicate that the proposed algorithm copes well with both appearance variation and background change and achieves robust tracking performance. A further comprehensive experiment on OTB2013 demonstrates that the proposed tracking algorithm outperforms state-of-the-art methods in most cases.
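The refined appearance model plugs into the classical kernel-histogram mean shift machinery. The sketch below shows a kernel-weighted color histogram and one mean shift location update; the bin count and Epanechnikov kernel are standard choices, and the paper's superpixel-based weighting is not reproduced here.

```python
# Illustrative sketch (not the paper's implementation): kernel-weighted color
# histogram and one mean shift update; assumes the window lies inside the frame.
import numpy as np

def kernel_histogram(patch, bins=16):
    """Epanechnikov-weighted color histogram of an HxWx3 uint8 patch."""
    h, w, _ = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = ((ys - h / 2) / (h / 2)) ** 2 + ((xs - w / 2) / (w / 2)) ** 2
    k = np.clip(1.0 - d2, 0.0, None)                 # kernel weights
    idx = (patch // (256 // bins)).reshape(-1, 3)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, weights=k.ravel(), minlength=bins ** 3)
    return hist / (hist.sum() + 1e-12)

def mean_shift_step(frame, center, size, target_hist, bins=16):
    """One mean shift update of the target center inside the current window."""
    h, w = size
    y0, x0 = int(center[0] - h / 2), int(center[1] - w / 2)
    patch = frame[y0:y0 + h, x0:x0 + w]
    cand_hist = kernel_histogram(patch, bins)
    ratio = np.sqrt(target_hist / (cand_hist + 1e-12))  # per-bin weights sqrt(q/p)
    idx = (patch // (256 // bins)).reshape(-1, 3)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    wmap = ratio[flat].reshape(h, w)                     # back-projected weights
    ys, xs = np.mgrid[0:h, 0:w]
    dy = (wmap * ys).sum() / wmap.sum() - h / 2
    dx = (wmap * xs).sum() / wmap.sum() - w / 2
    return center[0] + dy, center[1] + dx
```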
IET Computer Vision | 2017
Zefenfen Jin; Zhiqiang Hou; Wangsheng Yu
Aiming at efficient feature matching and similarity search in visual tracking, this study proposes a tracking algorithm based on the quantum genetic algorithm, exploiting its global optimisation ability. In the framework of the quantum genetic algorithm, pixel positions are taken as individuals in the population, while scale-invariant feature transform and colour features form the target model. By defining an objective function, each individual's fitness value can be measured. Visual tracking is realised by searching for the pixel with the largest fitness value and returning its corresponding position. Experimental results show that the tracking algorithm proposed by the authors performs more efficiently than state-of-the-art tracking algorithms.
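The quantum genetic search itself is paper-specific; as a stand-in, the sketch below runs a plain generational genetic algorithm over candidate positions and returns the one with the largest fitness, where the fitness function (e.g. a feature-similarity score) is supplied by the caller. Population size, mutation scale, and generation count are assumptions.

```python
# Hedged sketch: a plain GA substitutes for the quantum genetic algorithm to
# illustrate the fitness-driven position search only.
import numpy as np

def ga_search(fitness, frame_shape, pop=50, gens=30, sigma=8.0, rng=None):
    """Search the (y, x) position maximizing `fitness` with a simple GA."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame_shape
    cand = rng.uniform([0, 0], [h, w], size=(pop, 2))
    for _ in range(gens):
        f = np.array([fitness(p) for p in cand])
        parents = cand[np.argsort(f)[-pop // 2:]]          # keep the better half
        pairs = rng.integers(0, len(parents), size=(pop, 2))
        children = (parents[pairs[:, 0]] + parents[pairs[:, 1]]) / 2.0  # crossover
        children += rng.normal(0.0, sigma, size=children.shape)         # mutation
        cand = np.clip(children, [0, 0], [h - 1, w - 1])
    f = np.array([fitness(p) for p in cand])
    return cand[int(np.argmax(f))]
```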
IEEE Transactions on Systems, Man, and Cybernetics | 2017
Zhiqiang Hou; Wangsheng Yu; Zefenfen Jin; Yufei Zha; Xianxiang Qin
Convolutional neural networks can efficiently exploit sophisticated hierarchical features whose different properties suit the visual tracking problem. In this paper, by jointly using multilayer convolutional features and constructing a scale pyramid, we propose an online scale-adaptive tracking method. We construct two separate correlation filters for translation and scale estimation. The translation filters improve the accuracy of target localization by a weighted fusion of multiple convolutional layers, while the separate scale filters achieve optimal and fast scale estimation over a scale pyramid. This design decreases the mutual errors of translation and scale estimation and efficiently reduces computational complexity. Moreover, to solve the problem of tracking drift due to severe occlusion or serious appearance changes of the target, we present a new adaptive and selective update mechanism to update the translation filters effectively. Extensive experimental results show that our proposed method achieves excellent overall performance compared with state-of-the-art methods.
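A minimal sketch of the translation-estimation idea: each convolutional layer trains its own single-channel ridge-regression correlation filter in the Fourier domain, and the per-layer response maps are fused with fixed weights before taking the peak. Real multi-channel features, cosine windows, and the authors' fusion weights are omitted.

```python
# Sketch under assumptions: MOSSE-style single-channel correlation filters
# and a weighted fusion of per-layer response maps.
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Gaussian response label peaked at the patch center."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(feat, gauss_label, lam=1e-2):
    """Closed-form ridge-regression filter in the Fourier domain."""
    F = np.fft.fft2(feat)
    Y = np.fft.fft2(gauss_label)
    return (np.conj(F) * Y) / (np.conj(F) * F + lam)

def response(filt, feat):
    """Correlation response map of a search-region feature."""
    return np.real(np.fft.ifft2(filt * np.fft.fft2(feat)))

def fused_translation(filters, feats, weights):
    """Weighted fusion of per-layer response maps; returns the peak location."""
    maps = [w * response(f, x) for f, x, w in zip(filters, feats, weights)]
    fused = np.sum(maps, axis=0)
    return np.unravel_index(np.argmax(fused), fused.shape)
```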
Pattern Recognition | 2018
Zhiqiang Hou; Wangsheng Yu; Lei Pu; Zefenfen Jin; Xianxiang Qin
Visual tracking is still a challenging task, as objects suffer significant appearance changes, fast motion, and serious occlusion. In this paper, we propose an occlusion-aware part-based tracker for robust visual tracking. We first present a novel occlusion-aware part-based model based on correlation filters that adaptively integrates a global model and a part-based model, effectively employing both global and local information to improve the robustness of the tracker. We then propose an integral pipeline aimed at long-term tracking under correlation filters, which achieves state-of-the-art performance. In this tracking pipeline, we adopt separate translation and scale estimation. For translation estimation, we exploit and jointly learn the hierarchical features of deep convolutional neural networks (CNNs) to locate the target center accurately; we then learn an independent scale correlation filter to handle scale variation. This design achieves better scale adaptation of the target and efficiently reduces computational complexity. We further improve the model update method by introducing the original, reliable information, which greatly alleviates the accumulation of errors from incorrect information and efficiently achieves long-term tracking. Extensive experimental results on several challenging benchmark datasets show that our proposed tracker achieves outstanding performance against state-of-the-art methods.
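The update strategy mentioned above, which reintroduces the original reliable information, can be illustrated with a simple linear-interpolation update that also mixes in the first-frame model; the learning rates below are assumed values, not the paper's.

```python
# Hedged sketch of a model update that limits error accumulation by blending
# in the first-frame (reliable) model; eta and rho are illustrative.
import numpy as np

def update_model(model, new_obs, first_frame_model, eta=0.02, rho=0.1):
    """Linear-interpolation update blended with the original first-frame model."""
    interp = (1.0 - eta) * model + eta * new_obs      # standard running update
    return (1.0 - rho) * interp + rho * first_frame_model
```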
Ninth International Conference on Graphic and Image Processing (ICGIP 2017) | 2018
Zefenfen Jin; Zhiqiang Hou; Wangsheng Yu; Hui Sun
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. Within the framework of the quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge, and texture features, and uses a fast covariance intersection algorithm to update the model. The low dimensionality of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the efficiency of the fast covariance intersection algorithm improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur, and so on.
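The region covariance descriptor and the covariance intersection update are both standard constructions. The sketch below builds a descriptor from per-pixel position, color, and gradient (edge) features, with texture channels added analogously, and fuses two covariance estimates with a fixed mixing weight; the 7-dimensional feature choice and the weight are assumptions.

```python
# Sketch under assumptions: Tuzel-style region covariance descriptor and a
# two-source covariance intersection update.
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of an HxWx3 float patch."""
    h, w, _ = patch.shape
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)                        # edge features
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([xs, ys,
                      patch[..., 0], patch[..., 1], patch[..., 2],
                      np.abs(gx), np.abs(gy)], axis=-1).reshape(-1, 7)
    return np.cov(feats, rowvar=False)                # 7x7 descriptor

def covariance_intersection(c1, c2, w=0.5):
    """Fuse two covariance estimates without knowing their cross-correlation."""
    inv = w * np.linalg.inv(c1) + (1.0 - w) * np.linalg.inv(c2)
    return np.linalg.inv(inv)
```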
IET Image Processing | 2018
Bo Dai; Zhiqiang Hou; Wangsheng Yu; Feng Zhu; Zefenfen Jin
The authors present a novel online visual tracking algorithm based on an ensemble autoencoder (AE). In contrast to other existing deep-model-based trackers, the proposed algorithm builds on the observation that image resolution influences vision procedures. When a deep neural network is employed to represent the object, the resolution corresponds to the network size. The authors apply a small network to represent the pattern at a relatively lower resolution and search for the object in a relatively larger neighbourhood. After roughly estimating the location of the object, they apply a large network, which provides more detailed information, to estimate the state of the object more accurately. Thus, a small AE is employed mainly for position searching and a larger one mainly for scale estimation. When tracking an object, the two networks interact under the framework of particle filtering. Extensive experiments on the benchmark dataset show that the proposed algorithm performs favourably compared with state-of-the-art methods.
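The coarse-to-fine interaction of the two networks can be sketched as a two-stage particle evaluation: a cheap score (standing in for the small AE) prunes a wide particle set, and an expensive score (standing in for the large AE) re-ranks the survivors. Both scoring functions and the keep ratio are assumptions.

```python
# Illustrative coarse-to-fine sketch; small_score and large_score are
# caller-supplied stand-ins for the small and large autoencoders.
import numpy as np

def coarse_to_fine(particles, small_score, large_score, keep=0.2):
    """particles: (N, 4) array of [y, x, h, w] candidate states."""
    s = np.array([small_score(p) for p in particles])        # cheap, wide search
    top = particles[np.argsort(s)[-max(1, int(keep * len(particles))):]]
    s_fine = np.array([large_score(p) for p in top])         # detailed re-scoring
    return top[int(np.argmax(s_fine))]
```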
International Conference on Image and Graphics | 2017
Zhiqiang Hou; Wangsheng Yu; Zefenfen Jin
Deep learning can explore robust and powerful feature representations from data and has gained significant attention in visual tracking tasks. However, due to high computational complexity and a time-consuming training process, most existing deep learning based trackers require offline pre-training on a large-scale dataset and have low tracking speeds. Aiming at these difficulties, we propose an online deep learning tracker based on sparse autoencoders (SAE) and the rectified linear unit (ReLU). By combining ReLU with SAE, the deep neural networks (DNNs) obtain a sparsity similar to that of DNNs with offline pre-training. This inherent sparsity frees the deep model from the complex pre-training process and makes it well suited to online-only tracking. Meanwhile, data augmentation is applied to the single positive sample to balance the quantities of positive and negative samples, which improves the stability of the model to some extent. Finally, to overcome the randomness and drift of the particle filter, we adopt a local dense sampling search method to generate a local confidence map for locating the target's position. Moreover, several corresponding update strategies are proposed to improve the robustness of the proposed tracker. Extensive experimental results show the effectiveness and robustness of the proposed tracker in challenging environments against state-of-the-art methods. The proposed tracker not only dispenses with the complicated and time-consuming pre-training process but also achieves fast and robust online tracking.
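Two of the ingredients above, augmenting the single positive sample and dense local sampling of a confidence map, are easy to illustrate. The jitter ranges, sample count, and search radius in the sketch below are assumptions, and the scoring function is caller-supplied.

```python
# Hedged sketch: jittered positive-sample augmentation and a dense local
# search; assumes all sampled windows lie inside the frame.
import numpy as np

def augment_positive(frame, box, n=32, shift=4, scale_jitter=0.05, rng=None):
    """box = (y, x, h, w); returns n shifted/scaled positive patches."""
    rng = np.random.default_rng() if rng is None else rng
    y, x, h, w = box
    patches = []
    for _ in range(n):
        dy, dx = rng.integers(-shift, shift + 1, size=2)
        s = 1.0 + rng.uniform(-scale_jitter, scale_jitter)
        hh, ww = int(h * s), int(w * s)
        patches.append(frame[y + dy:y + dy + hh, x + dx:x + dx + ww])
    return patches

def local_confidence_map(score, center, radius=16, step=2):
    """Dense local sampling; returns the position with the highest confidence."""
    best, best_pos = -np.inf, center
    for dy in range(-radius, radius + 1, step):
        for dx in range(-radius, radius + 1, step):
            pos = (center[0] + dy, center[1] + dx)
            s = score(pos)
            if s > best:
                best, best_pos = s, pos
    return best_pos
```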
Applied Optics | 2017
Zefenfen Jin; Zhiqiang Hou; Wangsheng Yu; Chuanhua Chen
It is difficult for a single-feature tracking algorithm to achieve strong robustness in a complex environment. To solve this problem, we propose a multifeature fusion tracking algorithm based on game theory. By treating the color and texture features as two players, the algorithm accomplishes tracking by using a mean shift iterative formula to search for the Nash equilibrium of the game. The contributions of the different features are thus kept in optimal balance, so that the algorithm can take full advantage of feature fusion. Experimental results show that the algorithm performs well, especially under scene variation, target occlusion, and similar-object interference.
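The Nash-equilibrium derivation is specific to the paper; as a simple stand-in, the sketch below balances the two cues by weighting each one according to its Bhattacharyya similarity to its reference histogram, which captures the kind of adaptive contribution balance the game-theoretic formulation seeks.

```python
# Simple stand-in (not the paper's Nash-equilibrium solution): similarity-
# proportional weighting of two feature cues.
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return np.sum(np.sqrt(p * q))

def fusion_weights(color_sim, texture_sim):
    """Normalize per-cue similarities into fusion weights."""
    total = color_sim + texture_sim + 1e-12
    return color_sim / total, texture_sim / total

# usage: w_c, w_t = fusion_weights(bhattacharyya(pc, qc), bhattacharyya(pt, qt))
# fused_map = w_c * color_weight_map + w_t * texture_weight_map
```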
International Conference on Wireless Communications and Signal Processing | 2016
Zefenfen Jin; Zhiqiang Hou; Xianglin Wang; Wangsheng Yu
Aiming to improve the robustness of single-feature tracking algorithms in visual object tracking, we put forward an object tracking algorithm based on game theory with multi-feature fusion. Under the mean shift tracking framework, the color features and the motion features expressed by optical flow are treated as two players; by searching for the Nash equilibrium of their game, the contributions of the various features reach an optimal balance, so that the advantages of feature fusion are better exploited. Experimental results show that the algorithm is robust to drastic object motion, occlusion, and background interference. The proposed algorithm builds on the traditional mean shift algorithm and fuses multiple features using game theory, and it shows good tracking performance.