Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaofen Xing is active.

Publication


Featured research published by Xiaofen Xing.


Multimedia Tools and Applications | 2017

Robust object tracking based on sparse representation and incremental weighted PCA

Xiaofen Xing; Fuhao Qiu; Xiangmin Xu; Chunmei Qing; Yinrong Wu

Object tracking plays a crucial role in many computer vision applications, but it remains challenging due to variations in illumination, shape deformation, and occlusion. A new robust tracking method based on incremental weighted PCA and sparse representation is proposed. An iterative process consisting of a soft segmentation step and a foreground distribution update step is adopted to estimate the foreground distribution; combined with incremental weighted PCA, this yields a target appearance model expressed in PCA components with less influence from the background contained in the target templates. To make the appearance model more discriminative, both trivial and background templates are added to the dictionary for sparse representation of the target appearance. Experiments show that the proposed method, with some level of background awareness, is robust against illumination change, occlusion, and appearance variation, and outperforms several recent important tracking methods in tracking performance.
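A minimal sketch of the sparse-representation side of such an appearance model, assuming an ISTA-style l1 solver and a dictionary of target, background, and trivial (identity) templates; the template shapes, the `lam` parameter, and the confidence measure are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def sparse_represent(patch, target_T, background_T, lam=0.01, n_iter=100):
    """Sparse-code an observed patch over target, background, and
    trivial (identity) templates via ISTA (iterative soft thresholding)."""
    x = patch.reshape(-1)
    D = np.hstack([target_T, background_T, np.eye(x.size)])  # dictionary
    c = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        c = c - step * (D.T @ (D @ c - x))                        # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)  # l1 shrinkage
    # Confidence from reconstruction with the target templates alone:
    k = target_T.shape[1]
    resid = np.linalg.norm(x - target_T @ c[:k])
    return c, np.exp(-resid)
```

Candidate locations whose patches reconstruct well from the target templates (small residual, few background/trivial coefficients) score highest, which is what gives the model its background awareness.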


International Conference on Digital Signal Processing | 2015

Multi-invariance appearance model for object tracking

Guicong Xu; Xiangmin Xu; Xiaofen Xing; Bolun Cai; Chunmei Qing

Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers rely on intensity information, texture information, or simple color representations for image description, which cannot provide all-around invariance to different scene conditions, and no single tracking approach successfully handles all scenarios. Given the complexity of the tracking problem, a combination of multiple features should be computationally efficient and reasonably robust while maintaining high discriminative power. This paper combines intensity information (cross-bin distribution field, CDF), texture information (enhanced histograms of oriented gradients, EHOG), and color information (color names, CN) in a tracking-by-detection framework, in which the simple CSK tracker is extended to multi-dimensional features and multi-cue fusion. The proposed approach improves the baseline single-cue tracker by 4.4% in distance precision. Furthermore, at 75.4% distance precision, it outperforms most recent state-of-the-art tracking algorithms.
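For intuition, here is a minimal multi-channel correlation filter trained in the Fourier domain, the usual way a CSK-style tracker is extended to multi-dimensional features. This linear (MOSSE-like) sketch omits the kernel trick and the specific CDF/EHOG/CN features; the shapes and the regularizer `lam` are illustrative:

```python
import numpy as np

def train_mccf(feature_maps, target_response, lam=1e-2):
    """Ridge-regression correlation filter over stacked feature channels.
    feature_maps: (C, H, W) feature channels of one training patch.
    target_response: (H, W) desired Gaussian-shaped output."""
    F = np.fft.fft2(feature_maps, axes=(-2, -1))  # per-channel spectra
    G = np.fft.fft2(target_response)
    # All cues share one denominator, so the channels are fused into
    # a single response map rather than tracked independently.
    num = G[None] * np.conj(F)
    den = np.sum(F * np.conj(F), axis=0) + lam
    return num / den[None]

def detect(filt, feature_maps):
    """Locate the target as the peak of the fused response map."""
    F = np.fft.fft2(feature_maps, axes=(-2, -1))
    response = np.real(np.fft.ifft2(np.sum(filt * F, axis=0)))
    return np.unravel_index(response.argmax(), response.shape)
```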


2017 4th International Conference on Information, Cybernetics and Computational Social Systems (ICCSS) | 2017

A novel deep-learning based framework for multi-subject emotion recognition

Rui Qiao; Chunmei Qing; Tong Zhang; Xiaofen Xing; Xiangmin Xu

Electroencephalogram (EEG) based emotion recognition is a challenging pattern recognition task that has attracted increasing attention in recent years and is widely used in medicine, affective computing, and other fields. Traditional approaches often lack high-level features and generalize poorly, which makes them difficult to apply in practice. In this paper, we propose a novel model for multi-subject emotion classification. The basic idea is to extract high-level features with a deep learning model and to transform the traditional subject-independent recognition task into a multi-subject recognition task. Experiments are carried out on the DEAP dataset, and the results demonstrate the effectiveness of the proposed method.
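A sketch of what such a model can look like, assuming DEAP-style input (32 EEG channels, segments of 128 samples) and a binary label such as high/low valence. The paper's exact architecture is not reproduced here; this PyTorch snippet only illustrates learning high-level features end-to-end and pooling segments across subjects into one training set:

```python
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    """Minimal deep model for EEG emotion recognition (illustrative)."""
    def __init__(self, n_channels=32, n_samples=128, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # learned high-level feature vector
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):  # x: (batch, channels, samples)
        f = self.features(x).squeeze(-1)
        return self.classifier(f)

# Training on segments pooled from all subjects is what turns the
# subject-independent task into a multi-subject one.
model = EmotionNet()
logits = model(torch.randn(8, 32, 128))  # dummy batch
```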


Pacific Rim Conference on Multimedia | 2016

Exploiting Local Feature Fusion for Action Recognition

Jie Miao; Xiangmin Xu; Xiaoyi Jia; Haoyu Huang; Bolun Cai; Chunmei Qing; Xiaofen Xing

Densely sampled local features with bag-of-words models have been widely applied to action recognition. Conventional approaches assume that different kinds of local features are uncorrelated: each is processed and encoded separately, and they are fused only in the video-level representation. In practice, however, these local features are not uncorrelated. To address this problem, multi-view local feature fusion is exploited for local descriptor fusion in action recognition. Specifically, tensor canonical correlation analysis (TCCA) is employed to obtain a fused local feature that carries the high-order correlation hidden among different types of local features. The high-order correlation local feature improves on the conventional concatenation-based fusion approach. Experimental results on three challenging action recognition datasets validate the effectiveness of the proposed approach.
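As a simplified stand-in, classical two-view CCA fuses two aligned descriptor sets by projecting them into a maximally correlated subspace; the paper's TCCA generalizes this idea to higher-order correlations across more than two descriptor types. The shapes, the `eps` regularizer, and the final concatenation are illustrative choices:

```python
import numpy as np

def cca_fuse(X, Y, dim=64, eps=1e-6):
    """Fuse two aligned local descriptor sets with two-view CCA.
    X: (n, dx), Y: (n, dy) -- descriptors sampled at the same locations."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / len(Y) + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / len(X)
    # Whiten each view, then SVD the cross-covariance to find the
    # leading pair of maximally correlated directions.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, _, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A, B = Wx @ U[:, :dim], Wy @ Vt[:dim].T
    # Fused local feature: concatenation in the correlated subspace.
    return np.hstack([Xc @ A, Yc @ B])
```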


Communication Systems, Networks and Digital Signal Processing | 2016

Blurred target tracking based on sparse representation of online updated templates

Xiaofen Xing; Nanhai Zhang; Kailing Guo; Chunmei Qing; Jiali Deng; Huiping Qin

Motion blur is pervasive in object tracking due to camera and target movement, and most approaches are prone to drift when the target is blurred. Sparse representation over both normal and blur templates can improve the robustness of the appearance model, but how to update the template set remains a difficult problem. To solve it, we propose a two-step observation correction strategy for template updating: 1) treat motion blur as trivial information with a Laplace distribution, and correct normal and blurred targets in different ways; 2) use incremental principal component analysis (PCA) to update the normal template set. Experiments on challenging videos show the proposed algorithm outperforms several state-of-the-art methods.
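A sketch of the first step under assumed notation: the observation is projected onto the normal-template PCA subspace, and the residual, treated as Laplace-distributed trivial information, is shrunk by soft thresholding (the MAP estimate under a Laplace prior) before the corrected patch is used for template updating. The basis `U`, mean, and threshold `lam` are illustrative, not the paper's exact quantities:

```python
import numpy as np

def correct_observation(obs, U, mean, lam=0.1):
    """Correct a (possibly blurred) observation before template update.
    obs: (d,) vectorized patch; U: (d, k) PCA basis; mean: (d,)."""
    coef = U.T @ (obs - mean)
    recon = mean + U @ coef
    resid = obs - recon
    # Laplace prior on the residual -> l1 shrinkage (soft threshold):
    e = np.sign(resid) * np.maximum(np.abs(resid) - lam, 0.0)
    return obs - e  # corrected observation fed to incremental PCA
```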


International Conference on Image Processing | 2015

BIT: Bio-inspired tracker

Bolun Cai; Xiangmin Xu; Xiaofen Xing; Chunmei Qing

Visual tracking is challenging due to factors such as deformation, rotation, and illumination. Given the superior tracking performance of human vision, bio-inspired models are expected to improve computer visual tracking, but designing a bio-inspired tracking framework is difficult because higher-level neurons are incompletely understood and operate at enormous scale, which affects both the effectiveness and the real-time performance of a tracker. Modeled on the ventral stream of the visual cortex, a novel bio-inspired tracker (BIT) is proposed: it simulates shallow neurons (S1 and C1) to extract low-level bio-inspired features for target appearance, and imitates higher-level learning mechanisms (S2 and C2) to combine generative and discriminative models for position estimation. In addition, the Fast Fourier Transform (FFT) is adopted for real-time learning and detection in this framework. On a recent benchmark [1], extensive experimental results show that BIT performs favorably against state-of-the-art methods in accuracy and robustness.
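The FFT component is the easiest part to illustrate: S1-style responses are convolutions of the frame with an oriented Gabor filter bank, computed in the frequency domain for speed. This sketch assumes simple cosine-carrier Gabor kernels and omits the C1/S2/C2 stages; all sizes and parameters are illustrative:

```python
import numpy as np

def gabor_kernel(size, theta, sigma=4.0, lam=8.0):
    """Simple Gabor kernel: cosine carrier under a Gaussian envelope."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    Yr = -X * np.sin(theta) + Y * np.cos(theta)
    return np.exp(-(Xr**2 + Yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * Xr / lam)

def s1_responses(image, n_orient=4, ksize=11):
    """S1-style stage: filter the frame with a bank of oriented Gabors,
    using FFT-based convolution for real-time performance."""
    F_img = np.fft.fft2(image)
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    # fft2's `s` argument zero-pads each kernel to the image size.
    return np.stack([
        np.real(np.fft.ifft2(F_img * np.fft.fft2(gabor_kernel(ksize, t), s=image.shape)))
        for t in thetas
    ])
```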


IEEE Transactions on Image Processing | 2016

BIT: Biologically Inspired Tracker

Bolun Cai; Xiangmin Xu; Xiaofen Xing; Kui Jia; Jie Miao; Dacheng Tao


International Conference on Image Processing | 2017

Edge/structure preserving smoothing via relativity-of-Gaussian

Bolun Cai; Xiaofen Xing; Xiangmin Xu


International Conference on Multimedia and Expo | 2018

Learning Adaptive Selection Network for Real-Time Visual Tracking

Jiangfeng Xiong; Xiangmin Xu; Bolun Cai; Xiaofen Xing; Kailing Guo


International Conference on Image Processing | 2018

Perception Preserving Decolorization

Bolun Cai; Xiangmin Xu; Xiaofen Xing

Collaboration


Dive into Xiaofen Xing's collaborations.

Top Co-Authors

Xiangmin Xu, South China University of Technology
Bolun Cai, South China University of Technology
Chunmei Qing, South China University of Technology
Jie Miao, South China University of Technology
Fuhao Qiu, South China University of Technology
Guicong Xu, South China University of Technology
Kailing Guo, South China University of Technology
Yinrong Wu, South China University of Technology
Haoyu Huang, South China University of Technology