
Publication


Featured research published by Tianxiang Bai.


Pattern Recognition | 2012

Robust visual tracking with structured sparse representation appearance model

Tianxiang Bai; Youfu Li

In this paper, we present a structured sparse representation appearance model for tracking an object in a video system. The mechanism behind our method is to model the appearance of an object as a sparse linear combination of a structured union of subspaces in a basis library, which consists of a learned Eigen template set and a partitioned occlusion template set. We adopt this structured sparse representation framework because it matches the practical visual tracking problem well, taking the contiguous spatial distribution of occlusion into account. To achieve a sparse solution and reduce the computational cost, Block Orthogonal Matching Pursuit (BOMP) is adopted to solve the structured sparse representation problem. Furthermore, to update the Eigen templates over time, an incremental Principal Component Analysis (PCA) based learning scheme is applied to adapt to the varying appearance of the target online. We then build a probabilistic observation model based on the approximation error between the recovered image and the observed sample. Finally, this observation model is integrated with a stochastic affine motion model to form a particle filter framework for visual tracking. Experiments on publicly available benchmark video sequences demonstrate the advantages of the proposed algorithm over other state-of-the-art approaches.
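The block-greedy selection at the heart of BOMP can be sketched in a few lines. This is a simplified illustration assuming a dictionary whose atoms are orthonormal (so the least-squares step reduces to inner products); the block layout and data are hypothetical, not the paper's actual template library.

```python
# Minimal Block Orthogonal Matching Pursuit (BOMP) sketch.
# Assumes orthonormal dictionary atoms, so the least-squares
# refit reduces to plain inner products.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bomp(y, blocks, n_blocks):
    """Greedily pick `n_blocks` blocks of atoms that best explain y.

    blocks: list of blocks; each block is a list of atom vectors.
    Returns ({(block_idx, atom_idx): coefficient}, residual).
    """
    residual = list(y)
    coeffs = {}
    chosen = set()
    for _ in range(n_blocks):
        # Score each unused block by its correlation energy with the residual.
        best, best_score = None, -1.0
        for b, atoms in enumerate(blocks):
            if b in chosen:
                continue
            score = sum(dot(residual, a) ** 2 for a in atoms)
            if score > best_score:
                best, best_score = b, score
        chosen.add(best)
        # With orthonormal atoms, coefficients are plain inner products.
        for i, a in enumerate(blocks[best]):
            c = dot(residual, a)
            coeffs[(best, i)] = c
            residual = [r - c * ai for r, ai in zip(residual, a)]
    return coeffs, residual
```

Selecting whole blocks at once is what encodes the prior that occlusion is spatially contiguous: either a region's occlusion templates enter the representation together, or not at all.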


IEEE Transactions on Industrial Informatics | 2014

GM-PHD-Based Multi-Target Visual Tracking Using Entropy Distribution and Game Theory

Xiaolong Zhou; Youfu Li; Bingwei He; Tianxiang Bai

Tracking multiple moving targets in a video is challenging because of several factors, including noisy video data, a varying number of targets, and mutual occlusion. The Gaussian mixture probability hypothesis density (GM-PHD) filter, which recursively propagates the intensity associated with the multi-target posterior density, can overcome the difficulty caused by data association. This paper develops a multi-target visual tracking system that combines the GM-PHD filter with object detection. First, a new birth intensity estimation algorithm based on entropy distribution and coverage rate is proposed to automatically and accurately track newborn targets in a noisy video. Then, a robust game-theoretical mutual occlusion handling algorithm with an improved spatial color appearance model is proposed to effectively track targets in mutual occlusion. The spatial color appearance model is improved by incorporating interferences of other targets within the occlusion region. Finally, experiments conducted on publicly available videos demonstrate the good performance of the proposed visual tracking system.
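The core of the GM-PHD measurement update can be illustrated with a one-dimensional sketch: each Gaussian component spawns a missed-detection copy plus one updated copy per measurement, with weights normalized against a clutter intensity. The detection probability, clutter level, and measurement noise below are illustrative values, not those of the paper.

```python
import math

# One 1D GM-PHD measurement-update step (sketch). State = position,
# identity measurement model; parameter values are illustrative.

def gauss(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def gmphd_update(components, measurements, p_detect=0.9, clutter=0.01, meas_var=1.0):
    """components: list of (weight, mean, var). Returns the updated mixture."""
    # Missed-detection copies keep their state, scaled by (1 - p_detect).
    updated = [((1 - p_detect) * w, m, v) for w, m, v in components]
    for z in measurements:
        # Likelihood of z under each predicted component.
        likes = [p_detect * w * gauss(z, m, v + meas_var) for w, m, v in components]
        norm = clutter + sum(likes)
        for (w, m, v), lk in zip(components, likes):
            gain = v / (v + meas_var)  # scalar Kalman gain
            updated.append((lk / norm, m + gain * (z - m), (1 - gain) * v))
    return updated
```

The sum of the updated weights estimates the expected number of targets, which is how the PHD filter handles a varying target count without explicit data association.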


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Learning Local Appearances With Sparse Representation for Robust and Fast Visual Tracking

Tianxiang Bai; Youfu Li; Xiaolong Zhou

In this paper, we present a novel appearance model using sparse representation and online dictionary learning techniques for visual tracking. In our approach, the visual appearance is represented by sparse representation, and the online dictionary learning strategy is used to adapt the appearance variations during tracking. We unify the sparse representation and online dictionary learning by defining a sparsity consistency constraint that facilitates the generative and discriminative capabilities of the appearance model. An elastic-net constraint is enforced during the dictionary learning stage to capture the characteristics of the local appearances that are insensitive to partial occlusions. Hence, the target appearance is effectively recovered from the corruptions using the sparse coefficients with respect to the learned sparse bases containing local appearances. In the proposed method, the dictionary is undercomplete and can thus be efficiently implemented for tracking. Moreover, we employ a median absolute deviation based robust similarity metric to eliminate the outliers and evaluate the likelihood between the observations and the model. Finally, we integrate the proposed appearance model with the particle filter framework to form a robust visual tracking algorithm. Experiments on benchmark video sequences show that the proposed appearance model outperforms the other state-of-the-art approaches in tracking performance.
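The median-absolute-deviation idea can be sketched as follows: residuals between the observation and its reconstruction are screened against k times the MAD, and only inliers contribute to the similarity score. The threshold k = 3 and the pixel values in the example are illustrative choices, not the paper's settings.

```python
# Median-absolute-deviation (MAD) based outlier rejection for comparing
# an observation patch with its sparse reconstruction (sketch).

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def robust_similarity(observed, reconstructed, k=3.0):
    """Mean squared error over inliers only; residuals deviating from the
    median by more than k * MAD are treated as outliers (e.g. occluded pixels)."""
    resid = [o - r for o, r in zip(observed, reconstructed)]
    med = median(resid)
    mad = median([abs(e - med) for e in resid]) + 1e-12  # avoid zero MAD
    inliers = [e for e in resid if abs(e - med) <= k * mad]
    return sum(e * e for e in inliers) / len(inliers)
```

Because the median and MAD are insensitive to a minority of extreme values, a partially occluded patch is not penalized for the occluded pixels when its likelihood is evaluated.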


IEEE Transactions on Industrial Informatics | 2014

Robust Visual Tracking Using Flexible Structured Sparse Representation

Tianxiang Bai; Youfu Li

In this work, we propose a robust and flexible appearance model based on the structured sparse representation framework. In our method, we model the complex nonlinear appearance manifold and the occlusion as a sparse linear combination of a structured union of subspaces in a basis library, which consists of multiple incrementally learned target subspaces and a partitioned occlusion template set. In order to enhance the discriminative power of the model, a number of clustered background subspaces are also added into the basis library and updated during tracking. With the Block Orthogonal Matching Pursuit (BOMP) algorithm, we show that the new flexible structured sparse representation based appearance model improves tracking performance compared with the prototype structured sparse representation model and other state-of-the-art tracking algorithms.


International Conference on Robotics and Automation | 2011

Structured sparse representation appearance model for robust visual tracking

Tianxiang Bai; Youfu Li; Yazhe Tang

We propose a robust visual tracker based on a structured sparse representation appearance model. The appearance of the tracking target is modeled as a sparse linear combination of Eigen templates plus a sparse error due to occlusions. We adopt the structured sparse representation because it matches the practical visual tracking problem well, taking the contiguous spatial distribution of occlusion into account. The sparsity is achieved by Block Orthogonal Matching Pursuit (BOMP), which solves the structured sparse representation problem more efficiently. The model update scheme, based on incremental Singular Value Decomposition (SVD), ensures that the Eigen templates capture the variations of target appearance online. The approximation error is then used to build a probabilistic observation model that integrates with a stochastic affine motion model to form a particle filter framework for visual tracking. Thanks to the block structure of the sparse representation and BOMP, our proposed tracker demonstrates superior efficiency and robustness in comparative experiments on publicly available benchmark video sequences.
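The error-based observation model described above can be sketched in a few lines: each particle is weighted by the exponential of its negative reconstruction error, so well-reconstructed candidates dominate the posterior. The decay rate lam and the toy error values are illustrative assumptions, not the paper's settings.

```python
import math

# Probabilistic observation model for a particle filter (sketch):
# weight ∝ exp(-lam * reconstruction_error).

def particle_weights(recon_errors, lam=2.0):
    """Turn per-particle reconstruction errors into normalized weights;
    low-error particles receive high likelihood."""
    raw = [math.exp(-lam * e) for e in recon_errors]
    total = sum(raw)
    return [w / total for w in raw]
```

In the full tracker these weights drive the resampling step of the particle filter, concentrating particles on affine states whose warped patches are well explained by the template library.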


International Conference on Information and Automation | 2011

Human tracking in thermal catadioptric omnidirectional vision

Yazhe Tang; Youfu Li; Tianxiang Bai; Xiaolong Zhou; Zhongwei Li

We propose a novel tracking system for human tracking in thermal catadioptric omnidirectional (TCO) vision, which enables surveillance under all-weather and wide field-of-view conditions. In contrast, previous human tracking systems mainly focus on conventional imaging systems. In this paper, the proposed tracking method uses the classification posterior probability of a Support Vector Machine (SVM) as the observation likelihood of a particle filter for efficient tracking, whereas previous works employ only the final output label of the SVM for classification. Since no TCO vision dataset is publicly available, we establish a dataset including TCO videos and extracted human samples to train the classifier and test the proposed tracking method. Moreover, we adjust the tracking window distribution of the particle filter to fit a characteristic of catadioptric omnidirectional vision: the size of the target in the omni-image depends on the distance between the target and the center of the omnidirectional image. Finally, the experimental results show that our proposed tracking method performs stably and well in the TCO vision tracking system.
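The radial window adjustment can be sketched as a simple scaling law: a particle's tracking window is scaled with its distance from the omni-image center. The linear law and the pixel sizes below are hypothetical; the actual mapping depends on the mirror geometry of the catadioptric system.

```python
import math

# Scale a particle's tracking window with radial distance from the
# omni-image centre (sketch; linear law and sizes are assumptions).

def window_size(px, py, cx, cy, r_max, min_size=8, max_size=48):
    """Return a window side length that grows linearly with the particle's
    distance from the image centre (cx, cy), clamped at radius r_max."""
    r = math.hypot(px - cx, py - cy)
    t = min(r / r_max, 1.0)
    return round(min_size + t * (max_size - min_size))
```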


International Conference on Intelligent Robots and Systems | 2012

Robust and fast visual tracking using constrained sparse coding and dictionary learning

Tianxiang Bai; Youfu Li; Xiaolong Zhou

We present a novel appearance model using sparse coding with online sparse dictionary learning techniques for robust visual tracking. In the proposed appearance model, the target appearance is modeled via online sparse dictionary learning technique with an “elastic-net constraint”. This scheme allows us to capture the characteristics of the target local appearance, and promotes the robustness against partial occlusions during tracking. Additionally, we unify the sparse coding and online dictionary learning by defining a “sparsity consistency constraint” that facilitates the generative and discriminative capabilities of the appearance model. Moreover, we propose a robust similarity metric that can eliminate the outliers from the corrupted observations. We then integrate the proposed appearance model with the particle filter framework to form a robust visual tracking algorithm. Experiments on publicly available benchmark video sequences demonstrate that the proposed appearance model improves the tracking performance compared with other state-of-the-art approaches.
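For an orthonormal dictionary the elastic-net coding step has a closed form, which makes the core idea easy to sketch: correlations are soft-thresholded by the l1 weight (producing sparsity) and then shrunk by the l2 weight (stabilizing the coefficients). The dictionary, signal, and regularization weights below are hypothetical, not the paper's learned dictionary.

```python
# Elastic-net sparse coding against an orthonormal dictionary (sketch).
# Closed form: soft-threshold the correlation by l1, shrink by 1/(1 + l2).

def soft(x, t):
    """Soft-thresholding operator: shrink x toward zero by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def elastic_net_code(y, atoms, l1=0.1, l2=0.1):
    """Return elastic-net coefficients of signal y over orthonormal atoms."""
    corr = [sum(a_i * y_i for a_i, y_i in zip(a, y)) for a in atoms]
    return [soft(c, l1) / (1.0 + l2) for c in corr]
```

For a general learned dictionary the atoms are not orthonormal and the coefficients are found iteratively (e.g. by coordinate descent), but the same threshold-and-shrink behaviour governs each update.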


International Conference on Intelligent Robots and Systems | 2013

Multi-target visual tracking with game theory-based mutual occlusion handling

Xiaolong Zhou; Youfu Li; Bingwei He; Tianxiang Bai

Tracking multiple moving targets in video is still a challenge because of the mutual occlusion problem. This paper presents a Gaussian mixture probability hypothesis density-based visual tracking system with game theory-based mutual occlusion handling. First, a two-step occlusion reasoning algorithm is proposed to determine the occlusion region. Then, a spatial constraint-based appearance model incorporating the interferences of other interacting targets is built. Finally, an n-person, non-zero-sum, non-cooperative game is constructed to handle the mutual occlusion problem. The individual measurements within the occlusion region are regarded as players in the constructed game, competing for maximum utility through their choice of strategies. The Nash equilibrium of the game is the optimal estimate of the locations of the players within the occlusion region. Experiments conducted on publicly available videos demonstrate the good performance of the proposed occlusion handling algorithm.
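The equilibrium search can be illustrated with best-response iteration on a tiny two-player game: each player switches to its best reply in turn until neither wants to deviate, which is precisely a pure-strategy Nash equilibrium. The payoff tables in the example are made up; in the paper the players are measurements inside the occlusion region and the utilities come from the appearance model.

```python
# Best-response iteration toward a pure-strategy Nash equilibrium (sketch).

def best_response(payoff, opponent_action):
    """Index of the action maximizing payoff[action][opponent_action]."""
    column = [payoff[a][opponent_action] for a in range(len(payoff))]
    return max(range(len(column)), key=column.__getitem__)

def find_pure_nash(payoff1, payoff2, start=(0, 0), max_iters=20):
    """payoff1 is indexed [a1][a2], payoff2 is indexed [a2][a1].
    Returns a pure Nash equilibrium if the iteration reaches a fixed point."""
    a1, a2 = start
    for _ in range(max_iters):
        n1 = best_response(payoff1, a2)
        n2 = best_response(payoff2, n1)
        if (n1, n2) == (a1, a2):
            return a1, a2  # neither player wants to deviate
        a1, a2 = n1, n2
    return None  # no pure equilibrium found by this iteration
```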


International Conference on Robotics and Biomimetics | 2012

Discriminative sparse representation for online visual object tracking

Tianxiang Bai; Youfu Li; Xiaolong Zhou

In this paper, we present an online visual object tracking algorithm based on the discriminative sparse representation framework. Unlike generative sparse representation based tracking algorithms, the proposed method casts the tracking problem as a binary classification task. To achieve discriminative classification, a linear classifier is embedded into the sparse representation model by incorporating the classification error into the objective function. The dictionary and the classifier are jointly trained using the online dictionary learning algorithm, thus allowing the model to adapt to dynamic variations of the target appearance and background environment. The target locations are updated based on the classification score and a greedy-search motion model. We evaluate the proposed method on five benchmark datasets with detailed comparison to three state-of-the-art tracking algorithms. Both the qualitative and quantitative experimental results show that the discriminative sparse representation improves tracking performance.
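The greedy-search motion model can be sketched as hill climbing on the classifier score: from the previous target location, move to the best-scoring neighbouring position until no neighbour improves the score. The quadratic score function in the example is a stand-in for the learned classifier's response map.

```python
# Greedy local-search motion model (sketch): hill-climb the classifier
# score from the previous target location.

def greedy_search(start, score, step=1, max_iters=100):
    """Return the local maximum of `score` reachable from `start` by
    repeatedly moving to the best neighbouring position."""
    x, y = start
    for _ in range(max_iters):
        neighbours = [(x + dx, y + dy)
                      for dx in (-step, 0, step) for dy in (-step, 0, step)]
        best = max(neighbours, key=score)
        if best == (x, y):
            return best  # no neighbour improves the score
        x, y = best
    return x, y
```

This is cheaper than a full particle filter but only finds a local maximum, which is why it relies on the classifier score being well-behaved near the previous location.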


Advanced Robotics | 2014

Monocular human motion tracking with discriminative sparse representation

Tianxiang Bai; Youfu Li; Xiaolong Zhou

In this work, we address the problem of monocular human motion tracking based on the discriminative sparse representation. The proposed method jointly trains the dictionary and a discriminative linear classifier to separate the human from the background. We show that, using online dictionary learning, the tracking algorithm can adapt to variations of human appearance and background environment. We compared the proposed method with four state-of-the-art tracking algorithms on eight benchmark video clips (Faceocc, Sylv, David, Singer, Girl, Ballet, OneLeaveShopReenter2cor, and ThreePastShop2cor). Qualitative and quantitative experimental validation results are discussed at length. The proposed algorithm achieves superior tracking results and runs at four frames per second in Matlab on a standard desktop machine.

Collaboration


Tianxiang Bai's top co-authors:

Youfu Li, City University of Hong Kong
Xiaolong Zhou, Zhejiang University of Technology
Yazhe Tang, City University of Hong Kong
Zhanpeng Shao, Zhejiang University of Technology
Zhongwei Li, Huazhong University of Science and Technology