Publication


Featured research published by Chang Tang.


IEEE Transactions on Human-Machine Systems | 2016

Action Recognition From Depth Maps Using Deep Convolutional Neural Networks

Pichao Wang; Wanqing Li; Zhimin Gao; Jing Zhang; Chang Tang; Philip Ogunbona

This paper proposes a new method, i.e., weighted hierarchical depth motion maps (WHDMM) + three-channel deep convolutional neural networks (3ConvNets), for human action recognition from depth maps on small training datasets. Three strategies are developed to leverage the capability of ConvNets in mining discriminative features for recognition. First, different viewpoints are mimicked by rotating the 3-D points of the captured depth maps. This not only synthesizes more data, but also makes the trained ConvNets view-tolerant. Second, WHDMMs at several temporal scales are constructed to encode the spatiotemporal motion patterns of actions into 2-D spatial structures. The 2-D spatial structures are further enhanced for recognition by converting the WHDMMs into pseudocolor images. Finally, the three ConvNets are initialized with the models obtained from ImageNet and fine-tuned independently on the color-coded WHDMMs constructed in three orthogonal planes. The proposed algorithm was evaluated on the MSRAction3D, MSRAction3DExt, UTKinect-Action, and MSRDailyActivity3D datasets using cross-subject protocols. In addition, the method was evaluated on the large dataset constructed from the above datasets. The proposed method achieved 2-9% better results on most of the individual datasets. Furthermore, the proposed method maintained its performance on the large dataset, whereas the performance of existing methods decreased with the increased number of actions.
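As a rough illustration of the first two strategies, the sketch below (hypothetical names `whdmm` and `pseudocolor`; a single projection plane and a simplified exponential weighting, whereas the paper uses a hierarchy of temporal scales and three orthogonal planes) accumulates weighted frame differences into a motion map and encodes it as a pseudocolor image:

```python
import numpy as np

def whdmm(depth_seq, temporal_scale=1, decay=0.95):
    """Minimal weighted depth motion map for one projection plane.

    depth_seq: (T, H, W) array of depth frames. Frames temporal_scale
    apart are differenced; decay < 1 down-weights older motion so that
    recent frames contribute more to the accumulated evidence.
    """
    T = depth_seq.shape[0]
    dmm = np.zeros(depth_seq.shape[1:], dtype=np.float64)
    for t in range(temporal_scale, T):
        weight = decay ** (T - 1 - t)  # older frames contribute less
        dmm += weight * np.abs(depth_seq[t] - depth_seq[t - temporal_scale])
    return dmm

def pseudocolor(dmm):
    """Map the scalar motion map to three pseudo-RGB channels using
    phase-shifted sinusoids (a common rainbow-style color coding)."""
    x = (dmm - dmm.min()) / (np.ptp(dmm) + 1e-12)  # normalize to [0, 1]
    r = 0.5 + 0.5 * np.sin(2 * np.pi * x)
    g = 0.5 + 0.5 * np.sin(2 * np.pi * x + 2 * np.pi / 3)
    b = 0.5 + 0.5 * np.sin(2 * np.pi * x + 4 * np.pi / 3)
    return np.stack([r, g, b], axis=-1)
```

The pseudocolor step matters because ImageNet-pretrained ConvNets expect three-channel inputs with texture-like statistics.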


Pattern Recognition | 2016

RGB-D-based action recognition datasets

Jing Zhang; Wanqing Li; Philip Ogunbona; Pichao Wang; Chang Tang

Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it to provide a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used RGB-D video datasets for action recognition, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets is a useful resource for guiding the insightful selection of datasets in future research. In addition, the issues with current algorithm evaluation, vis-a-vis the limitations of the available datasets and evaluation protocols, are also highlighted, resulting in a number of recommendations for the collection of new datasets and the use of evaluation protocols.

Highlights:
A detailed review and in-depth analysis of 44 publicly available RGB-D-based action datasets.
Recommendations on the selection of datasets and evaluation protocols for use in future research.
Identification of some limitations of these datasets and evaluation protocols.
Recommendations on the future creation of datasets and use of evaluation protocols.


ACM Multimedia | 2015

ConvNets-Based Action Recognition from Depth Maps through Virtual Cameras and Pseudocoloring

Pichao Wang; Wanqing Li; Zhimin Gao; Chang Tang; Jing Zhang; Philip Ogunbona

In this paper, we propose to adopt ConvNets to recognize human actions from depth maps on relatively small datasets based on Depth Motion Maps (DMMs). In particular, three strategies are developed to effectively leverage the capability of ConvNets in mining discriminative features for recognition. Firstly, different viewpoints are mimicked by rotating virtual cameras around the subject represented by the 3D points of the captured depth maps. This not only synthesizes more data from the captured sequences, but also makes the trained ConvNets view-tolerant. Secondly, DMMs are constructed and further enhanced for recognition by encoding them into pseudo-RGB images, turning the spatial-temporal motion patterns into textures and edges. Lastly, by transfer learning from models originally trained on ImageNet for image classification, the three ConvNets are trained independently on the color-coded DMMs constructed in three orthogonal planes. The proposed algorithm was extensively evaluated on the MSRAction3D, MSRAction3DExt and UTKinect-Action datasets and achieved state-of-the-art results on these datasets.
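The virtual-camera idea can be illustrated by rotating the point cloud recovered from a depth map about the vertical axis (a minimal sketch with the hypothetical name `rotate_depth_points`; the paper also varies tilt and reprojects the rotated points back to a depth map):

```python
import numpy as np

def rotate_depth_points(points, angle_deg):
    """Rotate a 3D point cloud (from a depth map) about the vertical (y)
    axis, mimicking a virtual camera moved around the subject.

    points: (N, 3) array of 3D points.
    """
    a = np.deg2rad(angle_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    centre = points.mean(axis=0)  # rotate about the subject's centre
    return (points - centre) @ R.T + centre
```

Each rotated cloud yields a new synthetic depth map, so one captured sequence produces many training views.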


International Conference on Computer Vision | 2015

Beyond Covariance: Feature Representation with Nonlinear Kernel Matrices

Lei Wang; Jianjia Zhang; Luping Zhou; Chang Tang; Wanqing Li

The covariance matrix has recently received increasing attention in computer vision by leveraging the Riemannian geometry of symmetric positive-definite (SPD) matrices. Originally proposed as a region descriptor, it has since been used as a generic representation in various recognition tasks. However, the covariance matrix has shortcomings such as being prone to singularity, limited capability in modeling complicated feature relationships, and having a fixed form of representation. This paper argues that more appropriate SPD-matrix-based representations should be explored to achieve better recognition. It proposes an open framework that uses the kernel matrix over feature dimensions as a generic representation and discusses its properties and advantages. The proposed framework generalizes the covariance representation and opens it to the much richer family of kernel-induced SPD matrices. An experimental study shows that this representation consistently outperforms its covariance counterpart on various visual recognition tasks. In particular, it achieves significant improvement on skeleton-based human action recognition, demonstrating state-of-the-art performance over both the covariance and the existing non-covariance representations.
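The core construction can be sketched as follows (hypothetical name `kernel_matrix_descriptor`; an RBF kernel is one choice within the paper's open framework): instead of the d x d covariance of d feature dimensions, compute a d x d kernel matrix whose (i, j) entry measures the nonlinear similarity between feature dimensions i and j.

```python
import numpy as np

def kernel_matrix_descriptor(F, sigma=1.0):
    """RBF kernel matrix computed over feature DIMENSIONS.

    F: (d, n) matrix; row i holds the n observations of feature i.
    Returns the d x d SPD matrix K with
    K[i, j] = exp(-||f_i - f_j||^2 / (2 sigma^2)),
    a nonlinear generalisation of the d x d covariance descriptor.
    """
    sq = np.sum(F ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * F @ F.T  # pairwise squared distances
    d2 = np.maximum(d2, 0.0)                        # guard tiny negatives
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

Because the RBF kernel is strictly positive definite, K never degenerates even when n < d, sidestepping the singularity issue of the sample covariance.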


International Conference on Pattern Recognition | 2016

Large-scale Isolated Gesture Recognition using Convolutional Neural Networks

Pichao Wang; Wanqing Li; Song Liu; Zhimin Gao; Chang Tang; Philip Ogunbona

This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as Dynamic Depth Images (DDI), Dynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images (DDMNI). These dynamic images are constructed from a sequence of depth maps using bidirectional rank pooling to effectively capture the spatial-temporal information. Such image-based representations enable us to fine-tune existing ConvNets models trained on image data for the classification of depth sequences without introducing a large number of parameters to learn. Upon the proposed representations, a convolutional neural network (ConvNets) based method is developed for gesture recognition and evaluated on the Large-scale Isolated Gesture Recognition task of the ChaLearn Looking at People (LAP) challenge 2016. The method achieved 55.57% classification accuracy, ranking 2nd in the challenge and coming very close to the best performance even though only depth data were used.
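Rank pooling proper fits a ranking machine to the frame order; as a lightweight stand-in, the sketch below uses approximate rank pooling (a fixed-coefficient weighted sum, not the paper's exact procedure) and applies it in both temporal directions, which is the "bidirectional" idea behind the dynamic images:

```python
import numpy as np

def approximate_rank_pooling(frames):
    """Dynamic-image sketch via approximate rank pooling: a weighted
    temporal sum with fixed coefficients alpha_t = 2t - T - 1 (t = 1..T),
    which emphasises the temporal ordering of the frames."""
    T = frames.shape[0]
    alpha = 2.0 * np.arange(1, T + 1) - T - 1
    return np.tensordot(alpha, frames, axes=(0, 0))

def bidirectional_dynamic_images(depth_seq):
    """Forward and backward dynamic depth images (DDI-style)."""
    return (approximate_rank_pooling(depth_seq),
            approximate_rank_pooling(depth_seq[::-1]))
```

The two pooled images collapse a whole depth sequence into ConvNet-sized inputs while keeping forward and backward motion cues separate.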


IEEE Signal Processing Letters | 2017

Salient Object Detection via Weighted Low Rank Matrix Recovery

Chang Tang; Pichao Wang; Changqing Zhang; Wanqing Li

Image-based salient object detection is a useful and important technique that can improve the efficiency of several applications, such as object detection, image classification/retrieval, object co-segmentation, and content-based image editing. In this letter, we present a novel weighted low-rank matrix recovery (WLRR) model for salient object detection. To facilitate efficient separation of salient objects from the background, a high-level background prior map is estimated by exploiting color, location, and boundary connectivity properties, and this prior map is then encoded into a weighting matrix that indicates the likelihood that each image region belongs to the background. The final salient object detection task is formulated as the WLRR model with this weighting matrix. Both quantitative and qualitative experimental results on three challenging datasets show competitive performance compared with 24 state-of-the-art methods.
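The low-rank-plus-sparse intuition can be sketched with a toy alternating scheme (hypothetical names; a simplified stand-in for the letter's WLRR optimization, not its actual solver): the background lives in a low-rank part obtained by singular value thresholding, while salient regions end up in a sparse part whose shrinkage is modulated by the background weighting matrix.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def weighted_shrink(M, W, tau):
    """Entrywise soft-thresholding with per-entry weights W: regions likely
    to be background (large weight) are shrunk harder, pushing them into
    the low-rank part."""
    return np.sign(M) * np.maximum(np.abs(M) - tau * W, 0.0)

def weighted_lowrank_separation(D, W, lam=0.1, tau=1.0, iters=50):
    """Toy alternation for D ~ L (low-rank background) + S (sparse salient
    part), modulated by the background-likelihood weights W."""
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau)
        S = weighted_shrink(D - L, W, lam)
    return L, S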


IEEE Signal Processing Letters | 2016

A Spectral and Spatial Approach of Coarse-to-Fine Blurred Image Region Detection

Chang Tang; Jin Wu; Yonghong Hou; Pichao Wang; Wanqing Li

Blur exists in many digital images and can be mainly categorized into two classes: defocus blur, which is caused by optical imaging systems, and motion blur, which is caused by relative motion between the camera and scene objects. In this letter, we propose a simple yet effective automatic blurred image region detection method. Based on the observation that blur attenuates the high-frequency components of an image, we present a blur metric based on the log averaged spectrum residual to obtain a coarse blur map. Then, a novel iterative updating mechanism is proposed to refine the blur map from coarse to fine by exploiting the intrinsic relevance of similar neighboring image regions. The proposed iterative updating mechanism can partially resolve the problem of differentiating an in-focus smooth region from a blurred smooth region. In addition, our iterative updating mechanism can be integrated into other blurred image region detection algorithms to refine their final results. Both quantitative and qualitative experimental results demonstrate that our proposed method is more reliable and efficient than various state-of-the-art methods.
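The underlying observation, that blur attenuates high frequencies, can be demonstrated with a toy per-patch metric (hypothetical name `high_frequency_ratio`; a simplified proxy, not the letter's actual log averaged spectrum residual metric):

```python
import numpy as np

def high_frequency_ratio(patch):
    """Toy blur proxy: the fraction of spectral amplitude lying outside a
    central low-frequency band. Because blur attenuates high frequencies,
    blurred patches score lower than sharp ones."""
    amp = np.fft.fftshift(np.abs(np.fft.fft2(patch)))
    h, w = amp.shape
    r = min(h, w) // 4
    low = amp[h // 2 - r:h // 2 + r, w // 2 - r:w // 2 + r].sum()
    return float(1.0 - low / (amp.sum() + 1e-12))
```

Evaluating such a metric on overlapping patches yields exactly the kind of coarse blur map that the letter's iterative mechanism then refines.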


Knowledge-Based Systems | 2018

Robust unsupervised feature selection via dual self-representation and manifold regularization

Chang Tang; Xinwang Liu; Miaomiao Li; Pichao Wang; Jiajia Chen; Lizhe Wang; Wanqing Li

Unsupervised feature selection has become an important and challenging pre-processing step in machine learning and data mining, since large amounts of unlabelled high-dimensional data often need to be processed. In this paper, we propose an efficient method for robust unsupervised feature selection via dual self-representation and manifold regularization, referred to as DSRMR. On the one hand, a feature self-representation term is used to learn the feature representation coefficient matrix, which measures the importance of the different feature dimensions. On the other hand, a sample self-representation term is used to automatically learn the sample similarity graph, preserving the local geometrical structure of the data, which has been verified to be critical in unsupervised feature selection. By using the l2,1-norm to regularize the feature representation residual matrix and the representation coefficient matrix, our method is robust to outliers, and the row sparsity of the feature coefficient matrix induced by the l2,1-norm effectively selects representative features. During the optimization process, the feature coefficient matrix and the sample similarity graph constrain each other to obtain the optimal solution. Experimental results on ten real-world data sets demonstrate that the proposed method can effectively identify important features, outperforming many state-of-the-art unsupervised feature selection methods in terms of clustering accuracy (ACC) and normalized mutual information (NMI).
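The feature self-representation half of the idea can be sketched as follows (hypothetical name `dsrmr_feature_scores`; only the self-representation term with an l2,1 penalty, solved by iteratively reweighted least squares, whereas the full method jointly learns the sample similarity graph):

```python
import numpy as np

def dsrmr_feature_scores(X, lam=0.1, iters=20, eps=1e-8):
    """Sketch of feature self-representation with row sparsity:
        min_C ||X - X C||_F^2 + lam * ||C||_{2,1}
    solved by iteratively reweighted least squares; features are then
    ranked by the l2 norms of the rows of C.

    X: (n_samples, d_features) data matrix.
    """
    d = X.shape[1]
    G = X.T @ X
    D = np.eye(d)
    for _ in range(iters):
        C = np.linalg.solve(G + lam * D, G)        # ridge-type update
        row_norms = np.sqrt(np.sum(C ** 2, axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))       # l2,1 reweighting
    return np.sqrt(np.sum(C ** 2, axis=1))         # feature importance scores
```

Rows of C with large norms correspond to features that help reconstruct the others; features whose rows shrink toward zero can be discarded.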


Information Sciences | 2018

Online human action recognition based on incremental learning of weighted covariance descriptors

Chang Tang; Wanqing Li; Pichao Wang; Lizhe Wang

Different from traditional action recognition based on video segments, online action recognition aims to recognize actions from an unsegmented stream of data in a continuous manner. One approach to online recognition is based on the accumulation of evidence over time. This paper presents an effective framework of such an approach to online action recognition from a stream of noisy skeleton data, using a weighted covariance descriptor as a means to accumulate information. In particular, a fast incremental updating method for the weighted covariance descriptor is developed. The weighted covariance descriptor takes the following principles into consideration: past frames contribute less to the accumulated evidence, while recent and informative frames, such as key frames, contribute more. To determine the discriminativeness of each frame, an effective pseudo-neutral pose is proposed to recover the neutral pose from an arbitrary pose in a frame. Two recognition methods are developed using the weighted covariance descriptor. The first applies nearest-neighbor search over a set of trained actions using a Riemannian metric on covariance matrices. The second uses a Log-Euclidean kernel based SVM. Extensive experiments on the MSRC-12 Kinect Gesture dataset, the Online RGBD Action dataset, and our newly collected online action recognition dataset have demonstrated the efficacy of the proposed framework in terms of latency, miss rate, and error rate.
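An incrementally updated weighted covariance can be sketched with the standard frequency-weighted (West-style) running update (hypothetical class name; the paper's frame weighting and pseudo-neutral pose handling are more elaborate):

```python
import numpy as np

class WeightedCovariance:
    """Incrementally updated weighted covariance descriptor: each incoming
    frame feature vector x is folded in with a weight w, so informative or
    recent frames can contribute more to the accumulated descriptor."""

    def __init__(self, dim):
        self.w_sum = 0.0
        self.mean = np.zeros(dim)
        self.scatter = np.zeros((dim, dim))

    def update(self, x, w=1.0):
        x = np.asarray(x, dtype=np.float64)
        self.w_sum += w
        delta = x - self.mean
        self.mean += (w / self.w_sum) * delta               # weighted running mean
        self.scatter += w * np.outer(delta, x - self.mean)  # West's exact update

    def covariance(self):
        return self.scatter / self.w_sum
```

Each update is O(d^2) regardless of how many frames have been seen, which is what makes per-frame online recognition with low latency feasible.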


Expert Systems with Applications | 2018

Robust Graph Regularized Unsupervised Feature Selection

Chang Tang; Xinzhong Zhu; Jiajia Chen; Pichao Wang; Xinwang Liu; Jie Tian

Recent research indicates the critical importance of preserving the local geometric structure of data in unsupervised feature selection (UFS), and the well-studied graph Laplacian is usually deployed to capture this property. We observe that, by using a squared l2-norm, the conventional graph Laplacian is sensitive to noisy data, leading to unsatisfactory data processing performance. To address this issue, we propose a unified UFS framework via feature self-representation and robust graph regularization, with the aim of reducing sensitivity to outliers in two respects: i) an l2,1-norm is used to characterize the feature representation residual matrix; and ii) an l1-norm based graph Laplacian regularization term is adopted to preserve the local geometric structure of the data. In this way, the proposed framework reduces the effect of noisy data on feature selection. Furthermore, the proposed l1-norm based graph Laplacian is readily extensible, and can be easily integrated into other UFS methods and machine learning tasks in which the local geometrical structure of data is to be preserved. As demonstrated on ten challenging benchmark data sets, our algorithm significantly and consistently outperforms state-of-the-art UFS methods in the literature, suggesting the effectiveness of the proposed UFS framework.
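The graph-regularization ingredient can be sketched as follows (hypothetical names; a kNN similarity graph plus an l1-style smoothness term, illustrating why the l1 penalty is gentler on outlying differences than the conventional squared term):

```python
import numpy as np

def graph_laplacian(X, k=2, sigma=1.0):
    """Build a kNN similarity graph over samples and its Laplacian L = D - W."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]          # nearest neighbours, skip self
        W[i, nn] = np.exp(-d2[i, nn] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                        # symmetrise
    return np.diag(W.sum(axis=1)) - W

def l1_graph_regulariser(F, W):
    """l1-based smoothness term sum_{i<j} W[i, j] * ||f_i - f_j||_1.
    Unlike the squared l2 term Tr(F^T L F), a large (possibly noisy)
    difference is penalised linearly rather than quadratically."""
    diff = np.abs(F[:, None, :] - F[None, :, :]).sum(axis=2)
    return float(0.5 * (W * diff).sum())
```

With W from `graph_laplacian`, swapping the squared term for the l1 term keeps neighbouring samples similar without letting a single corrupted sample dominate the objective.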

Collaboration

Top co-authors of Chang Tang:

Pichao Wang, University of Wollongong
Wanqing Li, University of Wollongong
Jing Zhang, University of Wollongong
Zhimin Gao, University of Wollongong
Jiajia Chen, Xuzhou Medical College
Xinwang Liu, National University of Defense Technology
Song Liu, University of Wollongong
Lizhe Wang, China University of Geosciences