Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wanqing Li is active.

Publication


Featured research published by Wanqing Li.


Computer Vision and Pattern Recognition | 2010

Action recognition based on a bag of 3D points

Wanqing Li; Zhengyou Zhang; Zicheng Liu

This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple but effective projection-based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90% recognition accuracy was achieved by sampling only about 1% of the 3D points from the depth maps. Compared to 2D silhouette-based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag-of-points posture model to deal with occlusions through simulation.
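The abstract does not spell out the projection-based sampling rule, but the idea of keeping only a small fraction of informative 3D points can be sketched. The following minimal NumPy sketch approximates salient points by depth discontinuities as a cheap stand-in for the paper's scheme; the function name, gradient threshold, and random subsampling are illustrative assumptions.

```python
import numpy as np

def sample_bag_of_points(depth_map, fraction=0.01):
    """Sample a small 'bag' of 3D points from a single depth map (sketch)."""
    fg = depth_map > 0                      # foreground = valid depth readings
    # Approximate salient points by strong depth gradients (silhouette-like).
    gy, gx = np.gradient(depth_map.astype(np.float32))
    edges = fg & (np.hypot(gx, gy) > 10.0)  # threshold is an assumption
    ys, xs = np.nonzero(edges)
    # Keep roughly `fraction` of the foreground point count.
    n_keep = max(1, int(fraction * fg.sum()))
    idx = np.random.choice(len(ys), size=min(n_keep, len(ys)), replace=False)
    # Return (x, y, depth) triples; camera intrinsics are omitted here.
    return np.stack([xs[idx], ys[idx], depth_map[ys[idx], xs[idx]]], axis=1)
```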


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Expandable Data-Driven Graphical Modeling of Human Actions Based on Salient Postures

Wanqing Li; Zhengyou Zhang; Zicheng Liu

This paper presents a graphical model for learning and recognizing human actions. Specifically, we propose to encode actions in a weighted directed graph, referred to as an action graph, where nodes of the graph represent salient postures that are used to characterize the actions and are shared by all actions. The weight between two nodes measures the transitional probability between the two postures represented by the nodes. An action is encoded as one or multiple paths in the action graph. The salient postures are modeled using Gaussian mixture models (GMMs). Both the salient postures and the action graph are automatically learned from training samples through unsupervised clustering and the expectation-maximization (EM) algorithm. The proposed action graph not only performs effective and robust recognition of actions but can also be expanded efficiently with new actions. An algorithm is also proposed for adding a new action to a trained action graph without compromising the existing graph. Extensive experiments on widely used and challenging data sets have verified the performance of the proposed method, its tolerance to noise and viewpoint changes, its robustness across different subjects and data sets, and the effectiveness of the algorithm for learning new actions.
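Scoring a sequence against an action graph is essentially a best-path search over posture nodes. Below is a minimal Viterbi-style sketch, assuming posture models with a scikit-learn-like `score_samples` interface; `gmms`, `trans`, and `prior` are hypothetical names, not the paper's API.

```python
import numpy as np

def score_action(frames, gmms, trans, prior):
    """Best-path log-likelihood of a (T, d) frame-feature sequence.

    gmms:  list of K posture models (e.g. sklearn GaussianMixture)
    trans: (K, K) posture transition probabilities
    prior: (K,) initial posture distribution
    """
    # Per-frame posture log-likelihoods, shape (T, K).
    ll = np.stack([g.score_samples(frames) for g in gmms], axis=1)
    log_trans = np.log(trans + 1e-12)
    v = np.log(prior + 1e-12) + ll[0]
    for t in range(1, len(frames)):
        # Keep the best predecessor for each posture node.
        v = ll[t] + np.max(v[:, None] + log_trans, axis=0)
    return v.max()  # compare this score across candidate actions
```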


IEEE Transactions on Human-Machine Systems | 2016

Action Recognition From Depth Maps Using Deep Convolutional Neural Networks

Pichao Wang; Wanqing Li; Zhimin Gao; Jing Zhang; Chang Tang; Philip Ogunbona

This paper proposes a new method, weighted hierarchical depth motion maps (WHDMM) combined with three-channel deep convolutional neural networks (3ConvNets), for human action recognition from depth maps on small training datasets. Three strategies are developed to leverage the capability of ConvNets in mining discriminative features for recognition. First, different viewpoints are mimicked by rotating the 3D points of the captured depth maps. This not only synthesizes more data, but also makes the trained ConvNets view-tolerant. Second, WHDMMs at several temporal scales are constructed to encode the spatiotemporal motion patterns of actions into 2D spatial structures, which are further enhanced for recognition by converting the WHDMMs into pseudocolor images. Finally, the three ConvNets are initialized with models trained on ImageNet and fine-tuned independently on the color-coded WHDMMs constructed in three orthogonal planes. The proposed algorithm was evaluated on the MSRAction3D, MSRAction3DExt, UTKinect-Action, and MSRDailyActivity3D datasets using cross-subject protocols, as well as on a large dataset constructed from the above datasets. The proposed method achieved 2-9% better results on most of the individual datasets. Furthermore, it maintained its performance on the large dataset, whereas the performance of existing methods decreased as the number of actions increased.
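The first strategy, mimicking viewpoints by rotating the 3D points, amounts to applying rotation matrices to the depth-derived point cloud before re-projection. A minimal sketch; the angle values shown are illustrative, not the paper's exact rotation ranges.

```python
import numpy as np

def rotate_point_cloud(points, yaw_deg, pitch_deg=0.0):
    """Rotate an (N, 3) depth-derived point cloud to mimic a new viewpoint."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],      # rotation about y-axis
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rx = np.array([[1, 0, 0],                          # rotation about x-axis
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    return points @ (ry @ rx).T

# Example augmentation: synthesize extra training views at several yaws.
# views = [rotate_point_cloud(pts, yaw) for yaw in (-30, -15, 15, 30)]
```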


Pattern Recognition | 2016

RGB-D-based action recognition datasets

Jing Zhang; Wanqing Li; Philip Ogunbona; Pichao Wang; Chang Tang

Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it to provide a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action-recognition-related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets are a useful resource for guiding the insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-a-vis the limitations of the available datasets and evaluation protocols are highlighted, resulting in a number of recommendations for the collection of new datasets and the use of evaluation protocols.

Highlights:
- A detailed review and in-depth analysis of 44 publicly available RGB-D-based action datasets.
- Recommendations on the selection of datasets and evaluation protocols for use in future research.
- Identification of some limitations of these datasets and evaluation protocols.
- Recommendations on the future creation of datasets and use of evaluation protocols.
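One protocol that recurs across these datasets is the cross-subject split, in which clips from held-out performers never appear in training. A minimal sketch, assuming samples tagged with performer IDs; field names are assumptions for illustration.

```python
def cross_subject_split(samples, test_subjects):
    """Split (subject_id, clip) pairs so test performers are unseen in training."""
    train = [clip for subj, clip in samples if subj not in test_subjects]
    test = [clip for subj, clip in samples if subj in test_subjects]
    return train, test
```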


Pattern Recognition Letters | 2008

An efficient iterative algorithm for image thresholding

Liju Dong; Ge Yu; Philip Ogunbona; Wanqing Li

Thresholding is a commonly used technique for image segmentation. This paper presents an efficient iterative algorithm for finding optimal thresholds that minimize a weighted sum-of-squared-error objective function. We have proven that the proposed algorithm is mathematically equivalent to the well-known Otsu's method but requires much less computation. The computational complexity of the proposed algorithm is linear with respect to the number of thresholds to be calculated, as opposed to the exponential complexity of Otsu's algorithm. Experimental results have verified the theoretical analysis and the efficiency of the proposed algorithm.
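For the single-threshold case, this family of objectives has a well-known fixed-point form: repeatedly set the threshold to the midpoint of the two class means. The sketch below shows only that classical iteration; the paper's contribution, a multi-threshold generalization with linear cost, is not reproduced here.

```python
import numpy as np

def iterative_threshold(image, tol=0.5):
    """Single-threshold iterative selection minimizing within-class SSE (sketch)."""
    pixels = image.astype(np.float64).ravel()
    t = pixels.mean()                               # initial guess
    while True:
        lo, hi = pixels[pixels <= t], pixels[pixels > t]
        if lo.size == 0 or hi.size == 0:            # degenerate split; stop
            return t
        new_t = 0.5 * (lo.mean() + hi.mean())       # midpoint of class means
        if abs(new_t - t) < tol:
            return new_t
        t = new_t
```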


International Conference on Pattern Recognition | 2006

Cryptographic Key Generation from Biometric Data Using Lattice Mapping

Gang Zheng; Wanqing Li; Ce Zhan

Crypto-biometric systems have recently emerged as an effective approach to key management, addressing the security weaknesses of conventional key release systems based on pass-codes, tokens, or pattern-recognition-based biometrics. This paper presents a lattice-mapping-based fuzzy commitment method for cryptographic key generation from biometric data. The proposed method not only outputs high-entropy keys, but also conceals the original biometric data such that it is impossible to recover the biometric data even when the information stored in the system is exposed to an attacker. Simulation results have demonstrated that its authentication accuracy is comparable to that of the well-known k-nearest-neighbour classifier.
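A fuzzy-commitment scheme of this flavour can be illustrated with the simplest possible lattice, a scaled integer lattice: snap the feature vector to the nearest lattice point, derive the key from that point, and publish only the offset. Everything below (step size, SHA-256 key derivation, function names) is an assumption for illustration, not the paper's construction.

```python
import hashlib
import numpy as np

def enroll(features, step=8.0):
    """Commit to a key derived from a biometric feature vector (sketch)."""
    f = np.asarray(features, dtype=np.float64)
    lattice_point = np.round(f / step)              # nearest point on step*Z^n
    offset = f - lattice_point * step               # public helper data
    key = hashlib.sha256(lattice_point.astype(np.int64).tobytes()).hexdigest()
    return key, offset

def reproduce(noisy_features, offset, step=8.0):
    """Recover the key from a fresh, slightly noisy biometric sample."""
    f = np.asarray(noisy_features, dtype=np.float64)
    # Noise below step/2 per dimension maps back to the same lattice point.
    lattice_point = np.round((f - offset) / step)
    return hashlib.sha256(lattice_point.astype(np.int64).tobytes()).hexdigest()
```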


ACM Multimedia | 2016

Action Recognition Based on Joint Trajectory Maps Using Convolutional Neural Networks

Pichao Wang; Zhaoyang Li; Yonghong Hou; Wanqing Li

Recently, Convolutional Neural Networks (ConvNets) have shown promising performance in many computer vision tasks, especially image-based recognition. How to effectively use ConvNets for video-based recognition is still an open problem. In this paper, we propose a compact, effective yet simple method to encode the spatio-temporal information carried in 3D skeleton sequences into multiple 2D images, referred to as Joint Trajectory Maps (JTM), and adopt ConvNets to exploit the discriminative features for real-time human action recognition. The proposed method has been evaluated on three public benchmarks, the MSRC-12 Kinect gesture dataset (MSRC-12), the G3D dataset, and the UTD multimodal human action dataset (UTD-MHAD), and achieved state-of-the-art results.
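The core of a JTM-style encoding is rasterizing joint positions over time into an image whose colour encodes temporal order. A minimal sketch, assuming skeletons as a (frames, joints, 3) array; the paper additionally encodes motion direction and magnitude, which is omitted here.

```python
import numpy as np

def joint_trajectory_map(skeleton, size=256):
    """Rasterize a (T, J, 3) skeleton sequence into a 2D trajectory image."""
    img = np.zeros((size, size, 3), dtype=np.float32)
    xy = skeleton[:, :, :2]                         # project onto the x-y plane
    # Normalize joint coordinates into the image plane.
    mn, mx = xy.min(axis=(0, 1)), xy.max(axis=(0, 1))
    uv = ((xy - mn) / (mx - mn + 1e-6) * (size - 1)).astype(int)
    T = skeleton.shape[0]
    for t in range(T):
        c = t / max(T - 1, 1)                       # time encoded as colour
        img[uv[t, :, 1], uv[t, :, 0]] = (c, 1.0 - c, 0.5)
    return img
```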


Pattern Recognition | 2013

A novel shape-based non-redundant local binary pattern descriptor for object detection

Duc Thanh Nguyen; Philip Ogunbona; Wanqing Li

Motivated by the discriminative ability of shape information and local patterns in object recognition, this paper proposes a window-based object descriptor that integrates both cues. In particular, contour templates representing object shape are used to derive a set of so-called key points at which local appearance features are extracted. These key points are located using an improved template matching method that utilises both spatial and orientation information in a simple and effective way. At each of the extracted key points, a new local appearance feature, namely non-redundant local binary pattern (NR-LBP), is computed. An object descriptor is formed by concatenating the NR-LBP features from all key points to encode the shape as well as the appearance of the object. The proposed descriptor was extensively tested in the task of detecting humans from static images on the commonly used MIT and INRIA datasets. The experimental results have shown that the proposed descriptor can effectively describe non-rigid objects with high articulation and improve the detection rate compared to other state-of-the-art object descriptors.
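A hedged sketch of the descriptor's structure follows: NR-LBP histograms computed in patches around shape-derived key points and concatenated. The key-point extraction (contour-template matching) is assumed to be done elsewhere; patch size and histogram details are illustrative.

```python
import numpy as np

def nrlbp_code(patch):
    """8-neighbour LBP code of a 3x3 patch, folded with its complement."""
    c = patch[1, 1]
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = sum(int(n >= c) << i for i, n in enumerate(neigh))
    return min(code, 255 - code)    # pattern and complement share one bin

def window_descriptor(gray, key_points, half=8):
    """Concatenate 128-bin NR-LBP histograms around (row, col) key points."""
    hists = []
    for r, c in key_points:
        region = gray[r - half:r + half, c - half:c + half]
        codes = [nrlbp_code(region[i:i + 3, j:j + 3])
                 for i in range(region.shape[0] - 2)
                 for j in range(region.shape[1] - 2)]
        hists.append(np.bincount(codes, minlength=128))
    return np.concatenate(hists)
```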


International Conference on Image Processing | 2010

Object detection using Non-Redundant Local Binary Patterns

Duc Thanh Nguyen; Zhimin Zong; Philip Ogunbona; Wanqing Li

The Local Binary Pattern (LBP) descriptor has been successfully used in various object recognition tasks because of its discriminative property and computational simplicity. In this paper, a variant of the LBP referred to as the Non-Redundant Local Binary Pattern (NRLBP) is introduced and its application to object detection is demonstrated. Compared with the original LBP descriptor, the NRLBP has the advantage of providing a more compact description of an object's appearance. Furthermore, the NRLBP is more discriminative since it reflects the relative contrast between the background and the foreground. The proposed descriptor is employed to encode human appearance in a human detection task. Experimental results show that the NRLBP is robust and adaptive to changes in the background and foreground, and also outperforms the original LBP in the detection task.
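The compactness mentioned here comes from folding each LBP code with its bitwise complement, which halves the codebook (256 to 128 patterns) and makes the code invariant to swapping foreground and background. Shown in isolation:

```python
def nrlbp(code):
    """Fold an 8-bit LBP code with its complement (non-redundant LBP)."""
    return min(code, 0xFF ^ code)

# A pattern and its inverse describe the same local structure:
assert nrlbp(0b00001111) == nrlbp(0b11110000)
```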


ACM Multimedia | 2015

ConvNets-Based Action Recognition from Depth Maps through Virtual Cameras and Pseudocoloring

Pichao Wang; Wanqing Li; Zhimin Gao; Chang Tang; Jing Zhang; Philip Ogunbona

In this paper, we propose to adopt ConvNets to recognize human actions from depth maps on relatively small datasets based on Depth Motion Maps (DMMs). In particular, three strategies are developed to effectively leverage the capability of ConvNets in mining discriminative features for recognition. Firstly, different viewpoints are mimicked by rotating virtual cameras around the subject, represented by the 3D points of the captured depth maps. This not only synthesizes more data from the captured samples, but also makes the trained ConvNets view-tolerant. Secondly, DMMs are constructed and further enhanced for recognition by encoding them into pseudo-RGB images, turning the spatio-temporal motion patterns into textures and edges. Lastly, through transfer learning from models originally trained on ImageNet for image classification, the three ConvNets are trained independently on the color-coded DMMs constructed in three orthogonal planes. The proposed algorithm was extensively evaluated on the MSRAction3D, MSRAction3DExt, and UTKinect-Action datasets and achieved state-of-the-art results.
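A minimal sketch of the DMM-plus-pseudocolouring pipeline, assuming depth sequences as a (frames, height, width) array. Summing absolute frame differences is a common DMM formulation, not necessarily the paper's exact weighting, and the jet colormap is an illustrative choice.

```python
import numpy as np
import matplotlib.cm as cm

def depth_motion_map(depth_seq):
    """Accumulate inter-frame motion energy of a (T, H, W) depth sequence."""
    diffs = np.abs(np.diff(depth_seq.astype(np.float32), axis=0))
    return diffs.sum(axis=0)

def pseudocolor(dmm):
    """Encode the single-channel DMM as a pseudo-RGB image for a ConvNet."""
    norm = (dmm - dmm.min()) / (dmm.max() - dmm.min() + 1e-6)
    return (cm.jet(norm)[..., :3] * 255).astype(np.uint8)
```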

Collaboration


Dive into Wanqing Li's collaboration.

Top Co-Authors

Pichao Wang

University of Wollongong

Ce Zhan

University of Wollongong

Farzad Safaei

University of Wollongong

Chang Tang

China University of Geosciences

Jing Zhang

University of Wollongong

Lei Wang

Information Technology University
