Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhanpeng Shao is active.

Publication


Featured research published by Zhanpeng Shao.


Pattern Recognition | 2015

Integral invariants for space motion trajectory matching and recognition

Zhanpeng Shao; Youfu Li

Motion trajectories provide a key and informative clue for characterizing the motion of humans, robots and moving objects. In this paper, we propose new integral invariants for space motion trajectories that enable effective motion trajectory matching and recognition. The integral invariants are defined as line integrals of a class of kernel functions along a motion trajectory. A robust estimation of the integral invariants is formulated based on blurred segments of noisy discrete curves. A non-linear distance over the integral invariants is then defined to measure similarity for trajectory matching and recognition. In addition to being invariant under transformation groups, these integral invariants have desirable properties such as noise insensitivity, computational locality, and uniqueness of representation. Experimental results on trajectory matching and sign recognition show the effectiveness and robustness of the proposed integral invariants. Highlights: a meaningful definition of integral invariants for space trajectories is proposed; a novel estimation method of the integral invariants for discrete trajectories is derived; a non-linear distance is defined to measure the similarity between trajectories; the integral invariants are robust to noise, occlusions, and transformations; experiments examine their effectiveness and robustness in applications.
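The general construction can be illustrated with a short numpy sketch: a simple distance kernel is integrated over a local window around each trajectory point, and the resulting signatures are compared with a normalised distance. The kernel, window size (`radius`) and distance used here are illustrative placeholders rather than the paper's exact formulation.

```python
import numpy as np

def integral_invariant(traj, radius=5):
    """Illustrative integral invariant: for each point, integrate a simple
    distance kernel over a local window of the discrete trajectory.
    traj: (N, 3) array of 3-D positions sampled along the motion trajectory.
    The signature depends only on pairwise distances, so it is invariant to
    rigid transformations of the trajectory."""
    n = len(traj)
    sig = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = traj[lo:hi]
        # line-integral approximation: distances to the centre point,
        # weighted by local arc-length elements
        seg = np.linalg.norm(np.diff(window, axis=0), axis=1)
        ds = np.concatenate([seg, seg[-1:]]) if len(seg) else np.ones(len(window))
        sig[i] = np.sum(np.linalg.norm(window - traj[i], axis=1) * ds)
    return sig

def signature_distance(sig_a, sig_b):
    """Simple nonlinear (normalised L2) distance between two signatures
    resampled to a common length; a stand-in for the paper's distance."""
    m = min(len(sig_a), len(sig_b))
    a = np.interp(np.linspace(0, 1, m), np.linspace(0, 1, len(sig_a)), sig_a)
    b = np.interp(np.linspace(0, 1, m), np.linspace(0, 1, len(sig_b)), sig_b)
    return np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b) + 1e-9)
```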


International Conference on Robotics and Automation | 2013

A new descriptor for multiple 3D motion trajectories recognition

Zhanpeng Shao; Youfu Li

Motion trajectories give a meaningful and informative clue for characterizing the motion of humans, robots or moving objects. The descriptor used for motion trajectories therefore plays an important role in motion recognition for many robotic tasks. However, an effective and compact descriptor for multiple 3D motion trajectories under complicated conditions is still lacking. In this paper, we propose a novel invariant descriptor for multiple motion trajectories based on the kinematic relations among multiple moving parts. Two kinds of kinematic relations among multiple trajectories are considered: articulated and independent trajectories. A spherical coordinate system is introduced to obtain a uniform and compact representation for both kinds, and the concept of a relative trajectory is first defined in terms of orientation and distance changes, capturing the relative movement of each child trajectory with respect to the root trajectory. The new descriptor is then constructed by combining the differential invariants of the root trajectory with the orientation and distance variations of each relative trajectory. Finally, the effectiveness and robustness of the proposed descriptor for multiple trajectories under complex circumstances are validated in two experiments on sign language and human action recognition.
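The relative-trajectory idea can be sketched as follows: each child trajectory point is expressed relative to the synchronised root trajectory point and converted to spherical coordinates, giving per-frame distance and orientation (azimuth, elevation) features. The function `relative_spherical` and its exact outputs are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def relative_spherical(root, child):
    """Hypothetical sketch of a 'relative trajectory': express each child
    trajectory point relative to the root trajectory point at the same time
    step, in spherical coordinates (distance, azimuth, elevation).
    root, child: (T, 3) arrays of synchronised 3-D positions."""
    d = child - root                          # relative displacement per frame
    r = np.linalg.norm(d, axis=1)             # distance-change feature
    azimuth = np.arctan2(d[:, 1], d[:, 0])    # orientation in the x-y plane
    elevation = np.arcsin(np.clip(d[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    return np.stack([r, azimuth, elevation], axis=1)
```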


Pattern Recognition | 2018

DSRF: A flexible trajectory descriptor for articulated human action recognition

Yao Guo; Youfu Li; Zhanpeng Shao

In this paper, we propose a novel skeletal representation that models the human body as articulated interconnections of multiple rigid bodies, so that an articulated human action can be viewed as a combination of multiple rigid body motion trajectories. The Dual Square-Root Function (DSRF) descriptor is first introduced by computing gradient-based invariants of 6-D rigid body motion trajectories, which offers substantial advantages over the raw data. To incorporate the DSRF descriptors into the skeletal representation effectively, the skeleton is decomposed into five parts and a most-informative-part scheme is proposed to obtain a compact representation. In addition, two rigid body configurations are investigated for representing the movement of each part. In the recognition stage, we first follow a simple template matching strategy with a nearest neighbor classifier, for which a robust distance measure between two skeletal representations is designed. The DSRF-based skeletal representation can also be encoded as a sparse histogram using a bag-of-words approach, and a support vector machine with a chi-square kernel is then trained for multiclass recognition. Experimental results on three benchmark datasets demonstrate that the proposed approach outperforms existing skeleton representations in terms of recognition accuracy.
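A minimal scikit-learn sketch of the recognition stage described above: DSRF-style local descriptors (not computed here) are quantised into bag-of-words histograms, and a multiclass SVM is trained on a precomputed chi-square kernel. The codebook size, `gamma`, and `C` are placeholder hyperparameters, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

def bow_histograms(descriptor_sets, codebook_size=64, seed=0):
    """Encode per-sequence local descriptors as bag-of-words histograms.
    descriptor_sets: list of (n_i, d) arrays, e.g. frame-level descriptors
    for one action sequence."""
    codebook = KMeans(n_clusters=codebook_size, random_state=seed, n_init=10)
    codebook.fit(np.vstack(descriptor_sets))
    hists = []
    for d in descriptor_sets:
        words = codebook.predict(d)
        h = np.bincount(words, minlength=codebook_size).astype(float)
        hists.append(h / max(h.sum(), 1.0))   # normalised sparse histogram
    return np.array(hists)

def train_chi2_svm(train_hists, train_labels, gamma=1.0):
    """Multiclass SVM with a precomputed chi-square kernel."""
    K = chi2_kernel(train_hists, gamma=gamma)
    return SVC(kernel="precomputed", C=10.0).fit(K, train_labels)
```

At test time, the kernel between test and training histograms, chi2_kernel(test_hists, train_hists, gamma=gamma), would be passed to the trained classifier's predict method.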


IEEE Transactions on Systems, Man, and Cybernetics | 2018

RRV: A Spatiotemporal Descriptor for Rigid Body Motion Recognition

Yao Guo; Youfu Li; Zhanpeng Shao

The motion behavior of a rigid body can be characterized by a six-degrees-of-freedom motion trajectory, which contains the 3-D position of a reference point on the rigid body and its 3-D rotation over time. This paper devises a Rotation and Relative Velocity (RRV) descriptor by exploring local translational and rotational invariants of rigid body motion trajectories; the descriptor is insensitive to noise and invariant to rigid transformation and scale. The RRV descriptor is then applied to characterize the motion of a human body skeleton modeled as articulated interconnections of multiple rigid bodies. To show its descriptive ability, we explore its potential in different rigid body motion recognition tasks. Experimental results on benchmark datasets demonstrate that the RRV descriptor captures discriminative motion patterns and achieves superior results on various recognition tasks.
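The flavour of such rotation and relative-velocity invariants can be sketched in numpy: per-frame speed (normalised for scale) and the angle of the relative rotation between consecutive frames are both unchanged by a global rigid transformation. This is an illustrative construction, not the paper's exact RRV definition.

```python
import numpy as np

def rrv_like_features(positions, rotations, dt=1.0):
    """Illustrative local invariants in the spirit of the RRV descriptor.
    positions: (T, 3) reference-point positions; rotations: (T, 3, 3)
    rotation matrices of the rigid body over time."""
    # translational part: speed (norm of relative velocity), invariant to a
    # global rigid transformation; divide by mean speed for scale invariance
    vel = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    speed = speed / max(speed.mean(), 1e-9)
    # rotational part: angle of the relative rotation R_t^T R_{t+1}
    rel = np.einsum('tij,tik->tjk', rotations[:-1], rotations[1:])
    cos_theta = np.clip((np.trace(rel, axis1=1, axis2=2) - 1.0) / 2.0, -1.0, 1.0)
    ang_speed = np.arccos(cos_theta) / dt
    return np.stack([speed, ang_speed], axis=1)   # (T-1, 2) local invariant signal
```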


International Conference on Mechatronics and Automation | 2016

DSRF: A flexible descriptor for effective rigid body motion trajectory recognition

Yao Guo; Youfu Li; Zhanpeng Shao

Rigid body motion trajectories provide sufficient clues for understanding the motion behavior of objects of interest, and an invariant descriptor for a motion trajectory offers substantial advantages over the raw data. This paper proposes a Dual Square-Root Function (DSRF) descriptor that relies only on gradient-based shape features of normalized rigid body motion trajectories, whereas previous works involve high-order time derivatives. The DSRF descriptor is rich in description; moreover, it is invariant to scaling and rigid transformation, robust to noise, and well suited to matching trajectories executed at different rates. We evaluate the DSRF descriptor on different trajectory-based rigid body motion recognition tasks, and experimental results on two benchmark datasets demonstrate that it outperforms previous descriptors in terms of recognition accuracy and robustness.
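A square-root-velocity style gradient feature conveys the spirit of "gradient-based features of a normalised trajectory" and of rate robustness; the paper's Dual Square-Root Function itself is not reproduced here, and this sketch is only an assumption-laden illustration.

```python
import numpy as np

def sqrt_velocity_feature(traj, eps=1e-9):
    """A square-root-velocity style gradient feature for a normalised
    trajectory, shown only to illustrate the 'square root of the gradient'
    idea. traj: (T, d) samples, d = 3 (translation) or 6 (full rigid motion)."""
    # normalise for translation and scale
    centred = traj - traj.mean(axis=0)
    centred = centred / max(np.linalg.norm(centred), eps)
    # time gradient, then square-root normalisation of its magnitude
    grad = np.gradient(centred, axis=0)
    mag = np.linalg.norm(grad, axis=1, keepdims=True)
    return grad / np.sqrt(np.maximum(mag, eps))
```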


International Conference on Mechatronics and Automation | 2014

Multiscale integral invariant for motion trajectory matching and recognition

Zhanpeng Shao; Youfu Li

Motion trajectories obtained from visual tracking provide an important clue for understanding motion content. This paper presents a multiscale integral invariant for motion trajectory representation that can be fed to a classifier for motion retrieval, action recognition and gesture recognition. A meaningful integral invariant for motion trajectories under group transformations is first defined on the progression of the Frenet-Serret frame, with a dynamic integration domain bounded by a ball kernel function. The corresponding estimation approach is then developed based on blurred segments of noisy discrete curves. We further derive a multiscale representation of the proposed integral invariant by varying the radius of the ball kernel, so that the features of a motion trajectory can be perceived at multiple scales in a coarse-to-fine manner. Experiments examine the robustness and effectiveness of the proposed representation in capturing motion cues for trajectory matching and gesture recognition. The multiscale integral invariant also benefits shape representation and matching in both planar and 3D object recognition.
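The coarse-to-fine idea can be illustrated by evaluating a ball-kernel style integral at several radii and stacking the per-scale signatures; the Frenet-Serret construction of the actual invariant is omitted, and the radii and normalisation below are placeholders.

```python
import numpy as np

def multiscale_ball_invariant(traj, radii=(2, 4, 8, 16)):
    """Illustrative multiscale signature: at each scale, integrate distances
    to points inside a ball (here an index window) around each trajectory
    point, mimicking a ball-kernel integral invariant at coarse-to-fine radii.
    traj: (N, 3) trajectory samples."""
    n = len(traj)
    scales = []
    for radius in radii:
        sig = np.empty(n)
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            sig[i] = np.linalg.norm(traj[lo:hi] - traj[i], axis=1).sum()
        scales.append(sig / max(sig.max(), 1e-9))   # per-scale normalisation
    return np.stack(scales, axis=1)                 # (N, num_scales) feature
```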


Pacific Rim International Conference on Artificial Intelligence | 2018

Deep CRF-Graph Learning for Semantic Image Segmentation

Fuguang Ding; Zhenhua Wang; Dongyan Guo; Shengyong Chen; Jianhua Zhang; Zhanpeng Shao

We show that conditional random fields (CRFs) with learned heterogeneous graphs outperform their counterparts with pre-designated, heuristically constructed homogeneous graphs. Without introducing any additional annotations, we use four deep convolutional neural networks (CNNs) to learn the connections of each pixel to its left, top, upper-left and upper-right neighbors. The results are then fused to obtain superpixel-level CRF graphs. The CRF model parameters are learned by minimizing the negative pseudo-log-likelihood of the potential function. Our results show that the learned graph delivers significantly better segmentation results than CRFs with pre-designated graphs, and achieves state-of-the-art performance when combined with CNN features.
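The pseudo-likelihood objective mentioned above can be sketched for a pairwise CRF on an already-learned graph; the four CNNs that predict the edge connections are outside the scope of this snippet, and the potentials and weights here are illustrative assumptions.

```python
import numpy as np

def negative_pseudo_log_likelihood(unary, pairwise_w, edges, labels):
    """Sketch of the pseudo-likelihood objective for a pairwise CRF on a
    learned (super-pixel) graph. unary: (N, L) unary potentials per node and
    label; pairwise_w: (L, L) label-compatibility weights; edges: list of
    (i, j) node pairs in the learned graph; labels: (N,) observed labels."""
    n, L = unary.shape
    # pairwise messages each node receives from its graph neighbours,
    # conditioning on the observed labels of those neighbours
    msg = np.zeros((n, L))
    for i, j in edges:
        msg[i] += pairwise_w[:, labels[j]]
        msg[j] += pairwise_w[:, labels[i]]
    scores = unary + msg                              # local conditional scores
    m = scores.max(axis=1, keepdims=True)
    log_z = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    log_cond = scores[np.arange(n), labels] - log_z   # log P(y_i | y_N(i), x)
    return -log_cond.sum()
```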


Neurocomputing | 2018

Understanding human activities in videos: A joint action and interaction learning approach

Zhenhua Wang; Jiali Jin; Tong Liu; Sheng Liu; Jianhua Zhang; Shengyong Chen; Zhen Zhang; Dongyan Guo; Zhanpeng Shao

In video surveillance of multiple people, human interactions and action categories are strongly correlated, and identifying the interaction configuration is of significant importance to the success of action recognition. Interactions are typically estimated using heuristics or treated as latent variables; however, the former often introduces incorrect interaction configurations, while the latter amounts to solving challenging optimization problems. Here we address these problems systematically by proposing a novel structured learning framework that enables the joint prediction of actions and interactions. Both features learned via deep networks and human interaction context are leveraged to encode the correlations among actions and pairwise interactions in a structured model, and all model parameters are trained in a large-margin framework. To solve the associated inference problem, we present two optimization algorithms: alternating search and belief propagation. Experiments on both synthetic and real datasets demonstrate the strength of the proposed approach.
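A toy version of the alternating-search inference illustrates the joint prediction: interactions are updated with actions held fixed, then actions are greedily re-estimated given the interactions. The score tensors and update rules below are simplified stand-ins for the paper's structured model, not its exact objective.

```python
import numpy as np

def alternating_search(action_scores, interaction_scores, n_iters=10):
    """Toy alternating-search inference for joint actions and interactions.
    action_scores: (P, A) per-person action scores;
    interaction_scores: (P, P, A, A) compatibility of the action pair
    (a_i, a_j) when persons i < j interact."""
    P, A = action_scores.shape
    actions = action_scores.argmax(axis=1)          # independent initial guess
    interact = np.zeros((P, P), dtype=bool)
    for _ in range(n_iters):
        # step 1: with actions fixed, turn on interactions with positive score
        for i in range(P):
            for j in range(i + 1, P):
                interact[i, j] = interaction_scores[i, j, actions[i], actions[j]] > 0
        # step 2: with interactions fixed, greedily update each person's action
        for i in range(P):
            gain = action_scores[i].copy()
            for j in range(P):
                if j == i:
                    continue
                a, b = (i, j) if i < j else (j, i)
                if not interact[a, b]:
                    continue
                pair = interaction_scores[a, b]     # (A, A), indexed (action_a, action_b)
                gain += pair[:, actions[j]] if i < j else pair[actions[j], :]
            actions[i] = gain.argmax()
    return actions, interact
```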


IEEE Transactions on Industrial Informatics | 2017

On Multiscale Self-Similarities Description for Effective Three-Dimensional/Six-Dimensional Motion Trajectory Recognition

Yao Guo; Youfu Li; Zhanpeng Shao

Motion trajectories provide compact and informative clues for characterizing the motion behavior of human bodies, robots, and moving objects. This paper devises an invariant and unified descriptor for three-dimensional/six-dimensional (3-D/6-D) motion trajectory recognition by exploring the latent motion patterns in the multiscale self-similarity matrices (MSM) of a motion trajectory and its components. The MSM approach transforms a motion trajectory in Euclidean space into a set of similarity matrices with strong invariances, each of which can be regarded as a grayscale image. The histograms of oriented gradients features extracted from the MSM representation are then concatenated to form the final trajectory descriptor. In addition, an improved kernel MSM is proposed by computing pairwise kernel distances. Finally, extensive 3-D/6-D motion trajectory recognition experiments on three public datasets with a linear support vector machine classifier verify the effectiveness and efficiency of the proposed approach.
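A compact sketch of the MSM pipeline: the trajectory's pairwise-distance self-similarity matrix is treated as a grayscale image and HOG features are extracted at several scales (here produced by temporal subsampling, which is only one plausible multiscale variant); the paper's exact multiscale construction and HOG parameters may differ.

```python
import numpy as np
from skimage.feature import hog   # used only to turn each matrix into a HOG vector

def self_similarity_matrix(traj):
    """Pairwise Euclidean self-similarity matrix of a trajectory (T, d); it
    depends only on relative distances, hence the strong invariances."""
    diff = traj[:, None, :] - traj[None, :, :]
    ssm = np.linalg.norm(diff, axis=2)
    return ssm / max(ssm.max(), 1e-9)   # normalise to [0, 1] like a grayscale image

def msm_descriptor(traj, scales=(1, 2, 4)):
    """Concatenate HOG features of self-similarity matrices built from the
    trajectory subsampled at several temporal scales (placeholder parameters)."""
    feats = []
    for s in scales:
        ssm = self_similarity_matrix(traj[::s])
        feats.append(hog(ssm, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.concatenate(feats)
```

The resulting descriptor vectors would then be fed to a linear SVM for recognition, as in the experiments described above.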


International Conference on Control, Automation, Robotics and Vision | 2014

Online visual object tracking with supervised sparse representation and learning

Tianxiang Bai; Youfu Li; Zhanpeng Shao

In this paper, we propose an online visual object tracking algorithm based on a discriminative sparse representation framework with supervised learning. Unlike generative sparse-representation-based tracking algorithms, the proposed method casts tracking as a binary classification task. A linear classifier is embedded into the sparse representation model by incorporating the classification error into the objective function, which yields discriminative classification. The dictionary and the classifier are jointly trained using an online dictionary learning algorithm, allowing the model to adapt to dynamic variations of the target appearance and background environment. The target location is updated based on the classification score and a greedy-search motion model. The proposed method is evaluated on four benchmark datasets and compared with three state-of-the-art tracking algorithms; the results show that the discriminative sparse representation improves tracking performance.
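The discriminative sparse representation can be sketched as follows: candidate patches are sparse-coded against a dictionary (here with a few ISTA iterations) and scored by a linear classifier on the codes. The joint online training of the dictionary and classifier is not shown, and the function names and parameters are illustrative.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iters=100):
    """Minimal ISTA solver for min_a 0.5*||x - D a||^2 + lam*||a||_1, standing
    in for the sparse coding step. x: (m,) feature vector; D: (m, K) dictionary."""
    L = np.linalg.norm(D, ord=2) ** 2 + 1e-9      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding
    return a

def score_candidates(candidates, D, w, b=0.0, lam=0.1):
    """Score candidate target patches with a linear classifier (w, b) applied
    to their sparse codes; a tracker would keep the highest-scoring candidate."""
    return np.array([w @ ista_sparse_code(c, D, lam) + b for c in candidates])
```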

Collaboration


Dive into Zhanpeng Shao's collaborations.

Top Co-Authors

Youfu Li, City University of Hong Kong
Yao Guo, City University of Hong Kong
Shengyong Chen, Tianjin University of Technology
Zhenhua Wang, Zhejiang University of Technology
Dongyan Guo, Nanjing University of Science and Technology
Jianhua Zhang, Zhejiang University of Technology
Xiaolong Zhou, Zhejiang University of Technology
Tianxiang Bai, City University of Hong Kong
Chengbin Huang, Zhejiang University of Technology
Fuguang Ding, Zhejiang University of Technology