
Publication


Featured research published by Guoliang Lu.


Multimedia Tools and Applications | 2016

Efficient action recognition via local position offset of 3D skeletal body joints

Guoliang Lu; Yiqi Zhou; Xueyong Li; Mineichi Kudo

Recognizing human actions accurately in less computational time is important for practical usage. This paper presents an efficient framework for recognizing actions captured by an RGB-D camera. Novel action patterns are extracted by computing the position offsets of 3D skeletal body joints locally in the temporal extent of the video. Action recognition is then performed by assembling these offset vectors in a bag-of-words framework while also accounting for the spatial independence of body joints. We conducted extensive experiments on two benchmark datasets, the UCF dataset and the MSRC-12 dataset, to demonstrate the effectiveness of the proposed framework. Experimental results suggest that the proposed framework 1) extracts action patterns very quickly and is simple to implement; and 2) achieves comparable or better recognition accuracy than state-of-the-art approaches.
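The position-offset idea can be illustrated with a small sketch. The sign-based codebook below is a stand-in for a learned quantizer, and all names are illustrative rather than taken from the paper:

```python
from collections import Counter

# Each frame is a list of (x, y, z) joint positions.
def joint_offsets(frames, step=1):
    """Per-joint 3D position offsets between frames `step` apart."""
    offsets = []
    for t in range(len(frames) - step):
        frame_off = []
        for (x0, y0, z0), (x1, y1, z1) in zip(frames[t], frames[t + step]):
            frame_off.append((x1 - x0, y1 - y0, z1 - z0))
        offsets.append(frame_off)
    return offsets

def quantize(vec):
    """Toy codebook word: the sign pattern of the offset vector."""
    def sign(v):
        return (v > 0) - (v < 0)
    return tuple(sign(c) for c in vec)

def bag_of_words(frames, joint_index):
    """Histogram of quantized offsets for one joint, so each joint
    contributes its own histogram (spatial independence of joints)."""
    words = [quantize(off[joint_index]) for off in joint_offsets(frames)]
    return Counter(words)
```

A full pipeline would concatenate the per-joint histograms and feed them to a classifier.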


Neurocomputing | 2014

Learning action patterns in difference images for efficient action recognition

Guoliang Lu; Mineichi Kudo

A new framework is presented for single-person action recognition. It requires neither detection/localization of human-body bounding boxes nor motion estimation in each frame. The novel descriptor/pattern for action representation is learned from local temporal self-similarities (LTSSs) derived directly from difference images. The bag-of-words framework is then employed for action classification, taking advantage of these descriptors. We investigated the effectiveness of the framework on two public human action datasets: the Weizmann dataset and the KTH dataset. The proposed framework achieves recognition rates of 95.6% on the Weizmann dataset and 91.1% on the KTH dataset, both competitive with state-of-the-art approaches, while having high potential for faster execution.
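A minimal sketch of a local temporal self-similarity computed from difference images; cosine similarity is an assumed choice here, and the paper's exact measure may differ:

```python
import math

def diff_image(a, b):
    """Absolute difference of two grayscale frames (lists of rows)."""
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def similarity(d1, d2):
    """Cosine similarity between two flattened difference images."""
    v1 = [p for row in d1 for p in row]
    v2 = [p for row in d2 for p in row]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(y * y for y in v2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def ltss(frames, t, radius):
    """Self-similarity of the difference image at time t against the
    difference images in a local temporal window around t."""
    centre = diff_image(frames[t - 1], frames[t])
    return [similarity(centre, diff_image(frames[u - 1], frames[u]))
            for u in range(max(1, t - radius), min(len(frames), t + radius + 1))
            if u != t]
```

The resulting per-frame similarity vectors would then be quantized into bag-of-words descriptors.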


Multimedia Tools and Applications | 2017

Unsupervised, efficient and scalable key-frame selection for automatic summarization of surveillance videos

Guoliang Lu; Yiqi Zhou; Xueyong Li; Peng Yan

Recent years have witnessed dramatic growth in the deployment of vision-based surveillance in public spaces. Automatic summarization of surveillance videos (ASOSV) is hence becoming increasingly desirable in many real-world applications. For this purpose, a novel frame-selection framework is proposed in the present paper, which has three properties: 1) unsupervised: it works without any supervised learning or training; 2) efficient: it runs very fast, with experiments demonstrating faster-than-real-time performance; and 3) scalable: it achieves a hierarchical analysis/overview of video content. The performance of the proposed framework is systematically evaluated and compared with various state-of-the-art frame-selection techniques on collected video sequences and the publicly available ViSOR dataset. The experimental results demonstrate promising performance and good applicability to real-world problems.


Pattern Recognition Letters | 2013

Temporal segmentation and assignment of successive actions in a long-term video

Guoliang Lu; Mineichi Kudo; Jun Toyama

Temporal segmentation of successive actions in a long-term video sequence has been a long-standing problem in computer vision. In this paper, we exploit a novel learning-based framework. Given a video sequence, only a few characteristic frames are selected by the proposed selection algorithm; the likelihood with respect to trained models is then calculated in a pair-wise way, and segmentation is finally obtained as the model sequence that maximizes the likelihood. The average frame-level accuracy on the IXMAS dataset reached 80.5%, using only 16.5% of all frames, with a computation time of 1.57 s per video (1160 frames on average).
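Finding the model sequence that maximizes the likelihood can be sketched as a dynamic program over per-frame model log-likelihoods with a switching penalty; the penalty and the toy likelihoods below are illustrative, not the paper's exact formulation:

```python
def segment(loglik, switch_cost=1.0):
    """Assign one action model per frame, maximizing total log-likelihood
    minus a penalty for each model switch.  loglik[t][m] is the
    log-likelihood of frame t under model m; returns a label per frame."""
    n, M = len(loglik), len(loglik[0])
    best = [loglik[0][:]]   # best[t][m]: best score ending at t in model m
    back = []               # backpointers for recovering the label sequence
    for t in range(1, n):
        prev = best[-1]
        row, ptr = [], []
        for m in range(M):
            cands = [prev[k] - (switch_cost if k != m else 0.0)
                     for k in range(M)]
            k = max(range(M), key=lambda i: cands[i])
            row.append(cands[k] + loglik[t][m])
            ptr.append(k)
        best.append(row)
        back.append(ptr)
    m = max(range(M), key=lambda i: best[-1][i])
    labels = [m]
    for ptr in reversed(back):
        m = ptr[m]
        labels.append(m)
    return labels[::-1]
```

Contiguous runs of the same label form the temporal segments.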


Granular Computing | 2011

Robust human pose estimation from corrupted images with partial occlusions and noise pollutions

Guoliang Lu; Mineichi Kudo; Jun Toyama

Robust human pose estimation from visual observations has attracted much attention over the past two decades. However, the problem remains challenging because, in real-world applications, observations are often corrupted by partial occlusions, noise pollution, or both. In this paper, we propose to estimate human pose using robust silhouette matching in the original rectangular-coordinate space. In addition, a human action model is employed to determine reasonable matching results. Experimental results on the robustness sequences of the Weizmann dataset reveal that the proposed approach estimates human pose robustly and reasonably when pose observations are corrupted by partial occlusions or noise pollution.


Computer Analysis of Images and Patterns | 2011

Hierarchical foreground detection in dynamic background

Guoliang Lu; Mineichi Kudo; Jun Toyama

Foreground detection in dynamic backgrounds is one of the most challenging problems in many vision-based applications. In this paper, we propose a hierarchical foreground detection algorithm in the HSL color space. With the proposed algorithm, the experimental precision on five testing sequences reached 56.46%, the best among the four compared methods.


Machine Vision and Applications | 2018

Key-frame selection for automatic summarization of surveillance videos: a method of multiple change-point detection

Zhen Gao; Guoliang Lu; Chen Lyu; Peng Yan

Recent years have witnessed drastic growth in the volume of video captured in real-life scenarios, and thus there is an increasing demand for quickly viewing such videos in a constrained amount of time. In this paper, we focus on automatic summarization of surveillance videos and present a new key-frame selection method for this task. We first introduce a dissimilarity measure based on a symmetrized f-divergence for multiple change-point detection, and then use it to segment a given video sequence into a set of non-overlapping clips. Key frames are extracted from the resulting video clips by a typical clustering procedure to form the final video summary. Experiments on a wide range of testing data demonstrate excellent performance, outperforming state-of-the-art competitors, which suggests good potential for the proposed method in real-world applications.
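The symmetric dissimilarity step can be illustrated with the Jensen-Shannon divergence, one common symmetrized f-divergence between frame histograms; this is a stand-in, and the paper's specific measure may differ:

```python
import math

def _kl(p, q):
    """Kullback-Leibler divergence between two normalized histograms."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: a bounded, symmetric f-divergence."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

def change_scores(hists):
    """Dissimilarity between consecutive frame histograms; peaks in this
    sequence are candidate change points for segmenting the video."""
    return [js_divergence(hists[t - 1], hists[t])
            for t in range(1, len(hists))]
```

Frames within each detected segment would then be clustered, with cluster centers kept as key frames.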


Frontiers in Physiology | 2018

Automatic Change Detection for Real-Time Monitoring of EEG Signals

Zhen Gao; Guoliang Lu; Peng Yan; Chen Lyu; Xueyong Li; Wei Shang; Zhaohong Xie; Wanming Zhang

In recent years, automatic change detection for real-time monitoring of electroencephalogram (EEG) signals has attracted widespread interest, with a large number of clinical applications. However, it remains a challenging problem. This paper presents a novel framework for this task: joint time-domain features are first computed to extract the temporal fluctuations of a given EEG data stream; an auto-regressive (AR) linear model is then adopted to model the data, and temporal anomalies are calculated from that model to reflect the possibility that a change has occurred; finally, a non-parametric statistical test based on the Randomized Power Martingale (RPM) is performed to make change decisions from the resulting anomaly scores. We conducted experiments on the publicly available Bern-Barcelona EEG database, achieving promising results in terms of detection precision (96.97%), detection recall (97.66%), and computational efficiency. We also evaluated the proposed method on real-time detection of seizure occurrence in a monitored epilepsy patient. The results on both the testing database and the real application demonstrate the effectiveness and feasibility of the method for change detection in EEG signals. The proposed framework has two additional properties: (1) it uses a pre-defined AR model for modeling the past observed data, so it can operate in an unsupervised manner; and (2) it uses an adjustable threshold to achieve scalable decision making, so a coarse-to-fine detection strategy can be developed for quick detection or further analysis.
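The AR-residual-plus-martingale pipeline can be sketched as follows. The AR coefficients, epsilon, and threshold below are illustrative values, and the randomized rank-based p-value is the standard scheme used with power martingales:

```python
import random

def ar_predict(history, coeffs):
    """One-step AR prediction from the last len(coeffs) samples."""
    return sum(c * x for c, x in zip(coeffs, reversed(history)))

def rpm_detector(stream, coeffs, eps=0.92, threshold=10.0, rng=random.random):
    """Flag a change when a Randomized Power Martingale over AR-residual
    anomaly scores exceeds `threshold`; returns the sample index or None."""
    order = len(coeffs)
    scores, mart = [], 1.0
    for t in range(order, len(stream)):
        resid = abs(stream[t] - ar_predict(stream[t - order:t], coeffs))
        # Randomized p-value: rank of the new score among past scores.
        theta = rng()
        bigger = sum(s > resid for s in scores)
        equal = sum(s == resid for s in scores)
        p = (bigger + theta * (equal + 1)) / (len(scores) + 1)
        p = max(p, 1e-12)  # guard against theta == 0
        scores.append(resid)
        mart *= eps * p ** (eps - 1.0)  # power-martingale update
        if mart > threshold:
            return t
    return None
```

Raising or lowering `threshold` gives the coarse-to-fine trade-off the abstract mentions: a low threshold reacts quickly, a high one suppresses false alarms.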


Intelligent Information Hiding and Multimedia Signal Processing | 2014

Recognizing Actions with Multi-view 2D Observations Recovered from Depth Maps

Guoliang Lu; Yiqi Zhou; Xueyong Li

Depth-map-based action recognition has received much research attention in recent years due to its robustness to environmental elements during capture and its relatively good performance in protecting user privacy. Taking captured sequential depth maps as input, we propose a framework for recognizing actions from such data. We first recover multi-view 2D observations in each frame of the sequence and then accumulate them using a motion energy image (MEI) in each observing view. Action features combining occupancy and motion descriptors are extracted to capture the discriminative patterns in the resulting MEIs, and are then fed to action modeling based on a Gaussian Mixture Model (GMM) and recognition based on Bayesian theory. Experimental results on the MSR Action3D dataset show better recognition performance by the proposed framework compared with three competitors, revealing its effectiveness and superiority in depth-map-based action recognition.
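The MEI accumulation step is simple to sketch: a motion energy image is the union of thresholded frame differences over the sequence (the threshold value here is illustrative):

```python
def motion_energy_image(frames, threshold=10):
    """Binary motion-energy image: a pixel is 1 if it ever changed by
    more than `threshold` between consecutive frames of the sequence."""
    h, w = len(frames[0]), len(frames[0][0])
    mei = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                if abs(cur[i][j] - prev[i][j]) > threshold:
                    mei[i][j] = 1
    return mei
```

One such image would be built per recovered 2D view, then described with occupancy and motion features.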


Mechanical Systems and Signal Processing | 2017

A novel framework of change-point detection for machine monitoring

Guoliang Lu; Yiqi Zhou; Changhou Lu; Xueyong Li

Collaboration


Dive into Guoliang Lu's collaborations.

Top Co-Authors

Chen Lyu

Shandong Normal University
