Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pengfei Shi is active.

Publication


Featured research published by Pengfei Shi.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Object Tracking using Incremental 2D-PCA Learning and ML Estimation

Tiesheng Wang; Irene Yu-Hua Gu; Pengfei Shi

Video surveillance has drawn increasing interest in recent years. This paper addresses the problem of tracking moving objects in video. A two-step processing procedure is proposed: an incremental 2DPCA (two-dimensional principal component analysis)-based method for characterizing objects given the tracked regions, and an ML (maximum likelihood) blob-tracking process given the object characterization and the previous blob sequence. The proposed incremental 2DPCA updates the row- and column-projected covariance matrices recursively, and is computationally more efficient for online learning of dynamic objects. The proposed ML blob tracking takes into account both shape information and object characteristics. Tests and evaluations were performed on indoor and outdoor image sequences, each containing a single moving object against a dynamic background, and showed good tracking results. Comparisons with a method based on conventional PCA were also made.
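As an illustration of the incremental 2DPCA step described above, the following minimal sketch (not the authors' implementation) assumes fixed-size grayscale object patches and a simple exponential-forgetting update of the row- and column-projected covariance matrices; the 2DPCA bases are then obtained by eigendecomposition.

```python
import numpy as np

def update_2dpca(patch, mean, cov_row, cov_col, forget=0.95):
    """One recursive update of the row- and column-projected covariance
    matrices used in incremental 2DPCA (illustrative sketch only)."""
    mean = forget * mean + (1.0 - forget) * patch              # running mean image
    A = patch - mean                                           # centered observation
    cov_row = forget * cov_row + (1.0 - forget) * (A.T @ A)    # (w, w) row-projected
    cov_col = forget * cov_col + (1.0 - forget) * (A @ A.T)    # (h, h) column-projected
    return mean, cov_row, cov_col

def leading_eigenvectors(cov, k):
    """Return the k leading eigenvectors of a symmetric covariance matrix."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argsort(vals)[::-1][:k]]

# usage: feed tracked patches one by one, then keep a low-dimensional 2DPCA basis
h, w, k = 32, 32, 4
mean, cov_row, cov_col = np.zeros((h, w)), np.zeros((w, w)), np.zeros((h, h))
for _ in range(10):
    patch = np.random.rand(h, w)                               # stand-in for a tracked region
    mean, cov_row, cov_col = update_2dpca(patch, mean, cov_row, cov_col)
U = leading_eigenvectors(cov_col, k)                           # column basis (h, k)
V = leading_eigenvectors(cov_row, k)                           # row basis (w, k)
features = U.T @ (patch - mean) @ V                            # k x k 2DPCA feature matrix
```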


Advances in Multimedia | 2006

An eigenbackground subtraction method using recursive error compensation

Zhifei Xu; Pengfei Shi; Irene Yu-Hua Gu

Eigenbackground subtraction is a commonly used method for moving object detection. Based on eigenvalue decomposition, it detects foreground objects from the difference between an input image and a reconstructed background image. Because the background is reconstructed from the eigenbackground in the least-mean-squared-error sense, foreground regions introduce errors that spread over the entire reconstructed reference image, degrading the quality of the reconstructed background and leading to inaccurate moving object detection. To compensate for these errors, an efficient method is proposed that uses recursive error compensation and an adaptively computed threshold. Experiments were conducted on a range of image sequences of varying complexity, and performance was evaluated both qualitatively and quantitatively. Comparisons with two existing methods show that the proposed method achieves better approximations of the background images and more accurate detection of foreground objects.
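A minimal sketch of eigenbackground subtraction with a recursive error compensation loop is given below. It assumes grayscale frames, uses a fixed threshold for clarity (the paper computes it adaptively), and replaces likely-foreground pixels with reconstructed background values before re-projecting.

```python
import numpy as np

def learn_eigenbackground(frames, k=5):
    """Learn a mean image and the top-k eigenbackground basis from training frames.
    frames: (N, H, W) grayscale frames, ideally with little foreground activity."""
    X = frames.reshape(frames.shape[0], -1).astype(np.float64)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                                        # rows span the background subspace

def detect_foreground(frame, mean, basis, thresh=30.0, iters=3):
    """Recursive error compensation: pixels flagged as foreground are replaced by
    reconstructed background values before re-projecting, reducing their influence."""
    x = frame.reshape(-1).astype(np.float64)
    comp = x.copy()
    mask = np.zeros(x.shape, dtype=bool)
    for _ in range(iters):
        recon = basis.T @ (basis @ (comp - mean)) + mean       # projection onto eigenbackground
        mask = np.abs(x - recon) > thresh                      # residual thresholding
        comp = np.where(mask, recon, x)                        # compensate likely-foreground pixels
    return mask.reshape(frame.shape)
```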


International Conference on Image Processing | 2008

Face tracking using Rao-Blackwellized particle filter and pose-dependent probabilistic PCA

Tiesheng Wang; Irene Yu-Hua Gu; Andrew G. Backhouse; Pengfei Shi

This paper deals with tracking face blobs under pose changes. We propose a novel tracking method that handles face pose changes during tracking. Tracking is formulated as an approximate solution to the MAP estimate of the state vector, which consists of a linear and a nonlinear part. Multi-pose face appearances are described by local linear models, each related to a single pose and estimated by probabilistic PCA (PPCA). A Markov model with pose indices as its states models the transitions between poses. The shape and location of face blobs and the associated pose indices form the nonlinear part and are estimated by a Rao-Blackwellized particle filter (RBPF), which enables separate estimation of the linear state vector by marginalizing the joint probability. The proposed method has been tested on videos containing frequent face pose changes and large illumination variations, with five poses (left, frontal, right, up, down) modeled. The tracking results are robust to variable speed of pose changes and yield relatively tight bounding boxes.
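The sketch below illustrates only the particle-filter part of such a scheme: particles carry a blob position plus a discrete pose index drawn from a Markov transition matrix, and are reweighted by a per-pose appearance likelihood. The transition matrix and the stand-in likelihood are hypothetical placeholders, not values from the paper, and the Rao-Blackwellized linear update is omitted for brevity.

```python
import numpy as np

POSES = ["left", "frontal", "right", "up", "down"]
T = np.full((5, 5), 0.05) + np.eye(5) * 0.75                   # hypothetical pose transition matrix
T /= T.sum(axis=1, keepdims=True)

def propagate(particles, motion_std=2.0):
    """Diffuse blob positions and sample the next pose from the Markov chain."""
    particles["xy"] = particles["xy"] + np.random.randn(*particles["xy"].shape) * motion_std
    particles["pose"] = np.array([np.random.choice(5, p=T[p]) for p in particles["pose"]])
    return particles

def reweight_and_resample(particles, likelihood):
    """likelihood(xy, pose) plays the role of the pose-dependent appearance model."""
    w = np.array([likelihood(xy, p) for xy, p in zip(particles["xy"], particles["pose"])])
    w /= w.sum()
    idx = np.random.choice(len(w), size=len(w), p=w)           # multinomial resampling
    return {key: val[idx] for key, val in particles.items()}

# usage with a stand-in likelihood (a real system would evaluate the PPCA density per pose)
n = 200
particles = {"xy": np.zeros((n, 2)), "pose": np.full(n, 1)}    # all particles start frontal
dummy = lambda xy, pose: np.exp(-0.5 * (xy ** 2).sum())
particles = reweight_and_resample(propagate(particles), dummy)
```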


Optical Engineering | 2008

Recursive error-compensated dynamic eigenbackground learning and adaptive background subtraction in video

Zhifei Xu; Irene Yu-Hua Gu; Pengfei Shi

We address the problem of foreground object detection through background subtraction. Although eigenbackground models are successful in many computer vision applications, background subtraction based on a conventional eigenbackground may suffer from high false-alarm rates in foreground detection due to possible absorption of foreground changes into the eigenbackground model. This paper introduces an improved eigenbackground modeling method for videos that recursively applies an error compensation process to reduce the influence of foreground moving objects on the eigenbackground model. An adaptive threshold method is also introduced for background subtraction, where the threshold is determined by combining a fixed global threshold and a variable local threshold. A fast algorithm is then given as an approximation to the proposed method by imposing and exploiting a constraint on motion consistency, leading to a reduction of about 50% in computation. Experiments have been performed on a range of videos with satisfactory results. Performance is evaluated using an objective criterion, and comparisons are made with two existing methods.
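One way to combine a fixed global threshold with a variable local threshold, as described above, is sketched below; the combination rule, window size, and parameter names are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_foreground_mask(residual, t_global=25.0, k=1.5, win=15):
    """Threshold the background-subtraction residual with a combination of a
    fixed global threshold and a variable local one (illustrative combination)."""
    r = residual.astype(float)
    mu = uniform_filter(r, size=win)                           # local mean of the residual
    var = uniform_filter(r ** 2, size=win) - mu ** 2
    t_local = mu + k * np.sqrt(np.maximum(var, 0.0))           # variable local threshold
    return residual > 0.5 * (t_global + t_local)               # one possible way to combine the two
```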


International Conference on Multimedia and Expo | 2009

Adaptive particle filters for visual object tracking using joint PCA appearance model and consensus point correspondences

Tiesheng Wang; Irene Yu-Hua Gu; Zulfiqar Hasan Khan; Pengfei Shi

This paper addresses moving object tracking in video. We propose a novel tracking scheme that jointly exploits local object features, through consensus point correspondences, and global object appearance and shape models, through adaptive particle filter-based eigen-tracking. The main novelties are: (a) consensus feature point correspondences are employed to estimate the motion vector of the shape model; (b) adaptive particle filters and a motion-corrected state vector are employed for joint appearance- and shape-based eigen-tracking. The number of particles is chosen automatically based on an updated estimate of the covariance matrix. Further, online learning is made adaptive to avoid learning from partially occluded objects. The proposed scheme is realized by integrating SURF and RANSAC [8, 9] to estimate consensus point correspondences and by modifying an existing particle filter-based eigen-tracking method [4]. Experimental results on tracking moving objects in videos show that the proposed scheme provides more accurate tracking, especially for objects with fast motion or long-term partial occlusions, while the average number of particles is significantly reduced. Comparisons with an existing method show that the proposed scheme provides improved tracking accuracy at the cost of more computation.
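The consensus-correspondence step can be illustrated as follows. The paper uses SURF with RANSAC; this sketch substitutes ORB (available in base OpenCV, whereas SURF requires the non-free contrib module) and keeps only RANSAC inliers when estimating the mean displacement of the tracked region.

```python
import cv2
import numpy as np

def consensus_motion(prev_patch, curr_frame):
    """Estimate the motion of a tracked region from consensus point correspondences.
    Both inputs are grayscale uint8 images; parameters are illustrative."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(prev_patch, None)
    k2, d2 = orb.detectAndCompute(curr_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC keeps only the consensus (inlier) correspondences
    _, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    good = inliers.ravel().astype(bool)
    return (dst[good] - src[good]).mean(axis=0)                # mean displacement (dx, dy)
```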


Chinese Conference on Biometric Recognition | 2004

An iris segmentation procedure for iris recognition

Xiaoyan Yuan; Pengfei Shi

Iris segmentation is a critical stage in the iris recognition process. In this paper, a procedure for iris segmentation is presented, designed on the basis of the natural properties of the iris. The proposed procedure consists of two main steps: circle localization and non-iris region detection. The method takes into consideration typical problems most likely to appear in practice, and experiments show that it achieves good and robust results.
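The circle-localization step is commonly realized with a Hough transform; the sketch below uses OpenCV's HoughCircles with illustrative parameters that are not taken from the paper.

```python
import cv2
import numpy as np

def localize_iris_circles(gray):
    """Locate pupil/iris boundaries as circles, a common first step in iris
    segmentation. Returns rows of (x, y, r), or None if no circle is found."""
    blurred = cv2.medianBlur(gray, 5)                          # suppress eyelash noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2, minDist=40,
                               param1=100, param2=40, minRadius=20, maxRadius=150)
    return None if circles is None else np.round(circles[0]).astype(int)

# non-iris regions (eyelids, eyelashes, reflections) would then be masked out,
# e.g. by intensity thresholding inside the detected annulus
```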


International Conference on Image Processing | 2015

Sparse coding-based spatiotemporal saliency for action recognition

Tao Zhang; Long Xu; Jie Yang; Pengfei Shi; Wenjing Jia

In this paper, we address the problem of human action recognition by representing image sequences as a sparse collection of patch-level spatiotemporal events that are salient in both the space and time domains. Our method uses a multi-scale volumetric representation of video and adaptively selects an optimal space-time scale under which the saliency of a patch is most significant. The input image sequences are first partitioned into non-overlapping patches. Each patch is then represented by a vector of coefficients that can linearly reconstruct the patch from a learned dictionary of basis patches. We propose to measure the spatiotemporal saliency of patches using Shannon's self-information, where a patch's saliency is determined by the information variation in the contents of the patch's spatiotemporal neighborhood. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method.
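A rough sketch of the sparse-coding and self-information scoring is given below; it fits a dictionary with scikit-learn and scores each patch by its self-information under a Gaussian fit to the coefficient distribution, a simplified stand-in for the paper's neighborhood-based measure.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def patch_saliency(patches, n_atoms=64, alpha=1.0):
    """Sparse-code flattened spatiotemporal patches and score each patch by its
    self-information: rarer codes correspond to more salient events."""
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                       transform_algorithm="lasso_lars")
    codes = dico.fit(patches).transform(patches)               # sparse coefficients
    mu, var = codes.mean(axis=0), codes.var(axis=0) + 1e-6     # Gaussian fit to the codes
    log_p = -0.5 * (((codes - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
    return -log_p                                              # self-information per patch

# usage: patches could be flattened space-time blocks, e.g. 8 x 8 x 5 voxels
saliency = patch_saliency(np.random.rand(500, 320))
```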


International Conference on Multimedia and Expo | 2007

Moving Object Tracking from Videos Based on Enhanced Space-Time-Range Mean Shift and Motion Consistency

Tiesheng Wang; Irene Yu-Hua Gu; Andrew G. Backhouse; Pengfei Shi

Video surveillance and object tracking have drawn increasing interest in recent years. This paper addresses the problem of tracking moving objects in image sequences captured by stationary cameras. Building on previous work on video segmentation using joint space-time-range mean shift, we extend the scheme to enable the tracking of moving objects, exploiting large displacements of pdf modes in consecutive image frames. We also improve the mean shift-based video segmentation by introducing edge-guided merging of over-segmented regions, which can be viewed as an extension of the enhanced mean shift 2D image segmentation in [2] to enhanced space-time-range mean shift video segmentation. Experiments have been conducted on several indoor and outdoor videos. Preliminary results and performance evaluation indicate the effectiveness of the proposed scheme.
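The mode-seeking step at the heart of mean shift can be sketched generically as below; the points would be joint (x, y, t, range) samples from a video block, and tracking follows the displacement of the recovered mode between frames. This is a generic sketch, not the full enhanced space-time-range scheme.

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth, iters=30, tol=1e-3):
    """Seek the nearest density mode in a joint feature space with a Gaussian kernel.
    points: (n, d) samples, e.g. (x, y, t, intensity); bandwidth: scalar or per-dimension."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d2 = ((points - x) ** 2 / np.asarray(bandwidth, dtype=float) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2)                                  # Gaussian kernel weights
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()    # weighted mean (mean shift step)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```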


International Conference on Image Processing | 2015

NSLIC: SLIC superpixels based on nonstationarity measure

Shaoyong Jia; Shijie Geng; Yun Gu; Jie Yang; Pengfei Shi; Yu Qiao

Superpixels have become increasingly popular as an image preprocessing step in computer vision applications. In this paper, we propose an improved simple linear iterative clustering (SLIC) superpixel approach based on a nonstationarity measure (NS-M), called nSLIC. An adaptive distance measure is developed in the five-dimensional [labxy] space. nSLIC replaces the predefined fixed compactness parameter with the nonstationarity measure map of each image, which exploits the image content and is therefore adaptive to the color characteristics of the image. This avoids the difficulty of pre-setting the compactness parameter and reduces the number of parameters that need to be set to just one. nSLIC improves not only segmentation quality but also computational efficiency, by achieving faster convergence. Experiments on the BSD500 dataset show that, compared to various popular versions of SLIC, nSLIC adheres better to image edges while producing superpixels that are as regular and compact as possible.
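The adaptive distance measure can be sketched as follows: the fixed SLIC compactness m is replaced by a per-pixel value derived from a nonstationarity map, here approximated by local variance. Both functions are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nslic_distance(lab_p, lab_c, xy_p, xy_c, S, m_local):
    """SLIC-style distance in [labxy] space with a spatially varying compactness:
    the fixed compactness m is replaced by a per-pixel value m_local."""
    d_lab = np.linalg.norm(np.asarray(lab_p) - np.asarray(lab_c))   # color distance
    d_xy = np.linalg.norm(np.asarray(xy_p) - np.asarray(xy_c))      # spatial distance
    return np.sqrt(d_lab ** 2 + (d_xy / S) ** 2 * m_local ** 2)     # S: grid interval

def nonstationarity_map(gray, win=7):
    """A simple local-variance proxy for a nonstationarity measure."""
    g = gray.astype(float)
    mu = uniform_filter(g, win)
    return np.sqrt(np.maximum(uniform_filter(g ** 2, win) - mu ** 2, 0.0))
```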


International Conference on Image Processing | 2015

Reranking of person re-identification by manifold-based approach

Shuai Huang; Yun Gu; Jie Yang; Pengfei Shi

Person re-identification (RE-ID) aims to associate the same pedestrian across non-overlapping surveillance scenes. A large number of approaches have emerged in recent years, mainly focusing on designing mid- or high-level features to highlight the most discriminative aspects of pedestrians. Due to the nonrigid structure of pedestrians, it is difficult to re-identify them with low-level features. We investigate the results of conventional person RE-ID approaches and find that inadequate utilization of low-level features leads to poor performance. In this work, we propose a novel framework that utilizes low-level visual features more effectively: given a result obtained by a conventional person RE-ID method, the framework returns a more reasonable ranking. The framework extends the manifold ranking method, with several adjustments made to meet the requirements of person RE-ID. It is validated through experiments on two person RE-ID datasets (VIPeR and ETHZ), and results on four different conventional approaches show significant improvement.
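The underlying manifold-ranking computation can be sketched with its standard closed form, shown below; the affinity construction and parameters are generic choices, not the paper's RE-ID-specific adjustments.

```python
import numpy as np

def manifold_rerank(features, probe, alpha=0.99, sigma=1.0):
    """Re-rank gallery items by propagating the probe's affinity over the data manifold.
    features: (n, d) low-level descriptors of gallery images; probe: (d,) query descriptor."""
    X = np.vstack([probe, features])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)        # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                         # Gaussian affinity
    np.fill_diagonal(W, 0.0)
    D = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-12))
    S = D @ W @ D                                              # symmetrically normalized affinity
    y = np.zeros(len(X)); y[0] = 1.0                           # the probe is the only labeled node
    f = np.linalg.solve(np.eye(len(X)) - alpha * S, (1 - alpha) * y)
    return np.argsort(-f[1:])                                  # new ranking of gallery indices

# usage: new_order = manifold_rerank(np.random.rand(100, 64), np.random.rand(64))
```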

Collaboration


Dive into Pengfei Shi's collaborations.

Top Co-Authors

Jie Yang (Shanghai Jiao Tong University)
Irene Yu-Hua Gu (Chalmers University of Technology)
Tiesheng Wang (Shanghai Jiao Tong University)
Yun Gu (Shanghai Jiao Tong University)
Keren Fu (Shanghai Jiao Tong University)
Zhifei Xu (Shanghai Jiao Tong University)
Andrew G. Backhouse (Chalmers University of Technology)
Chen Gong (Shanghai Jiao Tong University)
Chenjie Ge (Shanghai Jiao Tong University)
Haoyang Xue (Shanghai Jiao Tong University)