
Publication


Featured research published by Shengping Zhang.


International Conference on Image Processing | 2008

Dynamic background modeling and subtraction using spatio-temporal local binary patterns

Shengping Zhang; Hongxun Yao; Shaohui Liu

Traditional background modeling and subtraction methods assume that scenes have a static structure with limited perturbation, and they perform poorly in dynamic scenes. In this paper, we present a solution to this problem. We first extend local binary patterns from the spatial domain to the spatio-temporal domain and present a new online dynamic texture extraction operator, named spatio-temporal local binary patterns (STLBP). We then present a novel and effective method for dynamic background modeling and subtraction using STLBP, in which each pixel is modeled as a group of STLBP dynamic texture histograms that combine spatial texture and temporal motion information. Experimental results show that, compared with traditional methods, the proposed method adapts quickly to changes in dynamic backgrounds, achieves accurate detection of moving objects, and suppresses most false detections caused by dynamic changes in natural scenes.
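A minimal sketch of the STLBP idea in Python (the neighborhood layout, the bit packing, and the blending rate below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def stlbp_codes(frames, t):
    """Illustrative spatio-temporal LBP: threshold the 8 spatial neighbors
    in frame t and the co-located pixels in frames t-1 and t+1 against the
    center pixel, packing the comparisons into a 10-bit code."""
    prev, cur, nxt = frames[t - 1], frames[t], frames[t + 1]
    h, w = cur.shape
    codes = np.zeros((h, w), dtype=np.uint16)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    c = cur[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):  # 8 spatial bits
        nb = cur[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes[1:-1, 1:-1] |= (nb >= c).astype(np.uint16) << bit
    # 2 temporal bits from the previous and next frames
    codes[1:-1, 1:-1] |= (prev[1:-1, 1:-1] >= c).astype(np.uint16) << 8
    codes[1:-1, 1:-1] |= (nxt[1:-1, 1:-1] >= c).astype(np.uint16) << 9
    return codes

def update_model(model_hist, new_hist, alpha=0.05):
    """Blend the new STLBP histogram into the per-pixel background model
    (alpha is an assumed learning rate)."""
    return (1 - alpha) * model_hist + alpha * new_hist
```

Per-pixel histograms of these codes over a local region then serve as the dynamic texture model that is matched and updated frame by frame.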


International Journal of Pattern Recognition and Artificial Intelligence | 2009

Dynamic Background Subtraction Based on Local Dependency Histogram

Shengping Zhang; Hongxun Yao; Shaohui Liu

Traditional background subtraction methods perform poorly when scenes contain dynamic backgrounds such as waving trees, spouting fountains, illumination changes, and camera jitter. In this paper, a novel and effective dynamic background subtraction method is presented, with three contributions. First, we present a novel local dependency descriptor, called the local dependency histogram (LDH), to effectively model the spatial dependencies between a pixel and its neighboring pixels; these spatial dependencies carry substantial evidence for dynamic background subtraction. Second, based on the proposed LDH, an effective approach to dynamic background subtraction is proposed in which each pixel is modeled as a group of weighted LDHs. A pixel is labeled as foreground or background by comparing the new LDH computed in the current frame against its model LDHs, and the model LDHs are adaptively updated with the new LDH. Finally, unlike traditional approaches, which use a fixed threshold to decide whether a pixel matches its model, an adaptive thresholding technique is proposed. Experimental results on a diverse set of dynamic scenes show that the proposed method significantly outperforms traditional methods for dynamic background subtraction.
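A rough sketch of the descriptor and the matching step (the joint-histogram construction, the 4-neighborhood, and the intersection measure are simplified assumptions):

```python
import numpy as np

def local_dependency_histogram(patch, bins=8):
    """Illustrative LDH: a joint histogram of quantized (center, neighbor)
    gray-level pairs over a small patch, capturing spatial dependencies."""
    q = (patch.astype(np.int32) * bins) // 256  # quantize to `bins` levels
    h, w = q.shape
    center = q[1:-1, 1:-1]
    hist = np.zeros((bins, bins))
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:  # 4-neighborhood
        nb = q[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        np.add.at(hist, (center.ravel(), nb.ravel()), 1)
    return hist.ravel() / hist.sum()

def matches_background(new_ldh, model_ldhs, threshold):
    """Histogram intersection against each model LDH; the pixel is
    background if the best-matching model is similar enough."""
    sims = [np.minimum(new_ldh, m).sum() for m in model_ldhs]
    best = int(np.argmax(sims))
    return sims[best] >= threshold, best
```

In the paper the threshold itself adapts over time; here it is passed in as a fixed parameter for brevity.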


International Conference on Pattern Recognition | 2008

A covariance-based method for dynamic background subtraction

Shengping Zhang; Hongxun Yao; Shaohui Liu; Xilin Chen; Wen Gao

Background subtraction in dynamic scenes is an important and challenging task. In this paper, we present a novel and effective method for dynamic background subtraction based on the covariance matrix descriptor. The algorithm integrates two distinct levels: the pixel level and the region level. At the pixel level, spatial properties obtained from pixel coordinates and appearance properties (intensity, texture, gradient, etc.) are used as features of each pixel. At the region level, the correlation among the features extracted at the pixel level is represented by a covariance matrix computed over a rectangular region around the pixel. Each pixel is then modeled as a group of weighted adaptive covariance matrices. Experimental results on a diverse set of dynamic scenes show that the proposed method dramatically outperforms traditional methods for dynamic background subtraction.
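A minimal sketch of a region covariance descriptor and a common way to compare two such matrices (the feature set and the generalized-eigenvalue distance are standard choices for covariance descriptors, assumed here rather than taken from the paper):

```python
import numpy as np
from scipy.linalg import eigvalsh

def region_covariance(gray, y0, y1, x0, x1):
    """Covariance descriptor of a rectangular region: each pixel contributes
    a feature vector (x, y, intensity, |dI/dx|, |dI/dy|)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    ys, xs = np.mgrid[y0:y1, x0:x1]
    feats = np.stack([xs.ravel(), ys.ravel(),
                      gray[y0:y1, x0:x1].astype(np.float64).ravel(),
                      np.abs(gx[y0:y1, x0:x1]).ravel(),
                      np.abs(gy[y0:y1, x0:x1]).ravel()])
    return np.cov(feats)  # 5x5 covariance of the per-pixel features

def covariance_distance(c1, c2, eps=1e-6):
    """Distance via generalized eigenvalues, a common metric for covariance
    descriptors (the paper's exact matching rule may differ); eps adds a
    small jitter to keep both matrices positive definite."""
    d = c1.shape[0]
    lam = eigvalsh(c1 + eps * np.eye(d), c2 + eps * np.eye(d))
    return np.sqrt(np.sum(np.log(lam) ** 2))
```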


International Conference on Multimedia and Expo | 2009

Spatial-temporal nonparametric background subtraction in dynamic scenes

Shengping Zhang; Hongxun Yao; Shaohui Liu

Traditional background subtraction methods model only the temporal variation of each pixel. However, there is also spatial variation in the real world due to dynamic backgrounds such as waving trees, spouting fountains, and camera jitter, which causes significant performance degradation in traditional methods. In this paper, a novel spatial-temporal nonparametric background subtraction approach (STNBS) is proposed that effectively handles dynamic backgrounds by modeling the spatial and temporal variations simultaneously. Specifically, for each pixel in an image, we adaptively maintain a sample set consisting of pixel values observed in previous frames. At the current frame, for a particular pixel, the proposed method estimates the probabilities of observing that pixel based on the sample sets of its neighboring pixels; the pixel is labeled as background if any of these estimated probabilities is larger than a fixed threshold. All sample sets are adaptively updated over time. Experimental results on several challenging sequences show that the proposed method outperforms two state-of-the-art algorithms.
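A minimal sketch of the per-pixel decision rule (the Gaussian kernel bandwidth, the threshold value, and the sample-set bookkeeping are illustrative assumptions):

```python
import numpy as np

def kde_prob(value, samples, sigma=10.0):
    """Nonparametric (Gaussian kernel density) estimate of observing
    `value`, given a 1-D sample set of past intensities at one pixel."""
    k = np.exp(-((samples - value) ** 2) / (2 * sigma ** 2))
    return k.mean() / (np.sqrt(2 * np.pi) * sigma)

def is_background(frame, samples, y, x, radius=1, threshold=1e-3):
    """Label pixel (y, x) as background if the KDE built from the sample
    set of ANY neighboring pixel explains its current value."""
    h, w = frame.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # samples has shape (h, w, n): n past values per pixel
                if kde_prob(float(frame[y, x]), samples[ny, nx]) > threshold:
                    return True
    return False
```

Checking against neighboring pixels' samples, rather than only the pixel's own history, is what lets spatial background motion (a branch swaying into the pixel) still be explained as background.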


International Conference on Acoustics, Speech, and Signal Processing | 2010

Robust visual tracking using feature-based visual attention

Shengping Zhang; Hongxun Yao; Shaohui Liu

Psychophysical findings show that the human visual system can improve target search by enhancing the representation of image components related to the searched target, an ability known as feature-based visual attention. In this paper, motivated by these findings, we propose a robust visual tracking algorithm that simulates such feature-based visual attention. Specifically, we use general sparse basis functions, extracted from a large set of natural image patches, as features. We define a feature as related to the target when successive activations of that feature do not increase the system's entropy. The target is then represented by the probability distribution of these related features, and the target search is performed by minimizing the Matusita distance between the distributions of the target model and the candidate using Newton-style iterations. Experimental results verify that the proposed method is more robust and effective than widely used mean-shift-based methods.
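The Matusita distance used in the search has a simple closed form; a small sketch (representing the feature distributions as normalized histograms is an assumption for illustration, and the Newton-style search itself is omitted):

```python
import numpy as np

def matusita_distance(p, q):
    """Matusita distance between two discrete distributions:
    sqrt(sum_i (sqrt(p_i) - sqrt(q_i))^2). Note M^2 = 2 - 2*BC, where BC is
    the Bhattacharyya coefficient, so minimizing it maximizes BC."""
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```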


Visual Communications and Image Processing | 2010

Partial occlusion robust object tracking using an effective appearance model

Shengping Zhang; Hongxun Yao; Shaohui Liu

Partial occlusion is one of the most challenging problems in object tracking. In this paper, we present an approach that addresses it with an effective appearance model containing two innovations. First, in contrast to the widely used color histogram, which models the appearance of an object using only color information, we assert that both color and texture are important cues for tracking, especially against complex backgrounds. We therefore propose a novel local descriptor, named the local color texture pattern (LCTP), to model the appearance of the object with color and texture information simultaneously. Second, the global color histogram completely ignores the spatial layout of an object and is sensitive to partial occlusion. We overcome this limitation with a block-division scheme: (1) the target is divided into multiple blocks, each represented by an LCTP histogram; (2) a selectivity strategy chooses the blocks that are not occluded and combines the similarities of the selected blocks into the final similarity measure. Experimental results demonstrate that the proposed method is more robust to partial occlusion than two state-of-the-art algorithms.
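A rough sketch of the block-wise selectivity strategy (the Bhattacharyya similarity and the fixed keep-fraction below are illustrative assumptions):

```python
import numpy as np

def block_similarities(target_hists, candidate_hists):
    """Bhattacharyya similarity per block between the target model and a
    candidate region, each described by one normalized histogram per block."""
    return np.array([np.sum(np.sqrt(t * c))
                     for t, c in zip(target_hists, candidate_hists)])

def occlusion_robust_similarity(target_hists, candidate_hists, keep=0.6):
    """Selectivity strategy (illustrative): keep only the best-matching
    fraction of blocks, assuming the rest may be occluded, and average
    their similarities into the final measure."""
    sims = block_similarities(target_hists, candidate_hists)
    k = max(1, int(keep * len(sims)))
    return float(np.sort(sims)[-k:].mean())
```

Discarding the worst-matching blocks is what keeps an occluder covering part of the target from dragging down the overall similarity.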


International Conference on Multimedia and Expo | 2017

Deep networks for compressed image sensing

Wuzhen Shi; Feng Jiang; Shengping Zhang; Debin Zhao

The compressed sensing (CS) theory has been successfully applied to image compression in the past few years, as most image signals are sparse in a certain domain. Several CS reconstruction models have recently been proposed and have obtained superior performance. However, two important challenges remain within CS theory: how to design a sampling mechanism that achieves optimal sampling efficiency, and how to perform reconstruction that recovers the signal with the highest quality. In this paper, we address both problems with a deep network. First, instead of a traditional hand-designed sampling matrix, we train the sampling matrix as part of the network, which is better suited to our deep-network-based reconstruction process. Then, we propose a deep network that recovers the image by imitating the traditional compressed sensing reconstruction process. Experimental results demonstrate that our deep-network-based CS reconstruction method offers a significant quality improvement over state-of-the-art methods.
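A minimal PyTorch sketch of the two ideas: a learned block-wise sampling matrix implemented as a stride-B convolution, and a reconstruction network. The layer sizes and the residual refinement are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class DeepCS(nn.Module):
    """Sketch of deep compressed sensing: learned sampling + reconstruction."""
    def __init__(self, block=32, ratio=0.25):
        super().__init__()
        m = int(ratio * block * block)  # measurements per BxB block
        # Learned sampling matrix: one linear map per block, no bias.
        self.sample = nn.Conv2d(1, m, block, stride=block, bias=False)
        # Initial linear reconstruction back to image space.
        self.init_rec = nn.ConvTranspose2d(m, 1, block, stride=block, bias=False)
        # Small CNN refining the initial reconstruction (assumed depth/width).
        self.refine = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, x):
        y = self.sample(x)            # block-wise measurements
        x0 = self.init_rec(y)         # initial reconstruction
        return x0 + self.refine(x0)   # residual refinement

model = DeepCS()
out = model(torch.randn(1, 1, 96, 96))  # input size must be a multiple of `block`
```

Because the sampling convolution is trained jointly with the reconstruction layers, the measurement matrix adapts to whatever structure the decoder can best exploit.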


Data Compression Conference | 2017

Convolutional Neural Networks Based Intra Prediction for HEVC

Wenxue Cui; Tao Zhang; Shengping Zhang; Feng Jiang; Wangmeng Zuo; Zhaolin Wan; Debin Zhao

Traditional intra prediction methods for HEVC rely on the nearest reference lines to predict a block, ignoring the much richer context between the current block and its neighboring blocks; this causes inaccurate prediction, especially when the spatial correlation between the current block and the reference lines is weak. To overcome this problem, an intra prediction convolutional neural network (IPCNN) is proposed, which exploits the rich context of the current block and is therefore capable of improving prediction accuracy. Meanwhile, the reconstruction of the three nearest blocks can also be refined. To the best of our knowledge, this is the first paper that directly applies CNNs to intra prediction for HEVC. Experimental results validate the effectiveness of applying CNNs to intra prediction: the proposed method achieves a 0.70% bitrate reduction compared to the HEVC reference software HM-14.0.
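A speculative PyTorch sketch of the input/output convention such a network could use (the context-patch layout, depth, and residual design are all assumptions; the paper defines its own architecture):

```python
import torch
import torch.nn as nn

class IPCNN(nn.Module):
    """Sketch: the input is a 2Bx2B context patch containing the three
    reconstructed neighboring blocks (above-left, above, left), with the
    current BxB region pre-filled by a conventional prediction; the network
    outputs a refined patch, so the neighbors are refined as well."""
    def __init__(self, channels=64, layers=6):
        super().__init__()
        body = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            body += [nn.Conv2d(channels, channels, 3, padding=1),
                     nn.ReLU(inplace=True)]
        body += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*body)

    def forward(self, context):
        return context + self.body(context)  # residual refinement

B = 8
patch = torch.randn(1, 1, 2 * B, 2 * B)   # context patch around the block
pred = IPCNN()(patch)[..., B:, B:]        # bottom-right BxB: the predicted block
```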


Data Compression Conference | 2017

An End-to-End Compression Framework Based on Convolutional Neural Networks

Wen Tao; Feng Jiang; Shengping Zhang; Jie Ren; Wuzhen Shi; Wangmeng Zuo; Xun Guo; Debin Zhao

Traditional image coding standards (such as JPEG and JPEG 2000) leave the decoded image with blocking artifacts and noise because of the large quantization steps used. To overcome this problem, we propose an end-to-end compression framework based on two CNNs: one (CrCNN) produces a compact representation for encoding with a third-party coding standard, and the other (ReCNN) reconstructs the decoded image. To make the two CNNs collaborate effectively, we develop a unified end-to-end learning framework that learns CrCNN and ReCNN simultaneously, so that the compact representation obtained by CrCNN preserves the structural information of the image. This facilitates accurate reconstruction of the decoded image by ReCNN and also makes the proposed compression framework compatible with existing image coding standards.
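A minimal PyTorch sketch of the two-network layout. The layer configurations are assumptions, and the non-differentiable third-party codec that sits between the networks is simply omitted here (the paper handles it with its unified learning scheme):

```python
import torch
import torch.nn as nn

class CrCNN(nn.Module):
    """Compact-representation network: downsamples the image to a smaller
    one that a standard codec (JPEG, etc.) would then encode."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class ReCNN(nn.Module):
    """Reconstruction network: upscales the decoded compact image back to
    full resolution and restores detail with a residual CNN."""
    def __init__(self):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bicubic',
                              align_corners=False)
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, x):
        x = self.up(x)
        return x + self.net(x)

# Joint training sketch with the codec stage omitted (an assumption).
cr, re = CrCNN(), ReCNN()
x = torch.randn(4, 1, 64, 64)
loss = nn.functional.mse_loss(re(cr(x)), x)
loss.backward()
```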


ACM Multimedia | 2018

An Efficient Deep Quantized Compressed Sensing Coding Framework of Natural Images

Wenxue Cui; Feng Jiang; Xinwei Gao; Shengping Zhang; Debin Zhao

Traditional image compressed sensing (CS) coding frameworks solve an inverse problem based on measurement coding tools (prediction, quantization, entropy coding, etc.) and an optimization-based image reconstruction method. These frameworks face the challenge of improving coding efficiency at the encoder while suffering from high computational complexity at the decoder. In this paper, we go a step further and propose a novel deep-network-based CS coding framework for natural images, which consists of three sub-networks: a sampling sub-network, an offset sub-network, and a reconstruction sub-network, responsible for sampling, quantization, and reconstruction, respectively. By cooperatively utilizing these sub-networks, the framework can be trained end to end with a proposed rate-distortion optimization loss function. The proposed framework not only improves coding performance but also dramatically reduces the computational cost of image reconstruction. Experimental results on benchmark datasets demonstrate that the proposed method achieves superior rate-distortion performance compared with state-of-the-art methods.
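A speculative PyTorch sketch of the three-sub-network layout (the straight-through quantizer, the offset design, and all layer sizes are assumptions, not the paper's specification):

```python
import torch
import torch.nn as nn

class DQCSNet(nn.Module):
    """Sketch: sampling sub-network (learned block-wise measurements),
    offset sub-network (compensates quantization error in the measurement
    domain), and reconstruction sub-network."""
    def __init__(self, block=32, ratio=0.25):
        super().__init__()
        m = int(ratio * block * block)
        self.sample = nn.Conv2d(1, m, block, stride=block, bias=False)
        self.offset = nn.Sequential(
            nn.Conv2d(m, m, 1), nn.ReLU(inplace=True), nn.Conv2d(m, m, 1))
        self.recon = nn.Sequential(
            nn.ConvTranspose2d(m, 1, block, stride=block, bias=False),
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, x, step=1.0):
        y = self.sample(x)
        # Straight-through quantization: round in the forward pass,
        # identity gradient in the backward pass.
        q = y + (torch.round(y / step) * step - y).detach()
        q = q + self.offset(q)   # compensate quantization error
        return self.recon(q)

net = DQCSNet()
x = torch.randn(2, 1, 64, 64)
x_hat = net(x)
loss = nn.functional.mse_loss(x_hat, x)  # the paper adds a rate term for R-D training
loss.backward()
```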

Collaboration


Dive into Shengping Zhang's collaboration.

Top Co-Authors

Hongxun Yao
Harbin Institute of Technology

Shaohui Liu
Harbin Institute of Technology

Debin Zhao
Harbin Institute of Technology

Feng Jiang
Harbin Institute of Technology

Wenxue Cui
Harbin Institute of Technology

Wangmeng Zuo
Harbin Institute of Technology

Wuzhen Shi
Harbin Institute of Technology

Jie Ren
Harbin Institute of Technology