
Publication


Featured research published by Shuai Wan.


IEEE Transactions on Circuits and Systems for Video Technology | 2010

No-Reference Quality Assessment for Networked Video via Primary Analysis of Bit Stream

Fuzheng Yang; Shuai Wan; Qingpeng Xie; Hong Ren Wu

A no-reference (NR) quality measure for networked video is introduced using information extracted from the compressed bit stream without resorting to complete video decoding. This NR video quality assessment measure accounts for three key factors which affect the overall perceived picture quality of networked video, namely, picture distortion caused by quantization, quality degradation due to packet loss and error propagation, and temporal effects of the human visual system. First, the picture quality in the spatial domain is measured, for each frame, relative to quantization under an error-free transmission condition. Second, picture quality is evaluated with respect to packet loss and the subsequent error propagation. The video frame quality in the spatial domain is, therefore, jointly determined by coding distortion and packet loss. Third, a pooling scheme is devised as the last step of the proposed quality measure to capture the perceived quality degradation in the temporal domain. The results obtained by performance evaluations using MPEG-4 coded video streams have demonstrated the effectiveness of the proposed NR video quality metric.
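The three-factor structure described in the abstract (quantization distortion, packet-loss impairment, temporal pooling) can be sketched as follows. This is a hypothetical illustration of the structure only; the scoring functions, coefficients, and pooling weights below are invented assumptions, not the paper's actual model.

```python
# Illustrative sketch only: the 0.015 QP penalty, the 0.8 smoothing factor,
# and the loss-impairment scaling are assumed values, not the published model.

def frame_quality(qp, loss_impairment):
    """Combine coding distortion (driven by the quantization parameter) with
    packet-loss impairment into a single per-frame score on a 0..1 scale."""
    coding_quality = max(0.0, 1.0 - 0.015 * qp)       # assumed linear QP penalty
    return coding_quality * (1.0 - loss_impairment)   # loss scales quality down

def temporal_pooling(frame_scores, alpha=0.8):
    """Recency-weighted pooling: a crude stand-in for the temporal effects of
    the human visual system, where recent impairments dominate perception."""
    pooled = frame_scores[0]
    for q in frame_scores[1:]:
        pooled = alpha * pooled + (1 - alpha) * q  # exponential smoothing
        pooled = min(pooled, q + 0.3)              # sharp quality drops cap the score
    return pooled

# Eight clean frames followed by two frames hit by packet loss.
scores = [frame_quality(28, 0.0)] * 8 + [frame_quality(28, 0.5)] * 2
pooled = temporal_pooling(scores)
```

The pooled score ends up between the clean-frame and lossy-frame levels, reflecting that a burst of loss at the end of a clip weighs heavily but does not erase the earlier clean viewing.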


Pattern Recognition | 2015

Video summarization via minimum sparse reconstruction

Shaohui Mei; Genliang Guan; Zhiyong Wang; Shuai Wan; Mingyi He; David Dagan Feng

The rapid growth of video data demands both effective and efficient video summarization methods so that users are empowered to quickly browse and comprehend a large amount of video content. In this paper, we formulate the video summarization task as a novel minimum sparse reconstruction (MSR) problem. That is, the original video sequence can be best reconstructed with as few selected keyframes as possible. Different from the recently proposed convex relaxation based sparse dictionary selection method, our proposed method utilizes the true sparsity constraint, the L0 norm, instead of the relaxed L2,1 norm, such that keyframes are directly selected as a sparse dictionary that can well reconstruct all the video frames. An on-line version is further developed owing to the real-time efficiency of the proposed MSR principle. In addition, a percentage of reconstruction (POR) criterion is proposed to intuitively guide users in obtaining a summary with an appropriate length. Experimental results on two benchmark datasets with various types of videos demonstrate that the proposed methods outperform the state of the art.

Highlights:
- A minimum sparse reconstruction (MSR) based video summarization (VS) model is constructed.
- An L0 norm based constraint is imposed to ensure real sparsity.
- Two efficient and effective MSR based VS algorithms are proposed for off-line and on-line applications, respectively.
- A scalable strategy is designed to provide flexibility for practical applications.
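The L0-constrained selection idea can be illustrated with a greedy toy sketch: keep adding the keyframe that most reduces total reconstruction error until the sparsity budget is spent. Note a deliberate simplification on my part: each frame is reconstructed from its single best-matching keyframe by scaled projection, rather than by the paper's full sparse-dictionary reconstruction.

```python
# Greedy sketch of keyframe selection under an L0-style sparsity budget.
# Simplified reconstruction (my assumption): one keyframe per frame, scaled.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def residual(frame, key):
    """Squared error after projecting `frame` onto the line spanned by `key`."""
    denom = dot(key, key) or 1e-12
    c = dot(frame, key) / denom
    return sum((f - c * k) ** 2 for f, k in zip(frame, key))

def total_error(frames, keys):
    """Each frame is reconstructed from its best single keyframe."""
    return sum(min(residual(f, k) for k in keys) for f in frames)

def select_keyframes(frames, budget):
    """Greedily add keyframes until the budget (the L0 constraint) is spent."""
    selected = []
    for _ in range(budget):
        best = min((f for f in frames if f not in selected),
                   key=lambda f: total_error(frames, selected + [f]))
        selected.append(best)
    return selected

# Toy 2-D "frames" forming two visual clusters: one keyframe per cluster
# suffices to reconstruct the whole sequence almost perfectly.
frames = [[1.0, 0.1], [1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 1.0]]
keys = select_keyframes(frames, budget=2)
```

With a budget of two, the greedy pass picks one representative per cluster and the residual reconstruction error collapses toward zero, which is the intuition behind selecting keyframes as a sparse dictionary.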


IEEE Signal Processing Letters | 2005

A novel objective no-reference metric for digital video quality assessment

Fuzheng Yang; Shuai Wan; Yilin Chang; Hong Ren Wu

A novel objective no-reference metric is proposed for video quality assessment of digitally coded videos containing natural scenes. Taking account of the temporal dependency between adjacent images of the videos and characteristics of the human visual system, the spatial distortion of an image is predicted using the differences between the corresponding translational regions of high spatial complexity in two adjacent images, which are weighted according to temporal activities of the video. The overall video quality is measured by pooling the spatial distortions of all images in the video. Experiments using reconstructed video sequences indicate that the objective scores obtained by the proposed metric agree well with the subjective assessment scores.
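A rough sketch of the idea above, under stated assumptions: compare only the high-spatial-complexity regions of adjacent frames, and weight the resulting distortion by the frame pair's temporal activity (high motion masks distortion). The complexity measure, threshold, and weighting below are illustrative choices of mine, not the paper's formulation.

```python
# Illustrative simplification: frames are lists of pixel blocks; the
# complexity threshold (5.0) and activity weighting are assumed values.

def block_complexity(block):
    """Spatial complexity as mean absolute deviation from the block mean."""
    m = sum(block) / len(block)
    return sum(abs(p - m) for p in block) / len(block)

def frame_distortion(prev_blocks, curr_blocks, complexity_thresh=5.0):
    """Compare co-located blocks, keeping only those of high spatial
    complexity, then attenuate by temporal activity (motion masking)."""
    diffs, activity = [], 0.0
    for pb, cb in zip(prev_blocks, curr_blocks):
        block_diff = sum(abs(a - b) for a, b in zip(pb, cb)) / len(pb)
        activity += block_diff
        if block_complexity(pb) > complexity_thresh:
            diffs.append(block_diff)
    if not diffs:
        return 0.0
    weight = 1.0 / (1.0 + activity / len(prev_blocks))  # high motion -> more masking
    return weight * sum(diffs) / len(diffs)

def video_score(frames_of_blocks):
    """Pool per-pair spatial distortions over the whole sequence."""
    d = [frame_distortion(a, b)
         for a, b in zip(frames_of_blocks, frames_of_blocks[1:])]
    return sum(d) / max(len(d), 1)
```

Flat (low-complexity) regions contribute nothing, so the metric focuses on the textured regions where coding distortion is perceptually visible.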


IEEE Transactions on Image Processing | 2007

Rate-Distortion Optimized Motion-Compensated Prediction for Packet Loss Resilient Video Coding

Shuai Wan; Ebroul Izquierdo

A rate-distortion optimized motion-compensated prediction method for robust video coding is proposed. In contrast to conventional methods from the literature, the proposed approach uses the expected reconstructed distortion after transmission, instead of the displaced frame difference, in motion estimation. Initially, the end-to-end reconstructed distortion is estimated through a recursive per-pixel estimation algorithm. Then the total bit rate for motion-compensated encoding is predicted using a suitable rate-distortion model. The results are fed into the Lagrangian optimization at the encoder to perform motion estimation. Here, the encoder automatically finds an optimized motion-compensated prediction by estimating the best tradeoff between coding efficiency and end-to-end distortion. Finally, rate-distortion optimization is applied again to estimate the macroblock mode. This process uses the previously selected optimized motion vectors and their corresponding reference frames. It also considers intraprediction. Extensive computer simulations in lossy channel environments were conducted to assess the performance of the proposed method. Selected results for both single and multiple reference frame settings are described. A comparative evaluation against other conventional techniques from the literature was also conducted. Furthermore, the effects of mismatches between the actual channel packet loss rate and the one assumed at the encoder side have been evaluated and reported in this paper.
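The Lagrangian selection step can be sketched in miniature: each motion vector candidate is scored by expected end-to-end distortion plus a rate term, J = E[D] + lambda * R. The candidate set, distortion numbers, and lambda below are invented for illustration, and the paper's recursive per-pixel distortion estimate is replaced here by given expected values.

```python
# Toy sketch of Lagrangian motion-vector selection under packet loss.
# All numbers are illustrative assumptions, not measured values.

def expected_distortion(coding_dist, concealment_dist, loss_rate):
    """End-to-end distortion: error-free coding distortion with probability
    (1 - p), concealment distortion with probability p."""
    return (1 - loss_rate) * coding_dist + loss_rate * concealment_dist

def choose_mv(candidates, loss_rate, lam):
    """candidates: list of (mv, coding_dist, concealment_dist, rate_bits).
    Returns the motion vector minimizing J = E[D] + lambda * R."""
    def cost(c):
        mv, d_coded, d_lost, rate = c
        return expected_distortion(d_coded, d_lost, loss_rate) + lam * rate
    return min(candidates, key=cost)[0]

candidates = [
    ((0, 0), 40.0,  60.0,  4),   # cheap vector that conceals well if lost
    ((3, 1), 10.0, 300.0, 12),   # best match: coding-efficient but fragile
    ((2, 1), 18.0, 100.0, 10),   # compromise between the two
]
```

On an error-free channel the encoder picks the coding-efficient vector; once packet loss is assumed, the expected-distortion term shifts the choice toward vectors that are cheaper to conceal, which is exactly the tradeoff the method optimizes.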


IEEE Communications Magazine | 2012

Bitstream-based quality assessment for networked video: a review

Fuzheng Yang; Shuai Wan

Bitstream-based methods are intuitively suited for quality assessment of networked video services, and are currently under intense investigation in terms of research and standardization activities. This article examines the factors that may affect the quality of networked video, and reviews the state-of-the-art techniques as well as standardization progress in bitstream-based video quality assessment. According to different levels of access to the bitstream, three types of models are described and compared: the parametric, packet-layer, and bitstream-layer models.


Eurasip Journal on Image and Video Processing | 2007

Joint source-channel coding for wavelet-based scalable video transmission using an adaptive turbo code

Naeem Ramzan; Shuai Wan; Ebroul Izquierdo

An efficient approach for joint source and channel coding is presented. The proposed approach exploits the joint optimization of a wavelet-based scalable video coding framework and a forward error correction method based on turbo codes. The scheme minimizes the reconstructed video distortion at the decoder subject to a constraint on the overall transmission bitrate budget. The minimization is achieved by exploiting the source rate distortion characteristics and the statistics of the available codes. Here, the critical problem of estimating the bit error rate probability in error-prone applications is discussed. Aiming at improving the overall performance of the underlying joint source-channel coding, the combination of the packet size, interleaver, and channel coding rate is optimized using Lagrangian optimization. Experimental results show that the proposed approach outperforms conventional forward error correction techniques at all bit error rates. It also significantly improves the performance of end-to-end scalable video transmission at all channel bit rates.
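The source/channel tradeoff at the core of the approach can be sketched numerically: spending bits on channel protection shrinks the source rate but cuts the probability of a decoding failure. The residual-error figures, distortion model, and failure cost below are my illustrative assumptions, not the paper's measured turbo-code statistics.

```python
# Hedged sketch of choosing a channel code rate for a fixed total bit budget.
# RESIDUAL_ERR maps code rate -> assumed post-decoding error probability at
# one fixed channel condition; these numbers are invented for illustration.
RESIDUAL_ERR = {1.0: 0.20, 0.75: 0.05, 0.5: 0.005, 0.33: 0.0005}

def expected_distortion(total_rate, code_rate, d0=1000.0, d_fail=5000.0):
    """Source gets total_rate * code_rate bits. Source distortion is modeled
    as d0 / source_rate; a residual decoding failure costs d_fail."""
    source_rate = total_rate * code_rate
    p = RESIDUAL_ERR[code_rate]
    return (1 - p) * (d0 / source_rate) + p * d_fail

def best_code_rate(total_rate):
    """Pick the protection level minimizing expected end-to-end distortion."""
    return min(RESIDUAL_ERR, key=lambda r: expected_distortion(total_rate, r))
```

At the assumed (harsh) channel condition, heavy protection wins despite the reduced source rate, because residual decoding failures dominate the expected distortion. The paper's joint optimization additionally tunes packet size and interleaver choice via the same Lagrangian machinery.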


IEEE Journal of Selected Topics in Signal Processing | 2012

Content-Adaptive Packet-Layer Model for Quality Assessment of Networked Video Services

Fuzheng Yang; Jiarun Song; Shuai Wan; Hong Ren Wu

Packet-layer models are designed to use only the information provided by packet headers for real-time and non-intrusive quality monitoring of networked video services. This paper proposes a content-adaptive packet-layer (CAPL) model for networked video quality assessment. Considering the fact that the quality degradation of a networked video significantly relies on the temporal as well as the spatial characteristics of the video content, temporal complexity is incorporated in the proposed model. Due to very limited information directly available from packet headers, a simple and adaptive method for frame type detection is adopted in the CAPL model. The temporal complexity is estimated using the ratio of the number of bits for coding P and I frames. The estimated temporal complexity and frame type are incorporated in the CAPL model together with the information about the number of bits and positions of lost packets to obtain the quality estimate for each frame, by evaluating the distortions induced by both compression and packet loss. A two-level temporal pooling is employed to obtain the video quality given the frame quality. Using content related information, the proposed model is able to adapt to different video contents. Experimental results show that the CAPL model significantly outperforms the G.1070 model and the DT model in terms of widely used performance criteria, including the Root-Mean-Squared Error (RMSE), the Pearson Correlation Coefficient (PCC), the Spearman Rank Order Correlation Coefficient (SCC), and the Outlier Ratio (OR).
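The packet-header-only pipeline described above can be sketched roughly as follows. The scoring formulas and constants are illustrative stand-ins of mine; the actual CAPL model's coefficients are trained on subjective data.

```python
# Rough sketch: only bits-per-frame, frame type, and loss flags are available,
# mirroring what packet headers expose. All formulas below are assumed.

def temporal_complexity(p_frame_bits, i_frame_bits):
    """High P/I bit ratio suggests high motion: P frames need many bits when
    prediction from the previous frame works poorly."""
    p_avg = sum(p_frame_bits) / len(p_frame_bits)
    i_avg = sum(i_frame_bits) / len(i_frame_bits)
    return p_avg / i_avg

def frame_score(bits, lost, tc):
    """Compression quality grows with bits; a lost packet hurts more in
    high-motion content, where error concealment fails."""
    base = min(1.0, bits / 2000.0)            # assumed saturation at 2000 bits
    return base * (0.3 / (1 + tc)) if lost else base

def sequence_score(frames, tc):
    """frames: list of (bits, lost). The paper's two-level temporal pooling
    is collapsed to a simple mean in this sketch."""
    return sum(frame_score(b, l, tc) for b, l in frames) / len(frames)
```

A sequence containing lost packets scores below its loss-free counterpart, and the same loss is penalized more when the P/I bit ratio indicates high motion, which is the content-adaptive behavior the model targets.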


Signal Processing-image Communication | 2009

Frame layer rate control for H.264/AVC with hierarchical B-frames

Ming Li; Yilin Chang; Fuzheng Yang; Shuai Wan; Sixin Lin; Lianhuan Xiong

Hierarchical B-frames contribute to improvement of coding performance when introduced into H.264/AVC. However, the existing rate control schemes for H.264/AVC, which are mainly applied to IPPP and IBBP coding structures, cannot work efficiently for the coding structures with hierarchical B-frames. In this paper, a frame layer rate control scheme for hierarchical B-frames is proposed. Firstly, an adaptive starting quantization parameter (QP) determination method is implemented to derive the QP for the first coding frame based on the available channel bit rate and the content of the current video sequence. Then, the target bit budget for a group of pictures (GOP) is calculated based on the target bit rate and the buffer status. Afterwards, a temporal level (TL) layer rate control phase is introduced, and the GOP layer target bit budget is allocated to each TL. In the frame layer rate control phase, a method based on a rate-distortion model and the coding properties of the previous coded key frames is derived to determine the QP for the current key frame. For hierarchical B-frames, we introduce a typical weighting factor in the determination of their target bit budgets to address the features of the hierarchical coding structures. This weighting factor is calculated according to the target bit budget of the GOP layer and the knowledge obtained from the previous coded B-frames in each TL. Subsequently, the QP for coding the current B-frame is computed by a quadratic model with different model parameters for different TLs, and the computed QP is further adaptively adjusted according to the usage of the target bit budgets. After coding the current frame, an update stage, in which a threshold-based method is integrated to avoid model degradation, is invoked to update the parameters for rate control. 
Experimental results demonstrate that when the proposed rate control scheme is applied to the coding structure with hierarchical B-frames in H.264/AVC, the actual coding bit rates can match the target bit rates very well, and the encoding performance is also improved.
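The quadratic-model step mentioned above can be made concrete: given a frame's target bit budget and complexity (MAD), solve R = X1*MAD/Q + X2*MAD/Q^2 for the quantization step Q. The model parameters are per-temporal-level in the paper; the numbers in this sketch are invented for illustration.

```python
import math

def solve_q(target_bits, mad, x1, x2):
    """Solve target = x1*mad/q + x2*mad/q**2 for q, i.e. the positive root of
    target*q**2 - x1*mad*q - x2*mad = 0."""
    a = target_bits
    b = -x1 * mad
    c = -x2 * mad
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def clamp_qp(qp, prev_qp, max_step=2):
    """Limit QP changes between frames, a common smoothing step in rate
    control (the step size here is an assumed value)."""
    return max(prev_qp - max_step, min(prev_qp + max_step, qp))
```

A smaller target budget yields a larger quantization step, and the clamp keeps frame-to-frame quality fluctuations bounded, matching the adaptive QP adjustment the scheme describes.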


Optical Engineering | 2005

Gradient-threshold edge detection based on the human visual system

Fuzheng Yang; Yilin Chang; Shuai Wan

We present an improved method that is suitable for gradient-threshold edge detectors. The method takes into account basic characteristics of the human visual system, masking the gradient image with the luminance and activity of the local image before edge labeling. An implementation of this method on a Canny detector is described as an example. The results show that the edge images obtained by our algorithm are more consistent with perceived edges.
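A minimal sketch of the masking step, under my own assumed masking functions: attenuate the gradient image where local luminance or local activity would visually mask an edge, then threshold. The full Canny pipeline (smoothing, non-maximum suppression, hysteresis) is omitted.

```python
# Images are lists of rows of floats. The two masking curves below are
# illustrative monotone functions, not the paper's HVS model.

def local_mean(img, x, y, r=1):
    """Mean over the (2r+1) x (2r+1) neighborhood, clipped at the borders."""
    vals = [img[j][i]
            for j in range(max(0, y - r), min(len(img), y + r + 1))
            for i in range(max(0, x - r), min(len(img[0]), x + r + 1))]
    return sum(vals) / len(vals)

def mask_gradient(grad, lum, activity):
    """Attenuate gradients where bright luminance or busy texture would mask
    edges, before any threshold is applied."""
    h, w = len(grad), len(grad[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lum_mask = 1.0 / (1.0 + local_mean(lum, x, y) / 255.0)  # brighter -> stronger masking
            act_mask = 1.0 / (1.0 + local_mean(activity, x, y))     # busier -> stronger masking
            out[y][x] = grad[y][x] * lum_mask * act_mask
    return out

def label_edges(masked_grad, thresh):
    return [[1 if g > thresh else 0 for g in row] for row in masked_grad]
```

The same gradient magnitude thus survives the threshold in a dark, flat region but is suppressed in a bright or textured one, which is the perceptual behavior the method aims for.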


IEEE Transactions on Multimedia | 2016

QoE Evaluation of Multimedia Services Based on Audiovisual Quality and User Interest

Jiarun Song; Fuzheng Yang; Yicong Zhou; Shuai Wan; Hong Ren Wu

Quality of experience (QoE) has a significant influence on whether or not a user will choose a service or product in this competitive era. For multimedia services, various factors in the communication ecosystem act on users together, stimulating different senses and inducing multidimensional perceptions of the services, which inevitably increases the difficulty of measuring and estimating users' QoE. In this paper, a user-centric objective QoE evaluation model (the QAVIC model for short) is proposed to estimate a user's overall QoE for audiovisual services, which takes account of perceptual audiovisual quality (QAV) and user interest in audiovisual content (IC) amongst the influencing factors on QoE such as technology, content, context, and user in the communication ecosystem. To predict user interest, a number of general viewing behaviors are considered to formulate the IC evaluation model. Subjective tests have been conducted for training and validation of the QAVIC model. The experimental results show that the proposed QAVIC model can estimate users' QoE reasonably accurately using a 5-point absolute category rating scale.
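The combination of the two named factors can be illustrated with a toy blend on the 5-point scale. To be clear, the behavioral features, weights, and clamping below are purely my assumptions for illustration; the trained QAVIC coefficients come from the paper's subjective tests.

```python
# Purely illustrative: blend perceptual audiovisual quality (QAV) and user
# interest in content (IC), both on the 5-point ACR scale. All coefficients
# are assumed values, not the trained model.

def interest_score(watch_ratio, replays, skips):
    """Map simple viewing behaviors to an interest score clamped to [1, 5]."""
    raw = 1.0 + 4.0 * watch_ratio + 0.5 * replays - 0.5 * skips
    return max(1.0, min(5.0, raw))

def qoe(qav, ic, w=0.7):
    """Weighted blend of quality and interest on the 5-point scale."""
    return w * qav + (1 - w) * ic
```

A user who watched the whole clip without skipping gets the maximum interest score, which lifts the overall QoE above the audiovisual quality score alone.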

Collaboration


Dive into Shuai Wan's collaborations.

Top Co-Authors

- Kaifang Yang, Northwestern Polytechnical University
- Yanchao Gong, Northwestern Polytechnical University
- Shaohui Mei, Northwestern Polytechnical University
- Mingyang Ma, Northwestern Polytechnical University
- Ebroul Izquierdo, Queen Mary University of London
- Yan Feng, Northwestern Polytechnical University