Publication


Featured research published by Dekun Zou.


Proceedings of SPIE | 2009

H.264/AVC substitution watermarking: a CAVLC example

Dekun Zou; Jeffrey Adam Bloom

This work addresses the watermarking of an entropy coded H.264/AVC video stream. The phrase Substitution Watermarking is used to imply that the application of the watermark to the stream is accomplished by substituting an original block of bits in the entropy-encoded stream with an alternative block of bits. This substitution is done for many different blocks of bits to embed the watermark. This can be a particularly powerful technique for applications in which the embedder must be very simple (substitution is a very light operation) and a computationally complex, pre-embedding analysis is practical. The pre-embedding analysis can generate a substitution table and the embedder can simply select entries from the table based on the payload. This paper presents the framework along with an example for H.264/AVC streams that use CAVLC for entropy coding. A separate paper addresses the CABAC entropy coding case.
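The substitution step itself can be made concrete with a short sketch. The table format, field names, and bit-level representation below are assumptions for illustration only; they stand in for whatever the pre-embedding analysis actually produces.

```python
# Hypothetical sketch of substitution-based watermark embedding in an
# entropy-coded stream. A pre-embedding analysis is assumed to have produced,
# for each embeddable location, the bit offset and length of the replaceable
# block plus two equal-length alternative blocks (one per payload bit value).
# All names and the table layout are illustrative, not the paper's format.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SubstitutionEntry:
    bit_offset: int                             # where the replaceable block starts
    length: int                                 # block length in bits
    alternatives: Tuple[List[int], List[int]]   # (bits for payload 0, bits for payload 1)

def embed_payload(stream_bits: List[int],
                  table: List[SubstitutionEntry],
                  payload: List[int]) -> List[int]:
    """Overwrite one table entry per payload bit; the stream length never changes."""
    out = list(stream_bits)
    for entry, bit in zip(table, payload):
        replacement = entry.alternatives[bit]
        assert len(replacement) == entry.length
        out[entry.bit_offset:entry.bit_offset + entry.length] = replacement
    return out
```

Because each replacement block has exactly the same length as the block it overwrites, the embedder never re-runs entropy coding, which is what keeps the embedding operation so light.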


International Symposium on Multimedia | 2011

Support Vector Regression Based Video Quality Prediction

Beibei Wang; Dekun Zou; Ran Ding

To measure the quality of experience (QoE) of a video, current approaches to objective quality metric development focus on designing a video quality model that accounts for extracted features and models the Human Visual System (HVS). However, metrics that try to model the HVS face the fact that the HVS is too complicated, and too poorly understood, to model. In this paper, instead of modeling objective quality with explicit functions, we propose to build a video quality metric using support vector machine (SVM) supervised learning. The proposed SVM-based video quality prediction approximates NTIA-VQM and MOS values much more closely than the previous G.1070-based video quality prediction. We further investigate which features can be used efficiently as SVM input variables.
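As a rough illustration of the learning-based approach (not the authors' implementation), bitstream features can be regressed onto subjective scores with an off-the-shelf SVR; the feature columns and training values below are placeholders.

```python
# Illustrative support-vector-regression setup for video quality prediction.
# Feature columns (bitrate, frame rate, packet-loss rate, a motion/complexity
# measure) and the training targets (e.g., MOS or NTIA-VQM scores) are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per video clip, columns are extracted features; y: subjective scores.
X_train = np.array([[2000, 30, 0.0, 0.4],
                    [ 800, 25, 0.5, 0.7],
                    [ 400, 15, 2.0, 0.9]])
y_train = np.array([4.3, 3.1, 1.8])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)

# Predict quality for a new clip described by the same features.
print(model.predict([[1200, 30, 0.2, 0.5]]))
```

In practice the feature set and kernel parameters would be selected by cross-validating against the target quality scores.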


International Conference on Multimedia and Expo | 2009

Compressed video stream watermarking for peer-to-peer based content distribution network

Dekun Zou; Nicolas Prigent; Jeffrey A. Bloom

Peer-to-peer content distribution provides high network throughput with relatively low server cost and scales better than traditional content distribution networks with respect to the number of clients. It is ideal for applications requiring large data transfer such as Video-on-Demand or live video streaming. Traditional forensic watermarking systems are based on the assumption that each user receives a unique copy of video content to allow for tracking of pirated content back to the original recipient. However, P2P-based distribution systems are typically designed to supply identical copies to each user. This paper proposes a new watermarking framework that is suitable for P2P-based content delivery systems. Watermarking algorithms that are appropriate for this system are also discussed.


International Conference on Multimedia and Expo | 2011

Extending G.1070 for video quality monitoring

Niranjan D. Narvekar; Tao Liu; Dekun Zou; Jeffrey A. Bloom

In 2007, the International Telecommunication Union (ITU) standardized a multimedia quality assessment model as ITU-T Recommendation G.1070. The video quality estimation model proposed in this document uses the encoded bit rate and frame rate of the compressed video, along with the expected packet loss rate of the channel, to predict the subjective video quality. The model was designed as a video quality planning tool and requires prior knowledge of or assumptions about the video and channel parameters. This paper considers the use of the G.1070 video quality estimation model for monitoring applications, where the input parameters themselves must be estimated from the observed bitstream. We address the issues that arise when estimating these video and channel parameters from a real-time video stream.
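One plausible way to estimate the three model inputs from an observed stream, shown for illustration only and not as the estimation scheme of the paper, is to derive them from packet sizes, frame boundaries, and sequence-number gaps over a measurement window; the packet fields and helper names below are assumptions.

```python
# Hypothetical estimation of the G.1070 inputs (bitrate, frame rate, packet-loss
# rate) from packets observed over a measurement window. This is one plausible
# approach for illustration; it is not the estimation method of the paper, and
# it ignores transport sequence-number wraparound for brevity.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObservedPacket:
    seq: int                 # transport sequence number
    size_bytes: int          # payload size in bytes
    starts_new_frame: bool   # True when the packet begins a new coded frame

def estimate_g1070_inputs(packets: List[ObservedPacket],
                          window_seconds: float) -> Tuple[float, float, float]:
    expected = packets[-1].seq - packets[0].seq + 1   # gaps in the numbering imply losses
    loss_rate = max(0.0, 1.0 - len(packets) / expected)
    bitrate_kbps = sum(p.size_bytes for p in packets) * 8 / window_seconds / 1000.0
    frame_rate = sum(p.starts_new_frame for p in packets) / window_seconds
    return bitrate_kbps, frame_rate, loss_rate
```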


EURASIP Journal on Advances in Signal Processing | 2011

Real-time video quality monitoring

Tao Liu; Niranjan D. Narvekar; Beibei Wang; Ran Ding; Dekun Zou; Glenn L. Cash; Sitaram Bhagavathy; Jeffrey A. Bloom

The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure video quality. However, this model was originally designed as an offline quality planning tool. It cannot be used directly for quality monitoring, since the three input parameters are not readily available within a network or at the decoder, and there is considerable room to improve the metric's performance. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that the model can be used as a real-time video quality monitoring tool. We then propose an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction. Finally, to show how this metric can be used in real-world applications, we present an emerging application of real-time quality measurement to the management of transmitted video, especially video delivered to mobile devices.


Quality of Multimedia Experience | 2011

Efficient frame complexity estimation and application to G.1070 video quality monitoring

Beibei Wang; Dekun Zou; Ran Ding; Tao Liu; Sitaram Bhagavathy; Niranjan D. Narvekar; Jeffrey A. Bloom

The ITU has standardized a computational model as Recommendation G.1070 for Quality of Experience (QoE) planning [1]. In our previous work, we proposed a system for calculating the G.1070 visual quality estimate in a monitoring scenario [2]. In G.1070, visual quality is based, in part, on frame rate, bitrate, and packet-loss rate. For a fixed frame rate and a fixed packet-loss rate, the G.1070 visual quality score decreases as the bitrate decreases. However, G.1070 cannot distinguish cases in which a lower bitrate truly represents a loss of quality from cases in which the underlying content is simply easy to encode, producing a lower bitrate with no corresponding loss of quality. In this paper, we propose a modification to the G.1070 model that accounts for this difference by analyzing the underlying complexity of the video content. More specifically, we propose a quality measure in which the bitrate input to G.1070 is replaced with a normalized bitrate, where the normalization is based on an estimate of the complexity of the compressed content. The proposed enhancement to the model (named G.1070E) yields a much better approximation to MOS values and to NTIA-VQM [3].
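The normalization idea can be sketched as follows, assuming the complexity estimate is available as a per-clip scalar; the exact normalization used in G.1070E is not reproduced here, so the division below is an illustrative stand-in.

```python
# Illustrative complexity-normalized bitrate for a G.1070-style quality model.
# The quality_model callable stands in for an existing G.1070 implementation;
# dividing by a per-clip complexity estimate is an assumed normalization form,
# shown only to make the idea concrete rather than to reproduce G.1070E.

def normalized_bitrate(bitrate_kbps: float, complexity: float) -> float:
    """Scale the measured bitrate by the estimated content complexity.

    Easy-to-encode content yields a low bitrate without a real quality loss,
    so normalizing by complexity keeps the model from penalizing it as if the
    encoder had simply been starved of bits.
    """
    return bitrate_kbps / max(complexity, 1e-6)

def g1070e_style_score(quality_model, bitrate_kbps, frame_rate, loss_rate, complexity):
    # Feed the normalized bitrate into the model in place of the raw bitrate.
    return quality_model(normalized_bitrate(bitrate_kbps, complexity),
                         frame_rate, loss_rate)
```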


International Workshop on Digital Watermarking | 2006

Data hiding in film grain

Dekun Zou; Jun Tian; Jeffrey A. Bloom; Jiefu Zhai

This paper presents a data hiding technique based on a new compression enhancement called Film Grain Technology. Film grain is a mid-frequency noise-like pattern naturally appearing in imagery captured on film. The Film Grain Technology is a method for modeling and removing the film grain, thus enhancing the compression efficiency, and then using the model parameters to create synthetic film grain at the decoder. We propose slight modifications to the decoder that enable the synthetic film grain to represent metadata available at the decoder. We examine a number of implementation approaches and report results of fidelity and robustness experiments.
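The abstract does not detail how the synthetic grain carries metadata, so the following is a purely hypothetical illustration of the general idea rather than the paper's scheme: the per-block seed driving the grain synthesizer is chosen according to a payload bit.

```python
# Purely illustrative data-hiding-in-film-grain sketch (not the paper's scheme):
# the decoder's film-grain synthesizer is assumed to be driven by a pseudo-random
# seed per block, and the seed is derived from one of two candidate values
# according to a payload bit, so the synthesized grain pattern carries the metadata.
import hashlib

def grain_seed(block_index: int, payload_bit: int) -> int:
    """Derive a deterministic per-block seed that also encodes one payload bit."""
    digest = hashlib.sha256(f"{block_index}:{payload_bit}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def embed_bits(block_indices, payload_bits):
    """Return the seed the synthesizer should use for each block."""
    return [grain_seed(i, b) for i, b in zip(block_indices, payload_bits)]

# A detector that knows the block indices can regenerate both candidate grain
# patterns per block and pick whichever better correlates with the observed frame.
```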


International Conference on Acoustics, Speech, and Signal Processing | 2012

Video content identification using the Viterbi algorithm

Sitaram Bhagavathy; Wen Chen; Dekun Zou; Jeffrey A. Bloom

We propose a video content identification method that uses the Viterbi algorithm to identify unknown video content against a database of reference content. Our method determines both the source video in the database and the exact frame index therein for each query frame. The Viterbi algorithm is effective for enforcing temporal consistency over frame-level hypotheses based on matching frame fingerprints. We introduce an "unknown" state in the Viterbi state model to handle both missed detections and non-existent matches. Another novelty of our approach is the use of frame time stamps for computing transition probabilities between frames, which makes our method capable of handling even extreme frame rate changes between query and reference videos. We provide experimental results demonstrating the effectiveness of the proposed method. This approach may conceivably be used for audio content identification as well.
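A compact sketch of this kind of Viterbi decoding is shown below, with fingerprint-match scores as emissions, an explicit "unknown" state, and transitions that favor reference-frame jumps consistent with the query time-stamp gap. The scoring functions, penalties, and parameter values are placeholder assumptions, not the paper's.

```python
# Illustrative Viterbi decoding over per-frame fingerprint-match hypotheses.
# States are reference-frame indices 0..N-1 plus an extra "unknown" state for
# missed detections and non-existent matches. Emission scores, the transition
# variance, and the unknown-state penalties are placeholder assumptions.
UNKNOWN = -1

def transition_logscore(prev, cur, dt_query, ref_fps, sigma=2.0, unknown_penalty=-4.0):
    """Favor reference-frame jumps consistent with the query time-stamp gap."""
    if prev == UNKNOWN or cur == UNKNOWN:
        return unknown_penalty
    expected_jump = dt_query * ref_fps            # frames the reference should advance
    return -((cur - prev - expected_jump) ** 2) / (2 * sigma ** 2)

def viterbi_identify(match_scores, query_times, ref_fps, unknown_score=-2.0):
    """match_scores[t][j]: log-score that query frame t matches reference frame j."""
    states = list(range(len(match_scores[0]))) + [UNKNOWN]
    emit = lambda t, s: match_scores[t][s] if s != UNKNOWN else unknown_score
    score = {s: emit(0, s) for s in states}
    backptrs = []
    for t in range(1, len(match_scores)):
        dt = query_times[t] - query_times[t - 1]
        new_score, back_t = {}, {}
        for s in states:
            best_prev = max(states,
                            key=lambda p: score[p] + transition_logscore(p, s, dt, ref_fps))
            new_score[s] = (score[best_prev]
                            + transition_logscore(best_prev, s, dt, ref_fps)
                            + emit(t, s))
            back_t[s] = best_prev
        score, backptrs = new_score, backptrs + [back_t]
    # Backtrack: one reference-frame index (or UNKNOWN) per query frame.
    path = [max(states, key=lambda s: score[s])]
    for back_t in reversed(backptrs):
        path.append(back_t[path[-1]])
    return list(reversed(path))
```

The time-stamp-driven transition term is what lets the same model cope with frame-rate changes: a larger query time gap simply shifts the expected reference-frame jump.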


Archive | 2007

Modifying a coded bitstream

Dekun Zou; Jeffrey Adam Bloom; Peng Yin; Oscar Divorra Escoda


Archive | 2011

Video quality monitoring

Beibei Wang; Dekun Zou; Ran Ding; Tao Liu; Sitaram Bhagavathy; Niranjan D. Narvekar; Jeffrey A. Bloom; Glenn L. Cash

Collaboration


Dive into Dekun Zou's collaboration.

Top Co-Authors

Shan He

Princeton University


Jun Tian

Princeton University


Peng Yin

Princeton University
