Publication


Featured research published by William E. Lynch.


international symposium on circuits and systems | 1999

Joint transcoding of multiple MPEG video bitstreams

Hani Sorial; William E. Lynch; André Vincent

This paper addresses the problem of bit-rate conversion of a previously compressed video. We provide an MPEG joint transcoder for transcoding several video bitstreams simultaneously. We show that joint transcoding reduces the quality variation between multiple video sequences, as compared to independently transcoding each sequence at a fixed bit rate. Hence, joint transcoding results in a better utilization of the channel capacity. The joint transcoder can be used in a congested communication network as an alternative to data/packet dropping, and in applications which require multiplexing video signals onto a fixed communication channel such as video servers providing video on demand (VOD) service.
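
As a rough illustration of why joint transcoding helps, the sketch below splits a shared channel budget among streams in proportion to a per-stream complexity measure, so demanding scenes borrow bits from easy ones instead of every stream being capped at the same fixed rate. The proportional rule and all names here are illustrative assumptions, not the allocator described in the paper.

# Minimal sketch (not the paper's allocator): share a joint bit budget by relative complexity.

def allocate_bits(total_bits, complexities):
    """Give each stream a share of the joint budget proportional to its complexity."""
    total_complexity = sum(complexities)
    return [total_bits * c / total_complexity for c in complexities]

# Example: three streams sharing a 6 Mbit budget for one second of video.
# Independent fixed-rate transcoding would give each stream 2 Mbit regardless of content.
print(allocate_bits(6_000_000, complexities=[1.0, 2.5, 0.5]))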


international conference on multimedia and expo | 2000

Selective requantization for transcoding of MPEG compressed video

Hani Sorial; William E. Lynch; André Vincent

The paper addresses the problem of bit-rate conversion of MPEG compressed video. We present selective requantization, a method for reducing the requantization errors in transcoding. The proposed method is based on avoiding critical ratios of the two cascaded quantizations (encoding versus transcoding) that either lead to larger transcoding errors or require a higher bit budget. Results show that selective requantization improves the quality of the transcoded images. The presented method is simple to implement and does not require side information.
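
A minimal sketch of the underlying idea, under the assumption that a coarser second quantization step is safest when it is an integer multiple of the first; the exact definition of a critical ratio is the paper's and is not reproduced here, and pick_requant_step below is a hypothetical helper.

import math

def pick_requant_step(q1, q2_target):
    """Hypothetical helper: a second-stage step near q2_target that avoids critical ratios with q1."""
    if q2_target <= q1:
        return q2_target                  # not coarser than the first stage: no extra cascaded loss
    ratio = q2_target / q1
    if ratio == int(ratio):
        return q2_target                  # integer multiple: requantization bins stay aligned
    return q1 * math.ceil(ratio)          # assumption: skip the in-between (critical) ratios

print(pick_requant_step(q1=8, q2_target=12))   # -> 16, avoiding the awkward 1.5x ratio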


international conference on acoustics, speech, and signal processing | 1995

Post processing transform coded images using edges

William E. Lynch; Amy R. Reibman; Bede Liu

Discrete cosine transform coding, a popular image compression strategy, results in two visible artifacts: blocking and ringing. These are both high frequency artifacts. Since images also contain genuine high frequency information, the artifacts are removed using a space-varying low pass filter as a post processor. Low frequency blocks and flat regions of blocks containing a strong edge are filtered. Low frequency blocks are identified in the transform coefficient domain; edge blocks are identified in the spatial domain. This does not require any alterations in the compressed bit stream. Improvement is demonstrated both subjectively and objectively.
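
The classification step can be sketched as follows: a block whose high-order DCT coefficients carry negligible energy is safe to low-pass filter, while other blocks are examined for edges in the pixel domain before any filtering. The threshold, the 2x2 low-frequency corner, and the 3x3 box filter standing in for the space-varying low pass filter are all illustrative assumptions.

import numpy as np

def is_low_frequency(dct_block, threshold=1.0):
    """True if the energy outside the top-left 2x2 corner of an 8x8 DCT block is negligible (illustrative criterion)."""
    high = dct_block.astype(float)
    high[:2, :2] = 0.0
    return float(np.sum(high ** 2)) < threshold

def box_filter(block):
    """3x3 moving average, standing in for the space-varying low pass post-filter."""
    padded = np.pad(block.astype(float), 1, mode="edge")
    out = np.zeros(block.shape)
    for i in range(block.shape[0]):
        for j in range(block.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

# A flat block has all its DCT energy in the DC term, so it is classified as
# low frequency and smoothed.
dct = np.zeros((8, 8))
dct[0, 0] = 1024.0
print(is_low_frequency(dct), box_filter(np.full((8, 8), 128.0))[0, 0])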


international conference on image processing | 1994

Edge compensated transform coding

William E. Lynch; Amy R. Reibman; Bede Liu

Transform based image compression has difficulty with image regions containing edges. Edge compensated transform coding (ECTC) addresses this problem by preprocessing to remove edges. This preprocessing is adapted to transform coding. The edge information is sent in a side channel and the edges are replaced at the receiver. Subjective improvement is demonstrated.
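
The data flow can be sketched in a few lines: the edge component is removed before transform coding, transmitted as side information, and added back after decoding. The edge_model, transform_code, and transform_decode callables below are hypothetical placeholders, not the paper's actual edge extraction or codec.

import numpy as np

def ectc_encode(block, edge_model, transform_code):
    """Remove the edge component, code the residual, and return the edge side information."""
    edges = edge_model(block)                 # side information, sent on a separate channel
    return transform_code(block - edges), edges

def ectc_decode(coded_residual, edges, transform_decode):
    """Decode the residual and put the edges back."""
    return transform_decode(coded_residual) + edges

# Toy round trip with identity "codec" stages, just to show the data flow.
blk = np.array([[10.0, 10.0], [10.0, 90.0]])
coded, edges = ectc_encode(blk, edge_model=lambda b: (b > 50) * 80.0,
                           transform_code=lambda r: r)
print(ectc_decode(coded, edges, transform_decode=lambda r: r))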


Signal Processing: Image Communication | 2010

Iterative joint source-channel decoding of H.264 compressed video

David Levine; William E. Lynch; Tho Le-Ngoc

This paper proposes an Iterative Joint Source-Channel Decoding (IJSCD) scheme for error resilient transmission of H.264 compressed video over noisy channels by using the available H.264 compression, e.g., Context-based Adaptive Binary Arithmetic Coding (CABAC), and channel coding, i.e., rate-1/2 Recursive Systematic Convolutional (RSC) code, in transmission. At the receiver, the turbo decoding concept is explored to develop a joint source-channel decoding structure using a soft-in soft-out channel decoder in conjunction with the source decoding functions, e.g., CABAC-based H.264 semantic verification, in an iterative manner. Illustrative designs of the proposed IJSCD scheme for an Additive White Gaussian Noise (AWGN) channel, including the derivations of key parameters for soft information, are discussed. The performance of the proposed IJSCD scheme is shown for several video sequences. In the examples, for the same desired Peak Signal-to-Noise Ratio (PSNR), the proposed IJSCD scheme offers a savings of up to 2.1 dB in required channel Signal-to-Noise Ratio (SNR) as compared to a system using the same RSC code alone. The complexity of the proposed scheme is also evaluated. As the number of iterations is controllable, a tradeoff can be made between performance improvement and the overall complexity.
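
The control flow of such a receiver can be sketched as a turbo-style loop in which the soft-in soft-out channel decoder and the source-level semantic check exchange soft information. siso_rsc_decode and h264_semantic_check below are hypothetical placeholders; only the iteration structure is illustrated, not the paper's derivations of the soft-information parameters.

def ijscd(channel_llrs, siso_rsc_decode, h264_semantic_check, iterations=4):
    """Iterate between soft channel decoding and H.264/CABAC semantic verification (control flow only)."""
    prior = [0.0] * len(channel_llrs)          # no source prior on the first pass
    bits = None
    for _ in range(iterations):                # more iterations: better quality, more complexity
        posterior = siso_rsc_decode(channel_llrs, prior)
        bits = [llr > 0 for llr in posterior]
        # the semantic check returns extrinsic information indicating which bit
        # positions are consistent with valid H.264/CABAC syntax
        prior = h264_semantic_check(bits, posterior)
    return bits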


international conference on acoustics, speech, and signal processing | 2004

Iterative joint source-channel decoding using turbo codes for MPEG-4 video transmission

Xiaofeng Ma; William E. Lynch

This paper presents a novel iterative joint source-channel decoding scheme for MPEG-4 video transmission over noisy channels. The proposed scheme, on one hand, utilizes the channel soft outputs generated by the turbo decoder to assist video decompression. On the other hand, the syntactic/semantic information from the video decompressor is used to modify the extrinsic information so as to improve the error correction capability of the turbo decoder. With the proposed video packet mixer, the scheme can correct most turbo coding blocks with a large number of errors. Simulation results show significant improvement in terms of PSNR, reconstructed video quality, as well as BER over turbo decoding only.
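
One plausible way the syntactic/semantic feedback could act on the extrinsic information is to attenuate the channel decoder's confidence over any video packet that fails the syntax check, so the next turbo iteration is free to revise those bits. The per-packet granularity and the damping factor below are illustrative assumptions rather than the paper's exact rule.

def modify_extrinsic(extrinsic_llrs, packet_ranges, packet_ok, damping=0.25):
    """Scale down the extrinsic LLRs of packets that fail the MPEG-4 syntax/semantic check (illustrative rule)."""
    out = list(extrinsic_llrs)
    for (start, end), ok in zip(packet_ranges, packet_ok):
        if not ok:                             # packet violates the video syntax/semantics
            for i in range(start, end):
                out[i] *= damping              # weaken, but do not erase, the channel belief
    return out

# Two packets of two bits each; the second fails the check and is de-weighted.
print(modify_extrinsic([2.0, -3.0, 1.5, 0.5], [(0, 2), (2, 4)], [True, False]))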


international conference on information technology coding and computing | 2002

Joint forward error correction and error concealment for compressed video

Yan Mei; William E. Lynch; Tho Le-Ngoc

In this paper we propose a joint source/channel decoding scheme. This work combines information from classical Forward Error Correction (FEC) with information about the syntax and semantics of the received video, along with the resulting video's continuity, to give improved performance over classical FEC. One feature of this work is the need to find a slice evaluation measure which balances the likelihood of a slice from the point of view of the channel decoder with the quality of the decompressed slice.
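
A minimal sketch of such a slice evaluation measure, assuming a simple linear weighting between the channel decoder's log-likelihood for a candidate slice and a continuity (smoothness-against-neighbours) error for its decoded pixels; the weight and the names below are illustrative, not the measure developed in the paper.

def slice_score(channel_log_likelihood, continuity_error, weight=0.1):
    """Higher is better: likely under the channel decoder and smooth against its neighbours."""
    return channel_log_likelihood - weight * continuity_error

# Choose among candidate decodings of one slice: the middle candidate is a
# little less likely than the first but far more continuous, so it wins.
candidates = [(-12.0, 40.0), (-15.0, 5.0), (-30.0, 2.0)]   # (log-likelihood, continuity error)
print(max(candidates, key=lambda c: slice_score(*c)))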


international conference on image processing | 2000

Estimating Laplacian parameters of DCT coefficients for requantization in the transcoding of MPEG-2 video

Hani Sorial; William E. Lynch; André Vincent

This paper addresses the bit-rate reduction of MPEG-2 compressed video and presents a method to reduce requantization errors in transcoding. The proposed method assumes Laplacian distributions for the original AC coefficients of the DCT. A Laplacian parameter for each coefficient is estimated at the transcoder from the quantized input DCT coefficients. These parameters are then used in requantization to improve the quality of the transcoded video. The algorithm provided in this paper to estimate the Laplacian parameters of the original DCT coefficients is simple to implement and may be adapted to other DCT-based coding schemes.
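
As a simplified stand-in for the estimator, the sketch below uses the fact that a Laplacian source with parameter lambda has mean absolute value 1/lambda, and estimates lambda from the dequantized coefficients at one coefficient position. Treating dequantized values as if they were the originals ignores the quantization bins that the paper's estimator accounts for, so this is an approximation only.

import numpy as np

def estimate_laplacian_lambda(quantized_levels, step):
    """Moment estimate of the Laplacian parameter from dequantized DCT coefficients at one position."""
    dequantized = np.asarray(quantized_levels, dtype=float) * step
    mean_abs = np.mean(np.abs(dequantized))
    return 1.0 / mean_abs if mean_abs > 0 else np.inf   # E|x| = 1/lambda for a Laplacian source

print(estimate_laplacian_lambda([3, -1, 0, 2, -4, 1, 0, -2], step=8))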


international symposium on circuits and systems | 2007

Iterative Joint Source-Channel Decoding of H.264 Compressed Video

David Levine; William E. Lynch; Tho Le-Ngoc

This paper proposes an iterative joint source-channel decoding (IJSCD) scheme for the transmission of H.264 compressed video over a noisy channel. It uses channel coding along with H.264 semantic verification. The structure, selection of design parameters, and performance of the proposed IJSCD based on a rate-1/2 recursive systematic convolutional (RSC) code over an AWGN channel are described and discussed as an illustrative example. In the example, for the same PSNR, the proposed IJSCD scheme offers a significant saving of 2.1 dB in required channel SNR as compared to a system using the same RSC code alone. Furthermore, the performance can be improved by iterative decoding at the cost of increased delay. Hence, a tradeoff can be made between performance improvement and delay.


Signal Processing: Image Communication | 2006

MPEG-4 constant-quality constant-bit-rate control algorithms

Cheng-Yu Pai; William E. Lynch

Most constant bit-rate (CBR) control algorithms aim to produce a bitstream that meets a certain bit rate with the highest quality. Due to the non-stationary nature of the video sequence, the quality of the compressed sequence changes over time, which is not desirable to end-users. In this paper, two constant-quality CBR control algorithms are proposed. They aim to produce a CBR bitstream with the highest quality while having low variation in quality. Rather than controlling the quality indirectly as done in previously reported constant-quality control algorithms, they control the quality directly by using quality-matching algorithms. The frame-level Laplacian Constant Quality (FLCQ) algorithm allows one Quantization Parameter (QP) per frame, and uses a new rate-distortion model based on modeling DCT coefficients as having two-sided Laplacian distributions. The macroblock-level Viterbi Constant Quality (MVCQ) algorithm permits the QP to be changed for each macroblock using the Viterbi algorithm, which reduces the search complexity. For both algorithms, besides the target bit rate, an extra degree of freedom is introduced that allows trading off variation in quality against accuracy to the target bit rate. Simulation results show that the proposed algorithms outperform Q2 and TM5 by offering similar or higher PSNR while having lower PSNR variance.
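
The quality-matching step can be sketched as: for each frame, pick the QP whose model-predicted quality is closest to the target, then coarsen it only as far as the bit budget requires. The predicted_psnr and predicted_bits callables stand in for the Laplacian rate-distortion model, and the whole routine is an illustrative sketch rather than the FLCQ algorithm itself.

def pick_frame_qp(target_psnr, frame_budget, predicted_psnr, predicted_bits,
                  qp_range=range(1, 32)):
    """Pick the QP whose predicted quality best matches the target, then back off to respect the bit budget."""
    qp = min(qp_range, key=lambda q: abs(predicted_psnr(q) - target_psnr))
    while predicted_bits(qp) > frame_budget and qp < max(qp_range):
        qp += 1                               # coarser quantization until the frame fits its budget
    return qp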

Collaboration


Dive into William E. Lynch's collaborations.

Top Co-Authors

Bede Liu (Princeton University)
