Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where En-hui Yang is active.

Publication


Featured research published by En-hui Yang.


IEEE Transactions on Information Theory | 2000

Grammar-based codes: a new class of universal lossless source codes

John C. Kieffer; En-hui Yang

We investigate a type of lossless source code called a grammar-based code, which, in response to any input data string x over a fixed finite alphabet, selects a context-free grammar G_x representing x in the sense that x is the unique string belonging to the language generated by G_x. Lossless compression of x takes place indirectly via compression of the production rules of the grammar G_x. It is shown that, subject to some mild restrictions, a grammar-based code is a universal code with respect to the family of finite-state information sources over the finite alphabet. Redundancy bounds for grammar-based codes are established. Reduction rules for designing grammar-based codes are presented.
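
To make the idea concrete, here is a toy illustration in Python, not the paper's construction: a Re-Pair-style reduction that builds a context-free grammar generating exactly one string by repeatedly replacing the most frequent adjacent pair with a new variable. Compressing the production rules of such a grammar is the job of the second stage of a grammar-based code.

from collections import Counter

def pair_grammar(x):
    """Toy grammar-based representation of x: repeatedly replace the most
    frequent adjacent pair with a new variable (a Re-Pair-style reduction).
    The resulting grammar generates exactly the string x."""
    rules = {}                # variable -> pair of symbols it expands to
    seq = list(x)             # current right-hand side of the start rule
    next_var = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:         # no pair repeats: the grammar is irreducible
            break
        var = f"A{next_var}"
        next_var += 1
        rules[var] = pair
        out, i = [], 0
        while i < len(seq):   # greedy left-to-right replacement of the pair
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(var)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules         # start rule plus productions

def expand(sym, rules):
    if sym not in rules:
        return sym
    a, b = rules[sym]
    return expand(a, rules) + expand(b, rules)

start, rules = pair_grammar("abababcababc")
assert "".join(expand(s, rules) for s in start) == "abababcababc"
print(start, rules)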


IEEE Transactions on Image Processing | 2006

Quality-aware images

Zhou Wang; Guixing Wu; Hamid R. Sheikh; Eero P. Simoncelli; En-hui Yang; Alan C. Bovik

We propose the concept of a quality-aware image, in which certain extracted features of the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image is received, users can decode the hidden messages and use them to provide an objective measure of the quality of the distorted image. To demonstrate the idea, we build a practical quality-aware image encoding, decoding and quality analysis system, which employs: 1) a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images and 2) a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain.
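
The following minimal sketch shows only the shape of the pipeline, with crude stand-ins assumed throughout: per-block means as the reduced-reference feature and LSB substitution as the data-hiding step, in place of the paper's natural-image statistics and wavelet-domain quantization watermarking.

import numpy as np

def features(img, block=8):
    """Crude reduced-reference feature: per-block pixel means."""
    h, w = (d - d % block for d in img.shape)
    return img[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def embed(img, bits):
    """Hide bits in pixel LSBs (stand-in for wavelet-domain watermarking)."""
    out = img.copy().ravel()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(img.shape)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64), dtype=np.uint8)
feat = features(original)
bits = np.unpackbits(feat.astype(np.uint8))        # quantize and serialize
qa_image = embed(original, bits)                   # the "quality-aware" image

# Receiver side: decode the hidden features, then score the (possibly
# distorted) received image against them, without access to the original.
# For simplicity the bits are read from the clean watermarked image here;
# the paper's hiding scheme is designed to survive distortion.
received = qa_image + rng.normal(0, 5, qa_image.shape)
hidden = np.packbits(qa_image.ravel()[:bits.size] & 1).reshape(feat.shape)
score = float(np.mean((features(received) - hidden) ** 2))
print("reduced-reference distortion score (lower is better):", round(score, 2))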


IEEE Transactions on Information Theory | 2000

Efficient universal lossless data compression algorithms based on a greedy sequential grammar transform. I. Without context models

En-hui Yang; John C. Kieffer

A grammar transform is a transformation that converts any data sequence to be compressed into a grammar from which the original data sequence can be fully reconstructed. In a grammar-based code, a data sequence is first converted into a grammar by a grammar transform and then losslessly encoded. In this paper, a greedy grammar transform is first presented; this grammar transform constructs sequentially a sequence of irreducible grammars from which the original data sequence can be recovered incrementally. Based on this grammar transform, three universal lossless data compression algorithms, a sequential algorithm, an improved sequential algorithm, and a hierarchical algorithm, are then developed. These algorithms combine the power of arithmetic coding with that of string matching. It is shown that these algorithms are all universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source. Moreover, it is proved that their worst case redundancies among all individual sequences of length n are upper-bounded by c log log n / log n, where c is a constant. Simulation results show that the proposed algorithms outperform the Unix Compress and Gzip algorithms, which are based on LZ78 and LZ77, respectively.
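
A simplified sketch of the greedy sequential idea follows, using an LZ78-flavored phrase dictionary as a stand-in for the paper's irreducible-grammar update; the arithmetic-coding stage is omitted.

def greedy_transform(x):
    """Simplified greedy sequential transform: parse x left to right, each
    phrase being the longest prefix already in the dictionary (else one
    symbol); every parsed phrase plus its following symbol becomes a new
    dictionary phrase."""
    phrases = {}                      # phrase -> variable name
    start, i, nvar = [], 0, 0
    while i < len(x):
        ln = 1
        while i + ln + 1 <= len(x) and x[i:i + ln + 1] in phrases:
            ln += 1                   # extend to the longest known phrase
        piece = x[i:i + ln]
        start.append(phrases.get(piece, piece))
        if i + ln < len(x):           # register the extended phrase
            phrases[x[i:i + ln + 1]] = f"A{nvar}"
            nvar += 1
        i += ln
    return start, phrases

start, phrases = greedy_transform("abababcababc")
inv = {v: p for p, v in phrases.items()}
assert "".join(inv.get(s, s) for s in start) == "abababcababc"
print(start)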


IEEE Transactions on Information Theory | 1998

On the performance of data compression algorithms based upon string matching

En-hui Yang; John C. Kieffer

Lossless and lossy data compression algorithms based on string matching are considered. In the lossless case, a result of Wyner and Ziv (1989) is extended. In the lossy case, a data compression algorithm based on approximate string matching is analyzed in the following two frameworks: (1) the database and the source together form a Markov chain of finite order; (2) the database and the source are independent with the database coming from a Markov model and the source from a general stationary, ergodic model. In either framework, it is shown that the resulting compression rate converges with probability one to a quantity computable as the infimum of an information theoretic functional over a set of auxiliary random variables; the quantity is strictly greater than the rate distortion function of the source except in some symmetric cases. In particular, this result implies that the lossy algorithm proposed by Steinberg and Gutman (1993) is not optimal, even for memoryless or Markov sources.
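
The lossy setting can be illustrated with a brute-force toy: find the longest prefix of the source that matches some substring of a shared database within per-letter Hamming distortion D; describing the match position then costs roughly log2(database size) bits. The routine below is illustrative only, with assumed parameters.

import random

def longest_approx_match(source, db, D):
    """Longest prefix of `source` matching some substring of `db` within
    per-letter Hamming distortion D (brute force; illustration only)."""
    best = 0
    for start in range(len(db)):
        dist = 0
        for ln in range(min(len(source), len(db) - start)):
            dist += source[ln] != db[start + ln]
            if dist <= D * (ln + 1):
                best = max(best, ln + 1)
    return best

random.seed(1)
db = [random.randint(0, 1) for _ in range(4096)]   # the shared database
src = [random.randint(0, 1) for _ in range(64)]    # source to be encoded
L = longest_approx_match(src, db, D=0.1)
# Describing the match takes about log2(len(db)) = 12 bits for the position,
# so the rate is roughly 12 / L bits per source letter.
print("matched", L, "letters within average distortion 0.1")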


IEEE Transactions on Image Processing | 2007

Rate Distortion Optimization for H.264 Interframe Coding: A General Framework and Algorithms

En-hui Yang; Xiang Yu

Rate distortion (RD) optimization for H.264 interframe coding with complete baseline decoding compatibility is investigated on a frame basis. Using soft decision quantization (SDQ) rather than the standard hard decision quantization, we first establish a general framework in which motion estimation, quantization, and entropy coding (in H.264) for the current frame can be jointly designed to minimize a true RD cost given previously coded reference frames. We then propose three RD optimization algorithms: a graph-based algorithm for near-optimal SDQ in H.264 baseline encoding given motion estimation and quantization step sizes; an algorithm for near-optimal residual coding in H.264 baseline encoding given motion estimation; and an iterative overall algorithm to optimize H.264 baseline encoding for each individual frame given previously coded reference frames, with the three embedded in the indicated order. The graph-based algorithm for near-optimal SDQ is the core; given motion estimation and quantization step sizes, it is guaranteed to perform optimal SDQ if the weak adjacent block dependency utilized in the context adaptive variable length coding of H.264 is ignored for optimization. The proposed algorithms have been implemented based on the reference encoder JM82 of H.264 with complete compatibility to the baseline profile. Experiments show that for a set of typical video test sequences, the graph-based algorithm for near-optimal SDQ, the algorithm for near-optimal residual coding, and the overall algorithm achieve average rate reductions of 6%, 8%, and 12%, respectively, at the same PSNR (ranging from 30 to 38 dB) when compared with the RD optimization method implemented in the H.264 reference software.
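
The following toy dynamic program conveys the flavor of soft decision quantization, though it is not the paper's graph construction: for each coefficient it considers the hard-decision level and its two neighbors, and picks the index sequence minimizing distortion plus lambda times a crude rate model in which zero runs are cheap (the CAVLC contexts in H.264 are far richer).

import math

def sdq(coeffs, qstep, lam):
    """Toy soft-decision quantizer: jointly choose quantization indices to
    minimize distortion + lam * rate, via DP over a one-bit zero-run state.
    A stand-in for the paper's graph search over CAVLC contexts."""
    def rate(level, prev_was_zero):
        if level == 0:
            return 0.5                       # zeros inside a run are cheap
        return 2 + math.log2(1 + abs(level)) + 0.5 * (not prev_was_zero)

    best = {0: (0.0, [])}                    # zero-run flag -> (cost, levels)
    for c in coeffs:
        hard = round(c / qstep)              # hard-decision level
        nxt = {}
        for flag, (cost, levels) in best.items():
            for lv in {hard - 1, hard, hard + 1}:   # soft alternatives
                d = (c - lv * qstep) ** 2
                r = rate(lv, flag)
                nf = 1 if lv == 0 else 0
                cand = (cost + d + lam * r, levels + [lv])
                if nf not in nxt or cand[0] < nxt[nf][0]:
                    nxt[nf] = cand
        best = nxt
    return min(best.values())[1]

print(sdq([9.7, 4.1, 0.6, 0.4, -3.9], qstep=4.0, lam=10.0))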


IEEE Transactions on Information Theory | 2000

Universal lossless compression via multilevel pattern matching

John C. Kieffer; En-hui Yang; Gregory J. Nelson; Pamela C. Cosman

A universal lossless data compression code called the multilevel pattern matching code (MPM code) is introduced. In processing a finite-alphabet data string of length n, the MPM code operates at O(log log n) levels sequentially. At each level, the MPM code detects matching patterns in the input data string (substrings of the data appearing in two or more nonoverlapping positions). The matching patterns detected at each level are of a fixed length which decreases by a constant factor from level to level, until this fixed length becomes one at the final level. The MPM code represents information about the matching patterns at each level as a string of tokens, with each token string encoded by an arithmetic encoder. From the concatenated encoded token strings, the decoder can reconstruct the data string via several rounds of parallel substitutions. An O(1/log n) maximal redundancy/sample upper bound is established for the MPM code with respect to any class of finite state sources of uniformly bounded complexity. We also show that the MPM code is of linear complexity in terms of time and space requirements. The results of some MPM code compression experiments are reported.
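
Here is a sketch of the multilevel decomposition alone, producing the token string at each level; the arithmetic coding of tokens and the decoder's parallel substitutions are omitted, and the top-level block length is assumed to be a power of two.

def mpm_tokens(x, top_block=8):
    """Sketch of the MPM decomposition: at each level, split the current
    patterns into fixed-length blocks (length halving per level), replace
    each block by a token identifying its first occurrence, and pass the
    distinct blocks down to the next level."""
    levels = []
    strings = [x]                      # distinct patterns still to process
    block = top_block
    while block >= 1:
        seen, tokens, nxt = {}, [], []
        for s in strings:
            for i in range(0, len(s), block):
                b = s[i:i + block]
                if b not in seen:      # first occurrence gets a fresh token
                    seen[b] = len(seen)
                    nxt.append(b)
                tokens.append(seen[b])
        levels.append(tokens)
        strings = [b for b in nxt if len(b) > 1]
        block //= 2
    return levels

for lvl, toks in enumerate(mpm_tokens("abababababababab")):
    print("level", lvl, "tokens:", toks)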


IEEE Transactions on Wireless Communications | 2009

Spectrum sensing in cognitive radio using goodness of fit testing

Haiquan Wang; En-hui Yang; Zhijin Zhao; Wei Zhang

One of the most important challenges in cognitive radio is how to measure or sense the existence of a signal transmission in a specific channel, that is, how to conduct spectrum sensing. In this letter, we first formulate spectrum sensing as a goodness-of-fit testing problem, and then apply the Anderson-Darling test, one of the classical goodness-of-fit tests, to derive a sensing method called Anderson-Darling sensing. It is shown by both analysis and numerical results that under the same sensing conditions and channel environments, Anderson-Darling sensing is considerably more sensitive in detecting an existing signal than energy detector-based sensing, especially when the received signal has a low signal-to-noise ratio (SNR) and no prior knowledge of primary user signals is available.
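
As a concrete sketch, with assumed block length, SNR, signal model, and threshold, the Anderson-Darling statistic against a fully specified N(0, 1) noise model can be computed directly; a statistic above the roughly 5% critical value of 2.492 suggests a signal is present.

import numpy as np
from scipy.stats import norm

def ad_statistic(x):
    """Anderson-Darling statistic of x against a fully specified N(0, 1)
    noise model (noise power assumed known and normalized to 1)."""
    z = np.sort(x)
    n = z.size
    F = norm.cdf(z)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

rng = np.random.default_rng(0)
n = 1000
noise_only = rng.normal(0, 1, n)                        # H0: channel idle
bpsk = rng.choice([-1.0, 1.0], n)                       # hypothetical PU signal
received = np.sqrt(0.316) * bpsk + rng.normal(0, 1, n)  # H1 at about -5 dB SNR

threshold = 2.492   # asymptotic 5% critical value, fully specified case
for name, x in [("noise only", noise_only), ("signal present", received)]:
    stat = ad_statistic(x)
    print(f"{name}: A^2 = {stat:.2f} -> {'signal' if stat > threshold else 'idle'}")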


IEEE Transactions on Information Theory | 1997

Fixed-slope universal lossy data compression

En-hui Yang; Zhen Zhang; Toby Berger

Corresponding to any lossless codeword length function l, three universal lossy data compression schemes are presented: one with a fixed rate, another with a fixed distortion, and a third with a fixed slope. The former two schemes generalize Yang and Kieffer's results (see ibid., vol. 42, no. 1, p. 239-45, 1995) to the general case of any lossless codeword length function l, whereas the third is new. In the case of fixed slope λ > 0, our universal lossy data compression scheme works as follows: for any source sequence x^n of length n, the encoder first searches for a reproduction sequence y^n of length n which minimizes a cost function n^-1 l(y^n) + λ ρ_n(x^n, y^n) over all reproduction sequences of length n, and then encodes x^n into the binary codeword of length l(y^n) associated with y^n via the lossless codeword length function l, where ρ_n(x^n, y^n) is the distortion per sample between x^n and y^n. Under some mild assumptions on the lossless codeword length function l, it is shown that when this fixed-slope data compression scheme is applied to encode a stationary, ergodic source, the resulting encoding rate per sample and the distortion per sample converge with probability one to R_λ and D_λ, respectively, where (D_λ, R_λ) is the point on the rate distortion curve at which the slope of the rate distortion function is -λ. This result holds in particular for the arithmetic codeword length function and the Lempel-Ziv codeword length function. The main advantage of this fixed-slope universal lossy data compression scheme over the fixed-rate and fixed-distortion schemes lies in the fact that it converts the encoding problem into a search through a trellis, which then permits the use of sequential search algorithms. Simulation results show that this fixed-slope universal lossy data compression scheme, combined with a suitable search algorithm, is promising.
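
On a short binary block the fixed-slope rule can be demonstrated by brute force; the sketch below uses an idealized i.i.d. codeword length function as a stand-in for the arithmetic or Lempel-Ziv length, and exhaustive search in place of the trellis.

import itertools, math

def fixed_slope_encode(x, lam, p1=0.2):
    """Brute-force sketch of fixed-slope lossy coding on one short block:
    pick the reproduction y minimizing l(y)/n + lam * Hamming distortion,
    with l(y) an idealized i.i.d. codeword length (-log2 of a Bernoulli(p1)
    model) standing in for an arithmetic or Lempel-Ziv length function."""
    n = len(x)
    def l(y):                         # idealized codeword length in bits
        ones = sum(y)
        return -(ones * math.log2(p1) + (n - ones) * math.log2(1 - p1))
    def cost(y):
        rho = sum(a != b for a, b in zip(x, y)) / n
        return l(y) / n + lam * rho
    return min(itertools.product((0, 1), repeat=n), key=cost)

x = (1, 0, 0, 1, 1, 0, 1, 1)
for lam in (0.5, 2.0, 8.0):           # steeper slope -> lower distortion
    print(lam, fixed_slope_encode(x, lam))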


IEEE Transactions on Information Theory | 1996

Simple universal lossy data compression schemes derived from the Lempel-Ziv algorithm

En-hui Yang; John C. Kieffer

Two universal lossy data compression schemes, one with fixed rate and the other with fixed distortion, are presented, based on the well-known Lempel-Ziv algorithm. In the case of fixed rate R, the universal lossy data compression scheme works as follows: first pick a codebook B_n consisting of all reproduction sequences of length n whose Lempel-Ziv codeword length is ≤ nR, and then use B_n to encode the entire source sequence n-block by n-block. This fixed-rate data compression scheme is universal in the sense that for any stationary, ergodic source or for any individual sequence, the sample distortion performance as n → ∞ is given almost surely by the distortion rate function. A similar result is shown in the context of fixed-distortion lossy source coding.
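
A toy version of the fixed-rate scheme for tiny n follows (exponential enumeration, so illustration only), with a simple LZ78 bit count standing in for the Lempel-Ziv codeword length.

import itertools, math

def lz78_length(y):
    """Toy LZ78 codeword length in bits: each phrase costs a pointer of
    ceil(log2(#phrases + 1)) bits plus one bit for the innovation symbol."""
    dic, cur, bits = {""}, "", 0
    for c in y:
        if cur + c in dic:
            cur += c
        else:
            bits += math.ceil(math.log2(len(dic) + 1)) + 1
            dic.add(cur + c)
            cur = ""
    if cur:                           # account for a trailing partial phrase
        bits += math.ceil(math.log2(len(dic) + 1)) + 1
    return bits

def fixed_rate_codebook(n, R):
    """B_n: every length-n binary string whose LZ codeword length is <= nR."""
    return ["".join(y) for y in itertools.product("01", repeat=n)
            if lz78_length("".join(y)) <= n * R]

def encode_block(block, B):
    """Map an n-block to the minimum-Hamming-distortion codeword in B_n."""
    return min(B, key=lambda y: sum(a != b for a, b in zip(block, y)))

B = fixed_rate_codebook(10, R=1.3)
print(len(B), "codewords;", encode_block("1110001110", B))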


wireless communications and networking conference | 2008

A Framework of Cross-Layer Superposition Coded Multicast for Robust IPTV Services over WiMAX

James She; Xiang Yu; Fen Hou; Pin-Han Ho; En-hui Yang

A cross-layer design (CLD) framework for robust and efficient video multicasting over IEEE 802.16 (WiMAX) is introduced. In the framework, multiple description coding of scalable video bitstreams at the source, used to achieve multi-resolution robustness, is jointly designed with superposition coding (i.e., multi-resolution modulation) of the multicast signals at the channel to overcome the channel diversity problem in wireless multicast. The resulting cross-layer coded multicast signals make it possible to recover some lost bitstreams in high-quality layers, which is not possible when multi-resolution modulation is used alone for multicasting as in previous work. Simulation results show that the joint design indeed outperforms a scheme using only superposition coded multicast, achieving better video quality for users under multi-user channel diversity.
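
The superposition-coding piece can be sketched per real dimension with BPSK layers and an assumed power split: weak users slice only the base layer, while strong users cancel it and slice the enhancement layer. This omits the MDC and WiMAX specifics entirely.

import numpy as np

rng = np.random.default_rng(0)
n, p_base, p_enh = 100_000, 0.8, 0.2      # assumed power split between layers

base = rng.choice([-1.0, 1.0], n)         # base-layer symbols (for everyone)
enh = rng.choice([-1.0, 1.0], n)          # enhancement symbols (strong users)
tx = np.sqrt(p_base) * base + np.sqrt(p_enh) * enh

for user, snr_db in [("weak", 6), ("strong", 18)]:
    noise = rng.normal(0, np.sqrt(10 ** (-snr_db / 10)), n)
    rx = tx + noise
    base_hat = np.sign(rx)                               # slice base layer
    enh_hat = np.sign(rx - np.sqrt(p_base) * base_hat)   # cancel, then slice
    print(user, "user: base BER", np.mean(base_hat != base),
          " enh BER", np.mean(enh_hat != enh))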

Collaboration


Dive into En-hui Yang's collaborations.

Top Co-Authors

Jin Meng, University of Waterloo
Zhen Zhang, University of Southern California
Wei Sun, University of Waterloo
Chang Sun, University of Waterloo
Lin Zheng, University of Waterloo
Haiquan Wang, Hangzhou Dianzi University
Yunwei Jia, University of Waterloo