
Publication


Featured research published by Hui Cheng.


IEEE Transactions on Image Processing | 2001

Multiscale Bayesian segmentation using a trainable context model

Hui Cheng; Charles A. Bouman

Multiscale Bayesian approaches have attracted increasing attention for use in image segmentation. Generally, these methods tend to offer improved segmentation accuracy with reduced computational burden. Existing Bayesian segmentation methods use simple models of context designed to encourage large uniformly classified regions. Consequently, these context models have a limited ability to capture the complex contextual dependencies that are important in applications such as document segmentation. We propose a multiscale Bayesian segmentation algorithm which can effectively model complex aspects of both local and global contextual behavior. The model uses a Markov chain in scale to model the class labels that form the segmentation, but augments this Markov chain structure by incorporating tree-based classifiers to model the transition probabilities between adjacent scales. The tree-based classifier models complex transition rules with only a moderate number of parameters. One advantage of our segmentation algorithm is that it can be trained for specific segmentation applications by simply providing examples of images with their corresponding accurate segmentations. This makes the method flexible by allowing both the context and the image models to be adapted without modification of the basic algorithm. We illustrate the value of our approach with examples from document segmentation in which text, picture, and background classes must be separated.
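The coarse-to-fine label model described above can be illustrated with a small sketch. This is not the paper's implementation: the hand-set `transition` rules stand in for the trained classification tree, and the labels, 2x2 block refinement, and MAP selection are simplified for illustration.

```python
# Illustrative sketch of a Markov chain in scale: each coarse-scale label
# spawns a block of fine-scale labels, with the transition probabilities
# supplied by a classifier conditioned on the parent label and its
# coarse-scale neighborhood. A toy rule set replaces the trained tree.
TEXT, PICTURE, BACKGROUND = 0, 1, 2

def transition(parent, neighborhood):
    """Toy stand-in for the tree-based transition classifier: returns
    P(child label) given the parent label and its neighbors' labels."""
    if all(n == parent for n in neighborhood):
        probs = [0.05, 0.05, 0.05]
        probs[parent] = 0.9          # uniform context -> keep parent label
    else:
        probs = [1 / 3, 1 / 3, 1 / 3]
        probs[parent] = 0.5          # mixed context -> less certain
        s = sum(probs)
        probs = [p / s for p in probs]
    return probs

def refine(coarse):
    """One scale of the chain: each coarse pixel yields a 2x2 block of
    child labels chosen by MAP under the transition model."""
    h, w = len(coarse), len(coarse[0])
    fine = [[0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            nb = [coarse[max(i - 1, 0)][j], coarse[min(i + 1, h - 1)][j],
                  coarse[i][max(j - 1, 0)], coarse[i][min(j + 1, w - 1)]]
            probs = transition(coarse[i][j], nb)
            label = max(range(3), key=lambda k: probs[k])
            for di in (0, 1):
                for dj in (0, 1):
                    fine[2 * i + di][2 * j + dj] = label
    return fine

coarse = [[TEXT, TEXT], [TEXT, BACKGROUND]]
fine = refine(coarse)
```

In the full algorithm the toy rules would be replaced by classification trees trained on example segmentations, which is what makes the context model adaptable to a specific application.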


Electronic Imaging | 2003

Robust content-dependent high-fidelity watermark for tracking in digital cinema

Jeffrey Lubin; Jeffrey A. Bloom; Hui Cheng

Forensic digital watermarking is a promising tool in the fight against piracy of copyrighted motion imagery content, but to be effective it must be (1) imperceptibly embedded in high-definition motion picture source, (2) reliably retrieved, even from degraded copies as might result from camcorder capture and subsequent very-low-bitrate compression and distribution on the Internet, and (3) secure against unauthorized removal. No existing watermarking technology has yet to meet these three simultaneous requirements of fidelity, robustness, and security. We describe here a forensic watermarking approach that meets all three requirements. It is based on the inherent robustness and imperceptibility of very low spatiotemporal frequency watermark carriers, and on a watermark placement technique that renders jamming attacks too costly in picture quality, even if the attacker has complete knowledge of the embedding algorithm. The algorithm has been tested on HD Cinemascope source material exhibited in a digital cinema viewing room. The watermark is imperceptible, yet recoverable after exhibition capture with camcorders, and after the introduction of other distortions such as low-pass filtering, noise addition, geometric shifts, and the manipulation of brightness and contrast.


Journal of Electronic Imaging | 2001

Document compression using rate-distortion optimized segmentation

Hui Cheng; Charles A. Bouman

Effective document compression algorithms require that scanned document images be first segmented into regions such as text, pictures, and background. In this paper, we present a multilayer compression algorithm for document images. This compression algorithm first segments a scanned document image into different classes, then compresses each class using an algorithm specifically designed for that class. Two algorithms are investigated for segmenting document images: a direct image segmentation algorithm called the trainable sequential MAP (TSMAP) segmentation algorithm, and a rate-distortion optimized segmentation (RDOS) algorithm. The RDOS algorithm works in a closed-loop fashion by applying each coding method to each region of the document and then selecting the method that yields the best rate-distortion trade-off. Compared with the TSMAP algorithm, the RDOS algorithm can often result in a better rate-distortion trade-off, and produce more robust segmentations by eliminating those misclassifications which can cause severe artifacts. At similar bit rates, the multilayer compression algorithm using RDOS can achieve a much higher subjective quality than state-of-the-art compression algorithms, such as DjVu and SPIHT.
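The closed-loop RDOS selection can be sketched in a few lines. The coders below are hypothetical toy stand-ins (a 1 bit/pixel thresholding coder and an 8 bit/pixel lossless coder), not the codecs used in the paper; only the select-by-Lagrangian-cost structure reflects the RDOS idea.

```python
# Sketch of closed-loop rate-distortion optimized segmentation: every
# candidate coder is actually run on each region, and the coder that
# minimizes the Lagrangian cost D + lambda * R is selected.
def rdos_select(region, coders, lam):
    """Return the name of the best coder for one region."""
    best_name, best_cost = None, None
    for name, coder in coders.items():
        rate, distortion = coder(region)      # (bits, squared error)
        cost = distortion + lam * rate
        if best_cost is None or cost < best_cost:
            best_name, best_cost = name, cost
    return best_name

def binary_coder(region):
    """Toy 1 bit/pixel coder: threshold to bilevel, lossy on gray data."""
    thresh = [255 if p > 127 else 0 for p in region]
    dist = sum((a - b) ** 2 for a, b in zip(region, thresh))
    return len(region), dist

def cont_coder(region):
    """Toy 8 bits/pixel coder: expensive but lossless."""
    return len(region) * 8, 0.0

coders = {"binary": binary_coder, "continuous": cont_coder}
text_region = [0, 255, 0, 255]                # bilevel (text-like) content
photo_region = [100, 130, 90, 160]            # continuous-tone content

best_text = rdos_select(text_region, coders, lam=10.0)
best_photo = rdos_select(photo_region, coders, lam=10.0)
```

Because each decision is based on the measured rate and distortion of actual encodings, a misclassification that would produce a severe artifact also produces a large distortion term and is rejected, which is the source of the robustness claimed above.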


International Conference on Image Processing | 1998

Trainable context model for multiscale segmentation

Hui Cheng; Charles A. Bouman

Most previous approaches to Bayesian segmentation have used simple prior models, such as Markov random fields (MRF), to enforce regularity in the segmentation. While these methods improve classification accuracy, they are not well suited to modeling complex contextual structure. In this paper, we propose a context model for multiscale segmentation which can capture very complex behaviors on both local and global scales. Our method works by using binary classification trees to model the transition probabilities between segmentations at adjacent scales. The classification trees can be efficiently trained to model essential aspects of contextual behavior. In addition, the data model in our approach is novel in the sense that it can incorporate the correlation among the wavelet feature vectors across scales. We apply our method to the problem of document segmentation to illustrate its usefulness.


Visual Communications and Image Processing | 2006

Correlation estimation and performance optimization for distributed image compression

Zhihai He; Lei Cao; Hui Cheng

Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
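The benefit of Gray code for bit-plane correlation can be checked directly: consecutive integers differ in exactly one Gray-code bit, so when the side information is off by one quantization level, far fewer bit-plane disagreements remain for the Slepian-Wolf coder to correct. A minimal sketch (not the paper's codec):

```python
# Compare bit-plane disagreements between a source X and side information
# Y = X + 1 under natural binary versus Gray code. Fewer disagreements
# mean a lower required Slepian-Wolf code rate.
def gray(n):
    """Standard binary-reflected Gray code."""
    return n ^ (n >> 1)

def bit_diff(a, b, width=8):
    """Number of bit planes on which a and b disagree."""
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def total_diffs(pairs, encode):
    return sum(bit_diff(encode(x), encode(y)) for x, y in pairs)

pairs = [(x, x + 1) for x in range(255)]      # side info off by one level
binary_diffs = total_diffs(pairs, lambda v: v)
gray_diffs = total_diffs(pairs, gray)         # exactly one flip per pair
```

Natural binary suffers carry propagation (e.g. 127 → 128 flips all eight bits), while Gray code guarantees a single flip per unit step, which is why it improves the correlation between the source bit planes and the side information.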


International Conference on Image Processing | 1999

Multilayer document compression algorithm

Hui Cheng; Charles A. Bouman

In this paper, we propose a multilayer document compression algorithm. This algorithm first segments a scanned document image into different classes such as text, images, and background, then compresses each class using an algorithm specifically designed for that class. Two algorithms are investigated for segmenting documents: a general purpose image segmentation algorithm called the trainable sequential MAP (TSMAP) algorithm, and a rate-distortion optimized segmentation (RDOS) algorithm. Experimental results show that the multilayer compression algorithm can achieve a much lower bit rate than most conventional algorithms such as JPEG at similar subjective distortion levels. We also find that the RDOS method produces more robust segmentations than TSMAP by eliminating misclassifications which can sometimes cause severe artifacts.


Human Vision and Electronic Imaging Conference | 2005

Reference-free objective quality metrics for MPEG-coded video

Hui Cheng; Jeffrey Lubin

With the growth of digital video delivery, there is an increasing demand for better and more efficient ways to measure video quality. Most existing video quality metrics are reference-based approaches that are not suitable for measuring the video quality perceived by the end user without access to reference videos. In this paper, we propose a reference-free video quality metric for MPEG-coded videos. It predicts subjective quality ratings using both reference-free MPEG artifact measures and MPEG system parameters (known or estimated). The advantage of this approach is that it does not need a precise separation of content and artifact or the removal of any artifacts. By exploring the correlations among different artifacts and system parameters, our approach can remove content dependency and achieve an accurate estimate of the subjective ratings.


Electronic Imaging | 2001

Rate-distortion-based segmentation for MRC compression

Hui Cheng; Guotong Feng; Charles A. Bouman

Effective document compression algorithms require that scanned document images be first segmented into regions such as text, pictures, and background. In this paper, we present a document compression algorithm that is based on the 3-layer (foreground/mask/background) MRC (mixed raster content) model. This compression algorithm first segments a scanned document image into different classes. Then, each class is transformed to the 3-layer MRC model differently according to the property of that class. Finally, the foreground and the background layers are compressed using JPEG with customized quantization tables. The mask layer is compressed using JBIG2. The segmentation is optimized in the rate-distortion sense for the 3-layer MRC representation. It works in a closed-loop fashion by applying each transformation to each region of the document and then selecting the method that yields the best rate-distortion trade-off. The proposed segmentation algorithm can not only achieve a better rate-distortion trade-off, but also produce more robust segmentations by eliminating those misclassifications which can cause severe artifacts. At similar bit rates, our MRC compression with the rate-distortion based segmentation can achieve a much higher subjective quality than state-of-the-art compression algorithms, such as JPEG and JPEG-2000.


International Conference on Image Processing | 2001

Rate distortion optimized document coding using resolution enhanced rendering

Guotong Feng; Hui Cheng; Charles A. Bouman

Raster document coders are typically based on the use of a binary mask layer that efficiently encodes the text and graphic content. While these methods can yield much higher compression ratios than natural image compression methods, the binary representation tends to distort fine document details, such as thin lines, and text edges. In this paper, we describe a method for encoding and decoding the binary mask layer that substantially improves the decoded document quality at a fixed bit rate. This method, which we call resolution enhanced rendering (RER), works by adaptively dithering the encoded binary mask, and then applying a nonlinear predictor to decode a gray level mask at the same or higher resolution. We present experimental results illustrating that the RER method can substantially improve document quality at high compression ratios.
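A rough sketch of the encode/decode asymmetry behind RER, with heavy simplifications that are not the authors' algorithm: a 1-D error-diffusion dither stands in for the adaptive dithering of the mask, and a sliding-window density estimate stands in for the trained nonlinear predictor.

```python
# Encoder: dither a gray mask row into a binary mask via error diffusion,
# so local bit density tracks the underlying gray level.
def dither(row):
    out, err = [], 0.0
    for p in row:
        v = p + err
        bit = 1 if v >= 128 else 0
        out.append(bit)
        err = v - bit * 255                   # diffuse quantization error
    return out

# Decoder: a toy "nonlinear predictor" that maps the local density of
# binary mask pixels back to a gray level.
def decode(bits, win=4):
    gray = []
    for i in range(len(bits)):
        lo, hi = max(0, i - win // 2), min(len(bits), i + win // 2 + 1)
        frac = sum(bits[lo:hi]) / (hi - lo)
        gray.append(round(frac * 255))        # window density -> gray level
    return gray

edge = [0, 0, 64, 128, 192, 255, 255, 255]    # soft text edge
bits = dither(edge)                           # what gets compressed (binary)
recon = decode(bits)                          # decoded gray-level mask
```

The point of the sketch is only the structure: the channel still carries a cheap binary mask, while the decoder's predictor recovers intermediate gray levels at edges, which is how RER reduces the distortion of thin lines and text edges at a fixed bit rate.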


Electronic Imaging | 2003

Retaining color fidelity in multiple-generation reproduction using digital watermarks

Zhigang Fan; Shen-ge Wang; Hui Cheng

In most existing color reproduction systems, color correction is performed in an open-loop fashion. For multiple-generation color copying, color fidelity cannot be guaranteed, as the errors introduced in color correction may accumulate. In this paper, we propose a method of solving the error accumulation problem by embedding color information as an invisible digital watermark in hardcopies. When the hardcopy is scanned, the embedded information can be retrieved to provide real-time calibration. As the method is closed-loop in nature, it may reduce error accumulation and improve color fidelity, particularly when copies go through multiple generations of reproduction.
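The closed-loop idea can be illustrated with a toy per-channel gain correction. Everything below is hypothetical: the reference color, the drift model, and the correction rule are illustrative stand-ins, not the paper's watermark embedding or calibration scheme.

```python
# Closed-loop calibration sketch: a known reference color travels with
# the page (in the paper, as an invisible watermark). At copy time the
# scanner reads both the drifted page colors and the drifted reference,
# and a per-channel gain cancels the drift before the next generation.
def correct(scanned, scanned_ref, true_ref):
    """Rescale scanned RGB pixels so the reference reads true again."""
    gains = [t / s for t, s in zip(true_ref, scanned_ref)]
    return [[round(c * g) for c, g in zip(px, gains)] for px in scanned]

true_ref = (200, 150, 100)                    # embedded reference color
drift = (0.9, 1.05, 0.8)                      # this copier's channel drift
scanned_ref = tuple(round(t * d) for t, d in zip(true_ref, drift))

page = [(120, 80, 60)]                        # original page color
scanned = [[round(c * d) for c, d in zip(page[0], drift)]]
fixed = correct(scanned, scanned_ref, true_ref)
```

Because each generation re-measures the reference against its known true value, the correction is applied per copy and drift does not compound across generations, unlike the open-loop case.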

Collaboration


Dive into Hui Cheng's collaborations.

Top Co-Authors
Lei Cao

University of Mississippi

Zhihai He

University of Missouri
