
Publication


Featured research published by Heidi A. Peterson.


SPIE/IS&T 1992 Symposium on Electronic Imaging: Science and Technology | 1992

Luminance-model-based DCT quantization for color image compression

Albert J. Ahumada; Heidi A. Peterson

A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
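The approach above can be sketched as a quantization-matrix generator in which the visibility threshold of each DCT basis function follows a parabola in log spatial frequency. This is a minimal illustration of the idea, not the paper's model: the parameter values (`t_min`, `f_min`, `k`, `pixels_per_degree`) and the DC handling are placeholder assumptions.

```python
import numpy as np

def dct_thresholds(n=8, pixels_per_degree=32.0, t_min=0.01, f_min=4.0, k=1.5):
    """Sketch of a luminance-threshold matrix for an n x n DCT:
    log10(threshold) is a parabola in log10(radial frequency), with
    minimum t_min at f_min cycles/degree.  All parameters here are
    illustrative placeholders, not fitted values from the paper."""
    idx = np.arange(n)
    # spatial frequency of the i-th 1-D basis function, cycles/degree
    fi = idx * pixels_per_degree / (2.0 * n)
    # radial frequency of basis function (i, j)
    f = np.sqrt(fi[:, None] ** 2 + fi[None, :] ** 2)
    f = np.maximum(f, f_min * 1e-3)      # avoid log(0) at the DC term
    log_t = np.log10(t_min) + k * (np.log10(f) - np.log10(f_min)) ** 2
    t = 10.0 ** log_t
    t[0, 0] = t[0, 1]  # DC is handled separately; reuse the lowest AC threshold
    return t
```

Changing `pixels_per_degree` (viewing distance and pixel spacing) rescales the frequencies, which is how a matrix designed for one display condition can be re-derived for another.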


IS&T/SPIE's Symposium on Electronic Imaging: Science and Technology | 1993

Improved detection model for DCT coefficient quantization

Heidi A. Peterson; Albert J. Ahumada; Andrew B. Watson

A detection model is developed to predict visibility thresholds for discrete cosine transform coefficient quantization error, based on the luminance and chrominance of the error. The model is an extension of a previously proposed luminance-based model, and is based on new experimental data. In addition to the luminance-only predictions of the previous model, the new model predicts the detectability of quantization error in color space directions in which chrominance error plays a major role. This more complete model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, other spatial frequencies (different pixel sizes, viewing distances, and aspect ratios), and other color directions.


9th Computing in Aerospace Conference | 1993

A visual detection model for DCT coefficient quantization

Albert J. Ahumada; Heidi A. Peterson

The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
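The pooling step described above, scaling DCT quantization errors by per-coefficient visibility thresholds and combining them nonlinearly, can be sketched as a Minkowski (beta-norm) pool. The exponent value below is a common choice in this literature, used here only as an assumption:

```python
import numpy as np

def perceptual_error(dct_errors, thresholds, beta=4.0):
    """Scale each DCT quantization error by its visibility threshold
    (giving errors in just-noticeable-difference units), then pool
    nonlinearly over all coefficients with a Minkowski beta-norm."""
    jnd = np.abs(dct_errors) / thresholds
    return float((jnd ** beta).sum() ** (1.0 / beta))
```

With such a pool, a one-dimensional search over a quantization-matrix scale factor can target either a fixed total perceptual error or a fixed bit rate, which is the optimization the abstract describes.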


International Conference on Acoustics, Speech, and Signal Processing | 1995

Image compression using spatial prediction

Ephraim Feig; Heidi A. Peterson; Viresh Ratnakar

This paper describes a new image compression technique, referred to as spatial prediction. Spatial prediction works in a manner similar to fractal-based image compression techniques, and is in fact a result of several experiments that we conducted to gain a better understanding of why fractal compression works. Spatial prediction compresses an image by storing, for each image block, either the quantized discrete cosine transform (DCT) coefficients or the parameters of an affine transformation that constructs the block using another image block from the already encoded portion of the image. This technique does not require contractivity in the affine transformations and performs as well as or better than fractal compression. Spatial prediction does not out-perform pure DCT-based techniques (such as JPEG) in terms of PSNR/bit-rate tradeoff. However, at very low bit rates it results in far fewer blocky artifacts and markedly better visual quality.
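The per-block mode decision can be sketched as follows. For simplicity the "affine transformation" here is only an intensity map `a*ref + b` fitted by least squares, and plain MSE against an error budget stands in for the full rate-distortion choice between prediction and DCT coding; block geometry, the candidate set, and the budget value are all assumptions.

```python
import numpy as np

def affine_fit(ref, target):
    """Least-squares fit target ~ a*ref + b; returns (a, b, mse)."""
    r, t = ref.ravel(), target.ravel()
    var = r.var()
    a = 0.0 if var == 0 else np.cov(r, t, bias=True)[0, 1] / var
    b = t.mean() - a * r.mean()
    mse = float(np.mean((a * r + b - t) ** 2))
    return a, b, mse

def choose_mode(block, encoded_blocks, mse_budget=25.0):
    """Pick a mode for one block: predict it affinely from an
    already-encoded block if some candidate meets the error budget,
    otherwise fall back to transform (DCT) coding."""
    best = min((affine_fit(ref, block) + (i,) for i, ref in enumerate(encoded_blocks)),
               key=lambda x: x[2], default=None)
    if best is not None and best[2] <= mse_budget:
        a, b, _, i = best
        return ('predict', i, a, b)
    return ('transform', None, None, None)
```

Note that nothing constrains `|a| < 1`: unlike fractal coding, the prediction need not be contractive, which is the property the abstract highlights.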


Human Vision, Visual Processing, and Digital Display | 1989

Image Segmentation Using Human Visual System Properties With Applications In Image Compression

Heidi A. Peterson; Sarah A. Rajala; Edward J. Delp

Many image compression techniques involve segmentation of a gray level image. With such techniques, information is extracted that describes the regions in the segmented image, and this information is then used to form a coded version of the image. In this paper we present a region-growing-based segmentation technique that incorporates human visual system properties, and describe the use of this technique in image compression. We also discuss the effect of requantizing a segmented image. Requantization of a segmented image is useful because it can lead to a reduction in the number of bits required to code the description of the regions in the segmented image. This results in a lower data rate. We show that the number of gray levels in a segmented image can be reduced by a factor of at least twelve, without noticeable degradation in the quality of the segmented image. This result is attributable to human visual system properties having to do with contrast sensitivity, and to the fact that requantization of a segmented image does not usually reduce significantly the number of distinct segments in the image. In addition, in this paper we explore the relationship between the number of segments in an image, and the extent of requantization possible before noticeable degradation occurs in the image. Finally, we discuss the impact of the above results on image compression algorithms, and present some experimental results.
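The requantization step can be sketched as uniform mid-rise requantization of the segment gray levels. The factor of twelve comes from the paper's result; everything else (8-bit range, uniform step size) is assumed for illustration.

```python
import numpy as np

def requantize(seg_img, factor=12):
    """Uniformly requantize gray levels by `factor` (256 levels -> ~22),
    representing each bin by its midpoint, clipped to the 8-bit range."""
    img = np.asarray(seg_img)
    return np.minimum((img // factor) * factor + factor // 2, 255)
```

Fewer distinct gray levels means fewer bits per region description, which is the data-rate reduction the abstract refers to.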


global communications conference | 1991

Human visual system properties applied to image segmentation for image compression

Heidi A. Peterson; Sarah A. Rajala; Edward J. Delp

The authors describe a gray-level image segmentation method for use in segmentation-based image compression. The method consists of two steps: a variation of centroid-linkage region growing to perform the initial segmentation of the image, followed by nonlinear filtering to eliminate visually insignificant image segments. Both steps take advantage of human visual system properties to improve allocation of image segments. Subjective experiments have been conducted to determine the interactions and optimum balance between the steps. It is shown that the proposed two-step approach produces substantially better-quality segmented images than region growing used alone.
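A minimal sketch of the first step, centroid-linkage region growing, assuming a raster scan and a single merge threshold (the threshold value is illustrative); the second step, nonlinear filtering of visually insignificant segments, is omitted here.

```python
import numpy as np

def centroid_linkage_segment(img, thresh=10.0):
    """Centroid-linkage region growing: visiting pixels in raster order,
    join each pixel to the adjacent (left or upper) region whose running
    mean (centroid) is closest and within `thresh`; otherwise start a
    new region.  Returns a label map and the per-region means."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    means, counts = [], []
    for y in range(h):
        for x in range(w):
            v = float(img[y, x])
            cands = []
            if x > 0: cands.append(labels[y, x - 1])
            if y > 0: cands.append(labels[y - 1, x])
            best = min((l for l in cands if abs(means[l] - v) < thresh),
                       key=lambda l: abs(means[l] - v), default=None)
            if best is None:                 # no compatible neighbor: new region
                labels[y, x] = len(means)
                means.append(v); counts.append(1)
            else:                            # merge and update the centroid
                labels[y, x] = best
                counts[best] += 1
                means[best] += (v - means[best]) / counts[best]
    return labels, means
```

Updating the centroid as pixels join is what distinguishes centroid linkage from single-linkage growing, where each pixel is compared only to its neighbor's value.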


Storage and Retrieval for Image and Video Databases | 1993

An Improved Detection Model for DCT Coefficient Quantization

Heidi A. Peterson; Albert J. Ahumada; Andrew B. Watson


Storage and Retrieval for Image and Video Databases | 1991

Quantization of color image components in the DCT domain

Heidi A. Peterson; Huei Peng; J. H. Morgan; William B. Pennebaker


Archive | 1993

The visibility of DCT quantization noise

Heidi A. Peterson; Albert J. Ahumada; Andrew B. Watson


Archive | 1996

Motion video compression system with novel adaptive quantization

Elliot Neil Linzer; Heidi A. Peterson

Collaboration


Dive into Heidi A. Peterson's collaborations.

Top Co-Authors

Andrew B. Watson
Cedars-Sinai Medical Center

Sarah A. Rajala
North Carolina State University

Walter Bender
Massachusetts Institute of Technology