Robert R. Estes
University of California, Davis
Publications
Featured research published by Robert R. Estes.
Optical Engineering | 1996
Jian Lu; V. Ralph Algazi; Robert R. Estes
We compare several wavelet-based coders in the encoding of still images. Two image quality metrics are used in our comparative study: a perception-based, quantitative picture quality scale and the conventional distortion measure, peak signal-to-noise ratio. Coders are evaluated in the rate-distortion sense. The effects of different wavelets, quantizers, and encoders are assessed individually. Two representative wavelets, three quantizers, three encoders, and the combinations of these components are compared. Our results provide insight into the design issues of optimizing wavelet coders, as well as a good reference for application developers choosing from an increasingly large family of wavelet coders for their applications.
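PQS is a calibrated perceptual scale and is not reproduced here, but the conventional PSNR measure named in the abstract is easy to illustrate. A minimal sketch, assuming 8-bit grayscale images held as NumPy arrays (the function name and default peak value are illustrative, not from the paper):

```python
import numpy as np

def psnr(original: np.ndarray, coded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between an original and a coded image."""
    diff = original.astype(np.float64) - coded.astype(np.float64)
    mse = np.mean(diff ** 2)          # mean squared error over all pixels
    if mse == 0.0:
        return float("inf")           # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```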
SPIE's 1995 International Symposium on Optical Science, Engineering, and Instrumentation | 1995
V. Ralph Algazi; Robert R. Estes
Image coding requires an effective representation of images to provide dimensionality reduction, a quantization strategy to maintain quality, and finally the error-free encoding of quantized coefficients. In the coding of quantized coefficients, Huffman coding and arithmetic coding have been used most commonly and are suggested as alternatives in the JPEG standard. In some recent work, zerotree coding has been proposed as an alternative method that considers the dependence of quantized coefficients from subband to subband, and thus appears as a generalization of the context-based approach often used with arithmetic coding. In this paper, we review these approaches and discuss them as special cases of an analysis-based approach to the coding of coefficients. The requirements on causality and computational complexity implied by arithmetic and zerotree coding are studied, and other schemes for the choice of predictive coefficient contexts, suggested by image analysis, are proposed.
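Zerotree coding rests on the observation that when a wavelet coefficient is insignificant at a given threshold, its descendants at the same spatial location in finer subbands tend to be insignificant as well, so a whole subtree can be signaled with a single symbol. A minimal sketch of that significance test, assuming coefficients are held in a simple quadtree; the class and function names are illustrative, not the paper's:

```python
from dataclasses import dataclass, field

@dataclass
class CoeffNode:
    value: float                                   # wavelet coefficient at this position/scale
    children: list = field(default_factory=list)   # coefficients at the same spatial location
                                                   # in the next finer subband

def is_zerotree_root(node: CoeffNode, threshold: float) -> bool:
    """True if this coefficient and all of its descendants are insignificant,
    i.e. the whole subtree can be coded with one zerotree symbol."""
    if abs(node.value) >= threshold:
        return False
    return all(is_zerotree_root(child, threshold) for child in node.children)
```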
IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology | 1995
Adel I. El-Fallah; Gary E. Ford; V. Ralph Algazi; Robert R. Estes
We have recently proposed the use of geometry in image processing by representing an image as a surface in 3-space. The linear variations in intensity (edges) were shown to have a nondivergent surface normal. Exploiting this feature, we introduced a nonlinear adaptive filter that only averages the divergence in the direction of the surface normal. This led to an inhomogeneous diffusion (ID) that averages the mean curvature of the surface, rendering edges invariant while removing noise. This mean curvature diffusion (MCD), when applied to an isolated edge embedded in additive Gaussian noise, results in complete noise removal and edge enhancement, with the edge location left intact. In this paper, we introduce a new filter that renders corners (two intersecting edges), as well as edges, invariant to the diffusion process. Because many edges in images are not isolated, the corner model better represents the image than the edge model. For this reason, this new filtering technique, while encompassing MCD, also outperforms it when applied to images. Many applications will benefit from this geometrical interpretation of image processing; those discussed in this paper include image noise removal, edge and/or corner detection and enhancement, and perceptually transparent coding.
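For context, treating the image as the graph surface z = I(x, y), the mean curvature averaged by the diffusion has the standard differential-geometric form below; this is the textbook expression for a graph surface, quoted as background rather than taken from the paper.

```latex
H \;=\; \frac{(1 + I_y^2)\,I_{xx} \;-\; 2\,I_x I_y\,I_{xy} \;+\; (1 + I_x^2)\,I_{yy}}
             {2\,\bigl(1 + I_x^2 + I_y^2\bigr)^{3/2}}
```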
International Conference on Acoustics, Speech, and Signal Processing | 1992
Gary E. Ford; Robert R. Estes; Hong Chen
In image processing operations involving changes in the sampling grid, including increases or decreases in resolution, care must be taken to preserve image structure. Structure includes regions of high contrast, such as edges, streaks, or corners, all indicated by large gradients. The authors consider the use of local spatial analysis for both sampling and interpolation. Anisotropic diffusion is considered as a directional smoothing technique that preserves structure. This is used in conjunction with a method of directional interpolation, which is also based on structure analysis.
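Perona-Malik diffusion is one widely used realization of the structure-preserving, directional smoothing described here; the authors' exact formulation may differ. A minimal single-iteration sketch on a 2-D float image, with illustrative parameter values:

```python
import numpy as np

def anisotropic_diffusion_step(img: np.ndarray, kappa: float = 20.0, lam: float = 0.2) -> np.ndarray:
    """One explicit Perona-Malik update: smooth strongly in flat regions,
    weakly across large gradients (edges, streaks, corners)."""
    # Finite differences to the four nearest neighbors
    n = np.roll(img, -1, axis=0) - img
    s = np.roll(img,  1, axis=0) - img
    e = np.roll(img, -1, axis=1) - img
    w = np.roll(img,  1, axis=1) - img
    # Edge-stopping conduction coefficients: small where the gradient is large
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return img + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
```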
SPIE's 1995 International Symposium on Optical Science, Engineering, and Instrumentation | 1995
V. Ralph Algazi; Gary E. Ford; Adel I. El-Fallah; Robert R. Estes
In previous work, we reported on the benefits of noise reduction prior to coding of very high quality images. Perceptual transparency can be achieved with a significant improvement in compression as compared to error-free coding. In this paper, we examine the benefits of preprocessing when the quality requirements are not very high and perceptible distortion results. The use of data-dependent anisotropic diffusion that maintains image structure, edges, and transitions in luminance or color is beneficial in controlling the spatial distribution of errors introduced by coding. Thus, the merit of preprocessing lies in the control of coding errors. In this preliminary study, we only consider preprocessing prior to the use of the standard JPEG and MPEG coding techniques.
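A minimal sketch of the preprocess-then-code pipeline the abstract describes, using a plain Gaussian smoother as a stand-in for the data-dependent anisotropic diffusion and Pillow's standard JPEG encoder; the filter choice, sigma, and quality setting are illustrative assumptions, not the paper's method:

```python
import io
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def preprocess_and_encode(img: np.ndarray, sigma: float = 0.8, quality: int = 75) -> bytes:
    """Denoise an 8-bit grayscale image, then JPEG-encode it; returns the bitstream."""
    smoothed = gaussian_filter(img.astype(np.float64), sigma=sigma)  # stand-in preprocessor
    buf = io.BytesIO()
    Image.fromarray(np.clip(smoothed, 0, 255).astype(np.uint8)).save(
        buf, format="JPEG", quality=quality)
    return buf.getvalue()
```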
Signal Processing | 1998
V. Ralph Algazi; Niranjan Avadhanam; Robert R. Estes
Traditional quality measures for image coding, such as the peak signal-to-noise ratio, assume that the preservation of the original image is the desired goal. However, pre-processing images prior to encoding, designed to remove noise or unimportant detail, can improve the overall performance of an image coder. Objective image quality metrics obtained from the difference between the original and coded images cannot properly assess this improved performance. This paper proposes a new methodology for quality metrics that differentially weighs the changes in the image due to pre-processing and encoding. These new quality measures establish the value of pre-processing for image coding and quantitatively determine the performance improvement that can thus be achieved by JPEG and wavelet coders.
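One way to picture such a differentially weighted measure: with original image I, pre-processed image I_p, and coded image I_c, the change due to pre-processing and the change due to coding are penalized separately. The form below is an illustrative assumption for exposition, not the paper's actual metric:

```latex
D(I, I_p, I_c) \;=\; \alpha\, d\!\left(I, I_p\right) \;+\; \beta\, d\!\left(I_p, I_c\right),
\qquad \alpha < \beta,
```

where d(·, ·) is a per-pixel distortion and the smaller weight on the first term reflects that changes introduced by the pre-processor (noise, unimportant detail) matter less perceptually than coding errors.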
Electronic Imaging: Science and Technology | 1996
V. Ralph Algazi; Gary E. Ford; Hong Chen; Robert R. Estes
Coding techniques, such as JPEG and MPEG, result in visibly degraded images at high compression. The coding artifacts are strongly correlated with image features and result in objectionable structured errors. Among structured errors, the reduction of the end-of-block effect in JPEG encoding has received recent attention, with advantage being taken of the known location of block boundaries. However, end-of-block errors are not apparent in subband or wavelet coded images. Even for JPEG coding, end-of-block errors are not perceptible at moderate compression, while other artifacts are still quite apparent and disturbing. In previous work, we have shown that the quality of images can in general be improved by analysis-based processing and interpolation. In this paper, we present a new approach that addresses the reduction of end-of-block errors as well as other visible artifacts that persist at high image quality. We demonstrate that a substantial improvement of image quality is possible by analysis-based post-processing.
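The end-of-block artifacts arise because JPEG codes each 8 x 8 block independently, so the known boundary positions can be targeted directly. A much-simplified stand-in for the paper's analysis-based post-processing is sketched below, assuming an 8-bit grayscale image; the averaging scheme is an assumption:

```python
import numpy as np

def deblock(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Soften 8x8 block-boundary discontinuities by pulling the two pixel rows/columns
    on either side of each block edge toward their common average."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(block, h, block):                      # horizontal block boundaries
        avg = (out[y - 1, :] + out[y, :]) / 2.0
        out[y - 1, :] = (out[y - 1, :] + avg) / 2.0
        out[y, :] = (out[y, :] + avg) / 2.0
    for x in range(block, w, block):                      # vertical block boundaries
        avg = (out[:, x - 1] + out[:, x]) / 2.0
        out[:, x - 1] = (out[:, x - 1] + avg) / 2.0
        out[:, x] = (out[:, x] + avg) / 2.0
    return np.clip(out, 0, 255).astype(img.dtype)
```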
Electronic Imaging | 1997
V. Ralph Algazi; Robert R. Estes
In recent work, we have examined the performance of wavelet coders using a perceptually relevant image quality metric, the picture quality scale (PQS). In that study, we considered some of the design options available with respect to the choice of wavelet basis, quantizer, and method for error-free encoding of the quantized coefficients, including the EZW methodology. A specific combination of these design options provides the best trade-off between performance and PQS quality. Here, we extend this comparison by evaluating the performance of JPEG and the previously chosen optimal wavelet scheme, focusing principally on the high quality range.
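A rate-distortion comparison of this kind can be assembled by sweeping a coder's quality setting and recording (rate, distortion) pairs. A minimal sketch using Pillow's JPEG encoder with PSNR as the distortion measure; the quality range is an assumption, and the PQS metric itself is not reproduced here:

```python
import io
import numpy as np
from PIL import Image

def jpeg_rd_curve(img: np.ndarray, qualities=range(10, 96, 5)):
    """Return (bits-per-pixel, PSNR in dB) points for an 8-bit grayscale image
    over a range of JPEG quality settings."""
    points = []
    for q in qualities:
        buf = io.BytesIO()
        Image.fromarray(img).save(buf, format="JPEG", quality=q)
        coded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
        bpp = 8.0 * len(buf.getvalue()) / img.size           # compressed bits per pixel
        mse = np.mean((img.astype(np.float64) - coded.astype(np.float64)) ** 2)
        points.append((bpp, 10.0 * np.log10(255.0 ** 2 / max(mse, 1e-12))))
    return points
```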
SPIE's 1994 International Symposium on Optics, Imaging, and Instrumentation | 1994
V. Ralph Algazi; Gary E. Ford; Robert R. Estes; Adel I. El-Fallah; Azfar Najmi
In the perceptually transparent coding of images, we use representation and quantization strategies that exploit properties of human perception to obtain an approximate digital image indistinguishable from the original. This image is then encoded in an error-free manner. The resulting coders have better performance than error-free coding at comparable quality. Further, by considering changes to images that do not produce perceptible distortion, we identify image characteristics that are onerous for the encoder but perceptually unimportant. One such characteristic is the typical noise level, often imperceptible, encountered in still images. Thus, we consider adaptive noise removal to improve coder performance without perceptible degradation of quality. In this paper, several elements contribute to coding efficiency while preserving image quality: adaptive noise removal, additive decomposition of the image with a high activity remainder, coarse quantization of the remainder, progressive representation of the remainder using bilinear or directional interpolation methods, and efficient encoding of the sparse remainder. The overall coding performance improvement due to noise removal and the use of a progressive code is about 18%, as compared to our previous results for perceptually transparent coders. The compression ratio for a set of nine test images is 3.72 with no perceptible loss of quality.
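The additive decomposition mentioned here splits the image into a smooth component plus a high-activity remainder that tolerates coarse quantization. A minimal sketch, with a Gaussian smoother standing in for the paper's adaptive filtering and an assumed uniform quantizer step size:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_and_quantize(img: np.ndarray, sigma: float = 1.0, step: float = 8.0):
    """Split an image into a smooth base plus a high-activity remainder and
    coarsely quantize the remainder; returns (base, quantized_remainder)."""
    base = gaussian_filter(img.astype(np.float64), sigma=sigma)   # smooth component
    remainder = img.astype(np.float64) - base                     # high-activity residual
    q_remainder = step * np.round(remainder / step)               # coarse uniform quantizer
    return base, q_remainder
```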
SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995
Jian Lu; V. Ralph Algazi; Robert R. Estes
Image coding is one of the most visible applications of wavelets. There has been an increasing number of reports each year since the late 1980s on the design of new wavelet coders and variations to existing ones. In this paper, we report some results from our comparative study of wavelet image coders using a perception-based, quantitative picture quality scale as the distortion measure. Coders are evaluated in the rate-distortion sense; the influences of different wavelets, quantizers, and encoders are assessed individually. Our results provide insight into the design issues of optimizing wavelet coders, as well as a good reference for application developers choosing from an increasingly large family of wavelet coders for their applications.