Publication


Featured research published by Karen L. Oehler.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1995

Combining image compression and classification using vector quantization

Karen L. Oehler; Robert M. Gray

We describe a method of combining classification and compression into a single vector quantizer by incorporating a Bayes risk term into the distortion measure used in the quantizer design algorithm. Once trained, the quantizer can operate to minimize the Bayes risk weighted distortion measure if there is a model providing the required posterior probabilities, or it can operate in a suboptimal fashion by minimizing the squared error only. Comparisons are made with other vector quantizer based classifiers, including the independent design of quantization and minimum Bayes risk classification and Kohonen's LVQ. A variety of examples demonstrate that the proposed method can provide classification ability close to or superior to learning VQ while simultaneously providing superior compression performance.
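The central idea, a Bayes risk term folded into the quantizer's distortion measure, can be sketched in a few lines. The code below is an illustration under assumed names (posteriors, cost, lam, codeword_classes), not the authors' implementation; as the abstract notes, when posterior probabilities are unavailable the encoder can fall back to the squared-error term alone.

import numpy as np

def bayes_weighted_distortion(x, codeword, codeword_class, posteriors, cost, lam):
    # Squared-error term: ordinary compression distortion.
    squared_error = np.sum((x - codeword) ** 2)
    # Bayes risk term: expected cost of the class label implied by this codeword,
    # taken over the posterior class probabilities of the input block.
    bayes_risk = np.sum(posteriors * cost[:, codeword_class])
    # lam trades off compression quality against classification accuracy.
    return squared_error + lam * bayes_risk

def encode_block(x, codebook, codeword_classes, posteriors, cost, lam):
    # Choose the codeword that minimizes the Bayes-risk-weighted distortion.
    dists = [bayes_weighted_distortion(x, c, k, posteriors, cost, lam)
             for c, k in zip(codebook, codeword_classes)]
    return int(np.argmin(dists))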


Proceedings of the IEEE | 1993

Using vector quantization for image processing

Pamela C. Cosman; Karen L. Oehler; Eve A. Riskin; Robert M. Gray

A review is presented of vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, which is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing the tasks simultaneously with the compression. The fundamental ideas of vector quantization are explained, and vector quantization algorithms that perform image processing are surveyed.
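For readers unfamiliar with the basic operation being surveyed, the following minimal sketch shows plain full-search VQ: each pixel-intensity vector is mapped to the index of its nearest codeword, and decoding is a table lookup. Codebook training (e.g., by a generalized Lloyd iteration) is omitted, and the array shapes are assumptions made for the example.

import numpy as np

def vq_encode(blocks, codebook):
    # blocks:   (N, d) array of pixel-intensity vectors (image blocks, flattened)
    # codebook: (K, d) array of reproduction vectors, assumed already trained
    # Returns the index of the nearest codeword for each block -- the indices
    # form the compressed representation.
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    # Table lookup: replace each index with its reproduction vector.
    return codebook[indices]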


Data Compression Conference | 1993

Combining image classification and image compression using vector quantization

Karen L. Oehler; Robert M. Gray

The goal is to produce codes where the compressed image incorporates classification information without further signal processing. This technique can provide direct low level classification or an efficient front end to more sophisticated full-frame recognition algorithms. Vector quantization is a natural choice because two of its design components, clustering and tree-structured classification methods, have obvious applications to the pure classification problem as well as to the compression problem. The authors explicitly incorporate a Bayes risk component into the distortion measure used for code design in order to permit a tradeoff of mean squared error with classification error. This method is used to analyze simulated data, identify tumors in computerized tomography lung images, and identify man-made regions in aerial images.


International Conference on Acoustics, Speech, and Signal Processing | 1993

Mean-gain-shape vector quantization

Karen L. Oehler; Robert M. Gray

A mean-gain-shape product code which obtains the minimum distortion reproduction vector by successive encoding in each of the three codebooks is presented. Pruned tree-structured vector quantizers (PTSVQs) are used to provide variable rate codes at low encoding complexity. Simultaneous pruning of the three codebooks provides optimal bit allocation. Prediction and concatenation are used to take advantage of interblock correlation. The results compare favorably with those of other tree-structured VQ methods. The algorithm produces quantized images of good quality with low encoding complexity and reduced memory requirements.
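The mean-gain-shape decomposition behind such a product code can be sketched as three successive encoding stages. This is a minimal illustration with full-search codebooks and made-up names; the pruned tree structuring, joint bit allocation, and interblock prediction described in the abstract are omitted.

import numpy as np

def mgs_encode(x, mean_codebook, gain_codebook, shape_codebook):
    # Stage 1: quantize the block mean with a scalar codebook.
    mean = x.mean()
    i_mean = int(np.abs(mean_codebook - mean).argmin())
    # Stage 2: quantize the gain (norm) of the mean-removed residual.
    residual = x - mean_codebook[i_mean]
    gain = np.linalg.norm(residual)
    i_gain = int(np.abs(gain_codebook - gain).argmin())
    # Stage 3: quantize the unit-norm shape with a vector codebook.
    shape = residual / gain if gain > 0 else residual
    i_shape = int(((shape_codebook - shape) ** 2).sum(axis=1).argmin())
    return i_mean, i_gain, i_shape

def mgs_decode(indices, mean_codebook, gain_codebook, shape_codebook):
    i_mean, i_gain, i_shape = indices
    # Reproduction is mean + gain * shape, one index per codebook.
    return mean_codebook[i_mean] + gain_codebook[i_gain] * shape_codebook[i_shape]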


International Conference on Image Processing | 1997

A simple rate-distortion model, parameter estimation, and application to real-time rate control for DCT-based coders

Jennifer L. H. Webb; Karen L. Oehler

This paper describes a simple rate-distortion model, shows how the model parameters can be estimated using Kalman filtering, and suggests how this information may be used for rate control in real-time systems. Real-time parameter estimation is more robust with the simplified model, and the model yields useful insights. Also, the suggested Kalman filter is computationally efficient, and adapts to changes in the video sequence. Several uses of the model and parameter estimates are suggested for real-time rate control.
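The paper's actual rate-distortion model and filter are not reproduced here; as an illustration of the approach, the sketch below tracks a single parameter theta of a hypothetical model R ≈ theta / Q with a scalar Kalman filter, using the bits produced at each coding step as the measurement. All names, the model form, and the noise variances are assumptions for the example.

class ScalarKalman:
    # Tracks theta in a hypothetical rate model: bits ~= theta / Q.
    # This is an illustrative stand-in, not the model defined in the paper.

    def __init__(self, theta0=1000.0, var0=1e6, process_var=10.0, meas_var=100.0):
        self.theta = theta0             # current parameter estimate
        self.var = var0                 # variance of the estimate
        self.process_var = process_var  # how fast theta may drift between updates
        self.meas_var = meas_var        # measurement (bit-count) noise variance

    def update(self, bits, q):
        # Predict: assume theta drifts slowly (random walk).
        self.var += self.process_var
        # Correct with the observed bit count: bits = (1/q) * theta + noise.
        h = 1.0 / q
        gain = self.var * h / (h * self.var * h + self.meas_var)
        self.theta += gain * (bits - h * self.theta)
        self.var *= (1.0 - gain * h)
        return self.theta

    def choose_q(self, target_bits):
        # Invert the model to pick Q for a bit budget, clamped to a typical 1..31 range.
        q = self.theta / max(target_bits, 1.0)
        return int(min(max(round(q), 1), 31))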


Asilomar Conference on Signals, Systems and Computers | 1991

Classification using vector quantization

Karen L. Oehler; Pamela C. Cosman; Robert M. Gray; J. May

The authors describe a simple technique for combining vector quantization and low level classification of images. The goal is to classify automatically certain simple features in an image as part of the compression process to enhance their appearance in the reconstructed image. Images in the training sequence are divided into blocks and each block is classified into a particular class by a human observer. This knowledge is used when designing the codebook so that both small average distortion and accurate implicit classification are achieved. The codebook can also be designed to have different average distortions for the different classes. The technique is a variation on a variable rate tree-structured vector quantizer which is grown by splitting a single terminal node at each iteration. The splitting criterion selection allows tradeoffs among compression rate, distortion, and misclassification rate.
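The greedy growing step can be illustrated by the leaf-selection rule alone. The callables delta_distortion, delta_misclass, delta_rate and the weight gamma below are hypothetical names introduced for the sketch; the paper's actual splitting criterion may combine these quantities differently.

def choose_leaf_to_split(leaves, delta_distortion, delta_misclass, delta_rate, gamma):
    # delta_distortion(leaf), delta_misclass(leaf): estimated decreases in average
    # distortion and misclassification rate if this terminal node is split.
    # delta_rate(leaf): the increase in average rate the split would cost.
    # gamma weights classification accuracy against squared error.
    def benefit_per_bit(leaf):
        return (delta_distortion(leaf) + gamma * delta_misclass(leaf)) / delta_rate(leaf)
    # Split the single terminal node with the best improvement per bit.
    return max(leaves, key=benefit_per_bit)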


Data Compression Conference | 1997

Region-based video coding with embedded zero-trees

Jie Liang; Iole Moccagatta; Karen L. Oehler

Summary form only given. In this paper, we describe a region-based video coding algorithm that is currently under investigation for inclusion in the emerging MPEG-4 standard. This algorithm was incorporated in a submission that scored highly in the MPEG-4 subjective tests of November 1995 (Talluri et al. 1997). Good coding efficiency is achieved by combining motion-segmented region-based coding with Shapiro's embedded zero-tree wavelet (EZW) method.


International Conference on Image Processing | 1997

Macroblock quantizer selection for H.263 video coding

Karen L. Oehler; Jennifer L. H. Webb

We consider two MSE-based strategies for selecting the best quantization parameter (Q) for each coded macroblock in H.263 video coding. In the first study, Q was adjusted to match a target mean-squared error (MSE) for each block. The goal was to avoid having large errors in a few macroblocks that detract from the overall perceived image quality. Effectively, we minimized the peak MSE over all macroblocks. In the second study, we sought to minimize the average MSE, modifying a gradient-search technique that has been applied to H.261. The choice of quantizers was limited by the H.263 syntactical constraints on changes in Q between successively-coded macroblocks. The results were compared with fixed Q encoding. The resulting sequences displayed only small perceptual differences. For the cases studied, the primary advantage of varying Q was to control the delay.
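To make the quantizer-change constraint concrete, here is a minimal sketch of target-MSE quantizer selection under the limit that Q may change by at most +/-2 between successively coded macroblocks (the H.263 DQUANT mechanism). mse_model, target_mse, and q_first are hypothetical names; the gradient-search method of the second study is not shown.

def select_macroblock_q(mse_model, n_macroblocks, target_mse, q_first=10):
    # mse_model(mb, q): hypothetical callable estimating the MSE of macroblock mb
    # when coded with quantizer Q in 1..31.
    # Each choice is clamped to the window reachable from the previous
    # macroblock's Q, reflecting the +/-2 DQUANT constraint.
    qs = []
    q_prev = q_first
    for mb in range(n_macroblocks):
        lo, hi = max(1, q_prev - 2), min(31, q_prev + 2)
        # Among the reachable values, take the Q whose predicted MSE is closest
        # to the target -- a simple stand-in for limiting the peak MSE.
        q = min(range(lo, hi + 1), key=lambda cand: abs(mse_model(mb, cand) - target_mse))
        qs.append(q)
        q_prev = q
    return qs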


International Conference on Image Processing | 1996

Region-based wavelet compression for very low bit rate video coding

Karen L. Oehler

We present a region-based wavelet compression method for encoding video sequences at very low bit rates suitable for videophone applications. By selectively coding texture information in the regions of interest, we avoid spending bits in the unimportant regions and improve video quality. The embedded zerotree wavelet algorithm is extended to efficiently represent the wavelet coefficients within the regions of interest while exploiting redundancy between subbands. The locations of the regions of interest are selected during the motion estimation process and are transmitted as part of the macroblock type. This technique can provide better quality compared to the new H.263 videophone compression standard. This technique was also incorporated in a more general object-based compression scheme which obtained high scores during the recent MPEG-4 subjective quality evaluations.


Visual Communications and Image Processing | 1991

Tree-structured vector quantization with input-weighted distortion measures

Pamela C. Cosman; Karen L. Oehler; Amanda A. Heaton; Robert M. Gray

A greedy tree-growing algorithm is used in conjunction with an input-dependent weighted distortion measure to develop a tree-structured vector quantizer. Vectors in the training set are classified, and weights are assigned to the classes. The resulting weighted distortion measure forces the tree to develop better representations for those classes that are considered important. Results on medical images and USC database images are presented. A tree-structured vector quantizer grown in a similar manner can be used for preliminary classification as well as compression.
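The input-dependent weighting amounts to scaling the squared error by a per-class weight, as in the sketch below; class_weight and the other names are illustrative, and the weights themselves are a design choice rather than values from the paper.

import numpy as np

def weighted_distortion(block, codeword, block_class, class_weight):
    # Ordinary squared error, scaled by the weight of the input block's class.
    # Larger weights force the growing tree to represent that class more faithfully.
    return class_weight[block_class] * np.sum((block - codeword) ** 2)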

Collaboration


Dive into Karen L. Oehler's collaborations.

Eve A. Riskin

University of Washington

J. May

Stanford University
