Sharon M. Perlmutter
Stanford University
Publications
Featured research published by Sharon M. Perlmutter.
Signal Processing | 1997
Sharon M. Perlmutter; Pamela C. Cosman; Robert M. Gray; Richard A. Olshen; Debra M. Ikeda; C. N. Adams; Bradley J. Betts; Mark B. Williams; Keren Perlmutter; Jia Li; Anuradha K. Aiyer; Laurie L. Fajardo; Robyn L. Birdwell; Bruce L. Daniel
The substitution of digital representations for analog images provides access to methods for digital storage and transmission and enables the use of a variety of digital image processing techniques, including enhancement and computer assisted screening and diagnosis. Lossy compression can further improve the efficiency of transmission and storage and can facilitate subsequent image processing. Both digitization (or digital acquisition) and lossy compression alter an image from its traditional form, and hence it becomes important that any such alteration be shown to improve or at least not damage the utility of the image in a screening or diagnostic application. One approach to demonstrating in a quantifiable manner that a specific image mode is at least equal to another is by clinical experiment simulating ordinary practice and suitable statistical analysis. In this paper we describe a general protocol for performing such a verification and present preliminary results of a specific experiment designed to show that 12 bpp digital mammograms compressed in a lossy fashion to 0.015 bpp using an embedded wavelet coding scheme result in no significant differences from the analog or digital originals.
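The embedded wavelet coder itself is not reproduced here; as a hedged illustration of the underlying idea (trading bit rate for fidelity by discarding small transform coefficients), the following sketch uses a toy single-level Haar transform. The haar2d/ihaar2d helpers, the keep_frac knob, and the synthetic 12-bit test image are assumptions for demonstration only, not the coder evaluated in the paper.

```python
import numpy as np

def haar2d(x):
    """One level of a 2D Haar transform (expects even image dimensions)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # rows
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    x = np.hstack([lo, hi])
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # columns
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])

def ihaar2d(c):
    """Inverse of haar2d."""
    h, w = c.shape
    lo, hi = c[:h // 2, :], c[h // 2:, :]
    x = np.empty_like(c)
    x[0::2, :] = (lo + hi) / np.sqrt(2)
    x[1::2, :] = (lo - hi) / np.sqrt(2)
    lo, hi = x[:, :w // 2].copy(), x[:, w // 2:].copy()
    x[:, 0::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def lossy_round_trip(image, keep_frac=0.1):
    """Toy lossy round trip: transform, keep only the largest coefficients,
    zero the rest, invert. keep_frac stands in for the bit-rate knob."""
    c = haar2d(image.astype(float))
    thresh = np.quantile(np.abs(c), 1.0 - keep_frac)
    return ihaar2d(np.where(np.abs(c) >= thresh, c, 0.0))

rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(64, 64)).astype(float)  # 12 bpp value range
rec = lossy_round_trip(img, keep_frac=0.1)
mse = np.mean((img - rec) ** 2)
print(f"PSNR after toy wavelet truncation: {10 * np.log10(4095.0 ** 2 / mse):.1f} dB")
```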
Asilomar Conference on Signals, Systems and Computers | 1991
Pamela C. Cosman; Keren Perlmutter; Sharon M. Perlmutter; Richard A. Olshen; Robert M. Gray
The authors examined vector quantizer performance as a function of training sequence size for tree-structured and full-search vector quantizers. The performance was measured by the mean-squared error between the input image and the quantizer output at a given bit rate. The training sequence size was measured either by the number of training images or by the number of training vectors. When the training vectors were counted, they were selected randomly from among the training images. For every training sequence size, vector quantizers were developed from several different training sequences, and the distortion was calculated for different test sequences in a cross validation procedure. Preliminary results suggest that plots of distortion vs. number of training images follow an algebraic decay, as expected from analogous results of learning theory.
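A minimal way to reproduce the shape of such an experiment, assuming synthetic Gaussian blocks in place of image data and scikit-learn's KMeans as the full-search codebook design, is to sweep the number of training vectors and measure distortion on held-out test vectors. The sizes and data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
dim, codebook_size = 16, 64             # e.g. 4x4 blocks, 6-bit codebook (assumed)
data = rng.normal(size=(20000, dim))    # stand-in for image blocks
train, test = data[:16000], data[16000:]

for n_train in [500, 1000, 2000, 4000, 8000, 16000]:
    # Design a full-search codebook on a random training subset of the given size.
    idx = rng.choice(len(train), size=n_train, replace=False)
    codebook = KMeans(n_clusters=codebook_size, n_init=4,
                      random_state=0).fit(train[idx]).cluster_centers_
    # Encode test vectors by nearest-codeword search and measure the distortion.
    d2 = ((test[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    mse = d2.min(axis=1).mean() / dim
    print(f"{n_train:6d} training vectors -> per-sample MSE {mse:.4f}")
```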
International Conference on Image Processing | 1994
Sharon M. Perlmutter; Chien-Wen Tseng; Pamela C. Cosman; King C.P. Li; Richard A. Olshen; Robert M. Gray
We investigated the effects of lossy image compression on measurement accuracy in magnetic resonance images. Thirty chest scans were compressed to five different levels using predictive pruned tree-structured vector quantization (predictive PTSVQ). Three radiologists measured the diameters of the four principal blood vessels on each image. Errors were analyzed relative to both an independent standard and personal performance on uncompressed images. Data were compared with both t and Wilcoxon tests. We conclude that for the purpose of measuring blood vessels in the chest, there is no significant difference in measurement accuracy when images are compressed up to 16:1 with predictive PTSVQ.
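The statistical comparison can be sketched as a paired analysis of absolute measurement errors against an independent standard, using SciPy's paired t-test and Wilcoxon signed-rank test. The simulated readings below are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical vessel-diameter measurements (mm): an independent standard plus
# reader measurements on original and on 16:1 compressed images.
gold = rng.uniform(8.0, 30.0, size=30)
meas_orig = gold + rng.normal(0.0, 0.6, size=30)   # readings on original images
meas_comp = gold + rng.normal(0.0, 0.6, size=30)   # readings on compressed images

# Paired comparison of absolute errors relative to the independent standard.
err_orig = np.abs(meas_orig - gold)
err_comp = np.abs(meas_comp - gold)
t_stat, t_p = stats.ttest_rel(err_comp, err_orig)
w_stat, w_p = stats.wilcoxon(err_comp - err_orig)
print(f"paired t-test  p = {t_p:.3f}")
print(f"Wilcoxon test  p = {w_p:.3f}")
```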
Asilomar Conference on Signals, Systems and Computers | 1992
Sharon M. Perlmutter; Keren Perlmutter; Pamela C. Cosman; Eve A. Riskin; Richard A. Olshen; Robert M. Gray
Unbalanced or pruned tree-structured vector quantization (PTSVQ), a variable-rate coding technique that tends to use more bits to code active regions of the image and fewer to code homogeneous ones, is developed based on a training sequence of typical images. A regression tree algorithm is used to segment the images of the training sequence using the x, y pixel location as a predictor for the intensity. This segmentation is used to partition the training data by region and generate separate codebooks for each region, and to allocate differing numbers of bits to the regions. Region-based classification requires no side information, as the decoder knows where in the image the current encoded block originated. These methods can enhance the perceptual quality of compressed images when compared with ordinary PTSVQ. Results for magnetic resonance data are shown.
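A rough sketch of the region idea, assuming scikit-learn's DecisionTreeRegressor as the regression tree and a synthetic image, shows how the leaves of a tree grown on (x, y) coordinates partition the image into regions that both encoder and decoder can recover from position alone, so no side information is needed to select a regional codebook. The image, tree size, and region count below are assumptions for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
# Toy image: a bright "active" disc on a dark, homogeneous background.
image = 50.0 + 150.0 * (((xx - 32) ** 2 + (yy - 40) ** 2) < 15 ** 2) + rng.normal(0, 5, (h, w))

# Regression tree predicting intensity from (x, y); its leaves define the regions.
coords = np.column_stack([xx.ravel(), yy.ravel()])
tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0).fit(coords, image.ravel())
region_of_pixel = tree.apply(coords).reshape(h, w)   # leaf index for every pixel

# The decoder holds the same tree, so the region (and hence which codebook and
# bit allocation applies) follows from block position alone.
for leaf in np.unique(region_of_pixel):
    mask = region_of_pixel == leaf
    print(f"region {leaf}: {mask.mean():5.1%} of pixels, "
          f"mean intensity {image[mask].mean():6.1f}")
```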
International Conference on Image Processing | 1994
Sharon M. Perlmutter; Robert M. Gray
A novel algorithm is described for constructing a progressive, multiresolution compression code. The codec consists of nested levels of tree-structured vector quantizers (TSVQs) where the codebook for each level of the nested TSVQs is constructed from the terminal leaves of the TSVQ from the previous level. In order to generate a multiresolution output in a progressive manner, the codeword dimension at each level's TSVQ is greater than or equal to that of the previous level. Pruning is performed on the nested TSVQs to achieve the bit allocation across the levels. The resulting pruned TSVQ provides a multiresolution output with low computational complexity at the decoder while simultaneously providing superior performance to ordinary pruned TSVQ at low bit rates.
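The nested construction is not reproduced here, but the progressive property of a tree-structured code can be illustrated with a plain binary TSVQ grown by recursive 2-means splits, where decoding any prefix of the path bits yields a usable, coarser reconstruction. The tree depth, synthetic data, and helper functions below are assumptions for illustration, not the paper's nested variable-dimension design.

```python
import numpy as np
from sklearn.cluster import KMeans

def grow_tsvq(vectors, depth):
    """Grow a binary TSVQ by recursive 2-means splits; each node keeps its centroid."""
    node = {"centroid": vectors.mean(axis=0), "children": None}
    if depth > 0 and len(vectors) >= 4:
        labels = KMeans(n_clusters=2, n_init=4, random_state=0).fit_predict(vectors)
        if labels.min() != labels.max():          # both children non-empty
            node["children"] = [grow_tsvq(vectors[labels == k], depth - 1) for k in (0, 1)]
    return node

def encode(node, v, bits):
    """Descend the tree for at most `bits` steps; the path bits form a progressive code."""
    path, out = [], node
    for _ in range(bits):
        if out["children"] is None:
            break
        d = [np.sum((v - c["centroid"]) ** 2) for c in out["children"]]
        b = int(np.argmin(d))
        path.append(b)
        out = out["children"][b]
    return path, out["centroid"]

rng = np.random.default_rng(4)
root = grow_tsvq(rng.normal(size=(5000, 4)), depth=6)
v = rng.normal(size=4)
for bits in (1, 3, 6):                            # any prefix refines the estimate
    path, rec = encode(root, v, bits)
    print(bits, "bits -> squared error", round(float(np.sum((v - rec) ** 2)), 3))
```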
International Conference on Computer Vision | 2009
Biswaroop Palit; Rakesh Nigam; Keren Perlmutter; Sharon M. Perlmutter
Recognizing and clustering similar faces are important for organizing digital photos. We present a novel clustering method based on spectral clustering to group faces in a photo album. The main contribution is the proposal of a distance metric that is robust to outlier features present in the facial images.
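The paper's specific metric is not reproduced here. As a hedged sketch, a trimmed distance that discards the largest per-dimension differences (so a few corrupted features do not dominate) can feed scikit-learn's SpectralClustering through a precomputed affinity matrix. The descriptors, trim amount, and affinity kernel below are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(5)
# Hypothetical face descriptors: two identities, each image corrupted in one feature
# (standing in for occlusion or a bad landmark).
feats = np.vstack([rng.normal(0, 1, (20, 32)), rng.normal(4, 1, (20, 32))])
outlier_idx = rng.integers(0, 32, size=len(feats))
feats[np.arange(len(feats)), outlier_idx] += rng.normal(0, 25, size=len(feats))

def robust_distance(a, b, trim=4):
    """Trimmed distance: drop the largest per-dimension squared differences."""
    d = np.sort((a - b) ** 2)
    return np.sqrt(d[:-trim].sum())

n = len(feats)
dist = np.array([[robust_distance(feats[i], feats[j]) for j in range(n)] for i in range(n)])
affinity = np.exp(-(dist ** 2) / (2 * np.median(dist) ** 2))   # similarity from distance

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)   # the two identities separate despite the outlier features
```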
Asilomar Conference on Signals, Systems and Computers | 1995
Keren Perlmutter; Won Tchoi; Sharon M. Perlmutter; Pamela C. Cosman
The use of region-based coding is explored with a wavelet/TSVQ structure. Several methods of generating the segmentation map are discussed, including a recursive segmentation procedure that does not require any side information. The method is investigated on computerized tomographic chest scans, where the images are segmented into three regions: the background, the chest wall region, and the chest organs region. The background is considered of no importance, the chest wall region is considered of low importance, and the chest organs region is considered of high importance. At 0.20 bits per pixel, region-based coding provides a 2.0 dB improvement in the chest organs region at the expense of degradation in the clinically less relevant regions.
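The importance-weighted allocation can be sketched as a greedy assignment of a fixed bit budget across regions using per-region rate-distortion points; the operating points, weights, and budget below are invented placeholders, not measurements from the paper.

```python
# Hypothetical per-region operating points: MSE at 0..5 "bit units" each,
# with importance weights reflecting assumed clinical relevance.
regions = {
    "background":   {"weight": 0.0, "mse": [900, 500, 300, 200, 150, 120]},
    "chest wall":   {"weight": 0.2, "mse": [800, 450, 260, 170, 120,  90]},
    "chest organs": {"weight": 1.0, "mse": [700, 380, 210, 130,  90,  60]},
}
budget = 7  # total bit units to spread across the regions

alloc = {name: 0 for name in regions}
for _ in range(budget):
    # Greedy: spend the next bit unit where it buys the largest weighted MSE drop.
    def gain(name):
        r, b = regions[name], alloc[name]
        return 0.0 if b + 1 >= len(r["mse"]) else r["weight"] * (r["mse"][b] - r["mse"][b + 1])
    alloc[max(regions, key=gain)] += 1

print(alloc)   # most of the budget goes to the clinically important region
```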
Asilomar Conference on Signals, Systems and Computers | 1994
Sharon M. Perlmutter; Robert M. Gray
An algorithm is described for constructing a finite-state compression code that is both progressive and multiresolution. The codec consists of nested levels of tree-structured vector quantizers (TSVQs) where the codebook for each level of the nested TSVQs is constructed from the terminal leaves of the TSVQ from the previous level. The first level of the TSVQ represents a finite-state next state function. The codeword dimension at the subsequent levels is greater than or equal to those of the previous levels. This property allows the codec to produce a multiresolution output in a progressive manner. Pruning is performed on the nested TSVQs to achieve the bit allocation across the levels. The resulting pruned TSVQ decoder operates entirely by successive table lookups, with no arithmetic computation. Furthermore, it provides superior performance to ordinary pruned TSVQ at low bit rates.
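The table-lookup character of such a decoder can be sketched as follows, with randomly filled tables standing in for the designed codebooks and next-state map; the point is only that decoding touches two lookup tables and performs no arithmetic on sample values. All table contents and sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
num_states, codebook_size, dim = 4, 8, 4

# Precomputed decoder tables (what the encoder-side design would produce):
#   recon[s, i]      -> reconstruction vector for index i received in state s
#   next_state[s, i] -> decoder state after emitting that reconstruction
recon = rng.normal(size=(num_states, codebook_size, dim))
next_state = rng.integers(0, num_states, size=(num_states, codebook_size))

def decode(indices, start_state=0):
    """Decode a sequence of received indices purely by successive table lookups."""
    state, out = start_state, []
    for i in indices:
        out.append(recon[state, i])      # lookup: reconstruction vector
        state = next_state[state, i]     # lookup: next decoder state
    return np.vstack(out)

print(decode([3, 1, 7, 0]).shape)        # (4, 4): four decoded vectors
```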
Asilomar Conference on Signals, Systems and Computers | 1994
Keren Perlmutter; Sharon M. Perlmutter; Michelle Effros; Robert M. Gray
A finite-state vector quantizer (FSVQ) is a multicodebook system in which the current state (or codebook) is chosen as a function of the previously quantized vectors. The authors introduce a novel iterative algorithm for joint codebook and next state function design of full search finite-state vector quantizers. They consider the fixed-rate case, for which no optimal design strategy is known. A locally optimal set of codebooks is designed for the training data and then predecessors to the training vectors associated with each codebook are appropriately labelled and used in designing the classifier. The algorithm iterates between next state function and state codebook design until it arrives at a suitable solution. The proposed design consistently yields better performance than the traditional FSVQ design method (under identical state space and codebook constraints).
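A loose sketch of such an alternation, assuming a synthetic correlated source, k-means for per-state codebook design, and a nearest-neighbour classifier as the next-state function (all stand-ins for the paper's actual design choices), looks like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
seq = np.cumsum(rng.normal(size=(6000, 4)), axis=0) * 0.1   # correlated training sequence
prev, curr = seq[:-1], seq[1:]
num_states, codebook_size = 4, 16

# Initialize the next-state function by clustering the predecessor vectors.
state_of = KMeans(n_clusters=num_states, n_init=4, random_state=0).fit_predict(prev)

for it in range(5):
    # (1) For each state, design a codebook from the current vectors that occur in it.
    codebooks = []
    for s in range(num_states):
        pts = curr[state_of == s]
        if len(pts) < codebook_size:               # guard against a nearly empty state
            pts = curr
        codebooks.append(KMeans(n_clusters=codebook_size, n_init=2,
                                random_state=0).fit(pts).cluster_centers_)
    # (2) Label each training vector by the state whose codebook quantizes it best,
    #     then redesign the next-state function as a classifier on the predecessor.
    best_state = np.array([int(np.argmin([np.min(((v - cb) ** 2).sum(axis=1))
                                          for cb in codebooks])) for v in curr])
    clf = KNeighborsClassifier(n_neighbors=15).fit(prev, best_state)
    state_of = clf.predict(prev)
    mse = np.mean([np.min(((curr[i] - codebooks[state_of[i]]) ** 2).sum(axis=1))
                   for i in range(len(curr))])
    print(f"iteration {it}: per-vector MSE {mse:.4f}")
```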
Archive | 2006
Keren Perlmutter; Sharon M. Perlmutter; Joshua Alspector; Mark Everingham; Alex Holub; Andrew Zisserman; Pietro Perona