Michael Lightstone
University of California, Santa Barbara
Publications
Featured research published by Michael Lightstone.
IEEE Transactions on Image Processing | 1996
Eduardo Abreu; Michael Lightstone; Sanjit K. Mitra; Kaoru Arakawa
A new framework for removing impulse noise from images is presented in which the nature of the filtering operation is conditioned on a state variable defined as the output of a classifier that operates on the differences between the input pixel and the remaining rank-ordered pixels in a sliding window. As part of this framework, several algorithms are examined, each of which is applicable to fixed and random-valued impulse noise models. First, a simple two-state approach is described in which the algorithm switches between the output of an identity filter and a rank-ordered mean (ROM) filter. The technique achieves an excellent tradeoff between noise suppression and detail preservation with little increase in computational complexity over the simple median filter. For a small additional cost in memory, this simple strategy is easily generalized into a multistate approach using weighted combinations of the identity and ROM filter in which the weighting coefficients can be optimized using image training data. Extensive simulations indicate that these methods perform significantly better in terms of noise suppression and detail preservation than a number of existing nonlinear techniques with as much as 40% impulse noise corruption. Moreover, the method can effectively restore images corrupted with Gaussian noise and mixed Gaussian and impulse noise. Finally, the method is shown to be extremely robust with respect to the training data and the percentage of impulse noise.
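The two-state switch described above can be sketched in a few lines. This is a simplified illustration, not the paper's exact algorithm: the 3×3 window and the single decision threshold stand in for the trained classifier on rank-ordered differences that the paper actually uses.

```python
import numpy as np

def rom_filter_pixel(window):
    """Rank-ordered mean (ROM) of a window, excluding the center pixel."""
    rest = np.sort(np.delete(window.ravel(), window.size // 2))
    m = len(rest) // 2
    return 0.5 * (rest[m - 1] + rest[m])  # mean of the two middle ranks

def two_state_rom(image, threshold=40.0):
    """Two-state filter: keep the pixel (identity) unless it deviates
    strongly from the ROM of its neighbors, in which case output the ROM.
    The single threshold is an illustrative stand-in for the paper's
    classifier on rank-ordered differences."""
    out = image.astype(float).copy()
    padded = np.pad(image.astype(float), 1, mode='edge')
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]  # center is image[i, j]
            rom = rom_filter_pixel(win)
            if abs(image[i, j] - rom) > threshold:
                out[i, j] = rom  # suspected impulse: switch to ROM output
    return out
```

On a flat region with an isolated impulse, the impulse is replaced by the ROM of its neighbors while clean pixels pass through unchanged, which is the detail-preservation property the abstract emphasizes.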
IEEE Transactions on Circuits and Systems for Video Technology | 1996
Thomas Wiegand; Michael Lightstone; D. Mukherjee; T.G. Campbell; Sanjit K. Mitra
This paper addresses the problem of encoder optimization in a macroblock-based multimode video compression system. An efficient solution is proposed in which, for a given image region, the optimum combination of macroblock modes and the associated mode parameters are jointly selected so as to minimize the overall distortion for a given bit-rate budget. Conditions for optimizing the encoder operation are derived within a rate-constrained product code framework using a Lagrangian formulation. The instantaneous rate of the encoder is controlled by a single Lagrange multiplier that makes the method amenable to mobile wireless networks with time-varying capacity. When rate and distortion dependencies are introduced between adjacent blocks (as is the case when the motion vectors are differentially encoded and/or overlapped block motion compensation is employed), the ensuing encoder complexity is surmounted using dynamic programming. Due to the generic nature of the algorithm, it can be successfully applied to the problem of encoder control in numerous video coding standards, including H.261, MPEG-1, and MPEG-2. Moreover, the strategy is especially relevant for very low bit rate coding over wireless communication channels where the low dimensionality of the images associated with these bit rates makes real-time implementation feasible. Accordingly, in this paper, the method is successfully applied to the emerging H.263 video coding standard with excellent results at rates as low as 8.0 kbit/s. Direct comparisons with the H.263 test model, TMN5, demonstrate that gains in peak signal-to-noise ratio (PSNR) are achievable over a wide range of rates.
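The per-macroblock decision rule at the heart of the Lagrangian formulation is compact enough to sketch. The mode names and (distortion, rate) numbers below are hypothetical; the point is only that a single multiplier λ sweeps the choice from high-quality, high-rate modes to cheap ones:

```python
def select_mode(candidates, lam):
    """Pick the macroblock mode minimizing the Lagrangian cost D + lam * R.
    `candidates` maps a mode name to a (distortion, rate) pair; `lam` is
    the Lagrange multiplier trading distortion against rate."""
    return min(candidates, key=lambda m: candidates[m][0] + lam * candidates[m][1])

# Hypothetical per-macroblock measurements (distortion, bits):
candidates = {'skip': (900.0, 1), 'inter': (100.0, 60), 'intra': (25.0, 200)}
```

With a small λ the costly but accurate intra mode wins; with a large λ (a tight rate budget) the near-free skip mode wins; intermediate values select inter coding. This is how one scalar controls the instantaneous rate, as the abstract describes.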
International Conference on Image Processing | 1995
Thomas Wiegand; Michael Lightstone; T.G. Campbell; Sanjit K. Mitra
A method for efficiently selecting the operating modes within a block-based multi-mode video compression system is described. For a given image region, the optimum combination of modes is selected so as to minimize the overall distortion for a given bit-rate budget. Necessary conditions for optimizing the encoder operation are derived within a rate-constrained product code framework. When rate and distortion dependencies exist between adjacent blocks, the ensuing encoder complexity is surmounted using a dynamic programming strategy based on the Viterbi algorithm so as to achieve the optimum selection of macroblock modes. Results are provided for the emerging H.263 video coding standard.
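When a block's rate depends on its neighbor's mode, per-block greedy selection is no longer optimal, and the abstract's Viterbi-based dynamic programming applies. Here is a minimal trellis sketch under an assumed cost model (a hypothetical mode-switching penalty standing in for differential motion-vector coding costs):

```python
def viterbi_modes(n_blocks, modes, cost):
    """Minimum-cost mode sequence when each block's Lagrangian cost depends
    on the previous block's mode: a shortest path through a mode trellis.
    cost(i, prev_mode, mode) returns D + lam*R for block i; prev_mode is
    None for the first block."""
    best = {m: (cost(0, None, m), [m]) for m in modes}  # per-state survivor
    for i in range(1, n_blocks):
        new = {}
        for m in modes:
            c, path = min(((best[p][0] + cost(i, p, m), best[p][1])
                           for p in modes), key=lambda t: t[0])
            new[m] = (c, path + [m])
        best = new
    return min(best.values(), key=lambda t: t[0])[1]

# Assumed example: per-block base costs plus a penalty of 10 for switching
# modes between adjacent blocks (illustrative numbers, not from the paper).
base = [{'A': 0, 'B': 5}, {'A': 5, 'B': 0}, {'A': 0, 'B': 5}]
def cost(i, prev, m):
    return base[i][m] + (10 if prev is not None and prev != m else 0)
```

Block-by-block greedy selection would pick A, B, A (base cost 0), but the switching penalties make the constant sequence A, A, A cheaper overall, which is exactly the dependency the trellis search resolves.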
Multidimensional Systems and Signal Processing | 1997
Michael Lightstone; Eric Majani; Sanjit K. Mitra
Biorthogonal and orthogonal filter pairs derived from the family of binomial product filters are considered for wavelet transform implementation with the goal of high performance lossy image compression. Using experimental rate-distortion performance as the final measure of comparison, a number of new and existing filters are presented with excellent image coding capabilities. In addition, numerous filter attributes such as orthonormality, transition band sharpness, coding gain, low-band reconstruction error, regularity, and vanishing moments are assessed to determine their importance with regards to the fidelity of the decoded images. While image data compression is specifically addressed, many of the proposed techniques are applicable to other coding applications.
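One well-known member of the binomial-product family is the LeGall 5/3 biorthogonal pair, shown here as an illustration (the paper evaluates many more candidates). The synthesis lowpass is the order-2 binomial filter, and the analysis lowpass is chosen so their product is a half-band filter, which is the biorthogonality (perfect-reconstruction) condition:

```python
import numpy as np

def binomial(n):
    """Row n of Pascal's triangle normalized to unit DC gain:
    the binomial lowpass (1 + z^-1)^n / 2^n."""
    row = [1.0]
    for _ in range(n):
        row = [a + b for a, b in zip([0.0] + row, row + [0.0])]
    return np.array(row) / 2.0 ** n

g0 = binomial(2) * 2                    # synthesis lowpass [1, 2, 1] / 2
h0 = np.array([-1, 2, 6, 2, -1]) / 8.0  # analysis lowpass (5/3 pair)

p = np.convolve(h0, g0)                 # product filter P(z)
center = len(p) // 2
```

The half-band property says that, apart from the center tap, every tap of P(z) at an even offset from the center must vanish; the test below checks exactly that. Note h0 itself contains the binomial factor (1 + z^-1)^2, which supplies the vanishing moments the abstract lists among the assessed attributes.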
Visual Communications and Image Processing | 1994
Michael Lightstone; Eric Majani
Biorthogonal and orthogonal filter pairs derived from the class of binomial product filters are considered for wavelet transform implementation with the goal of high performance lossy compression. To help narrow the potential candidate filters, a number of design objectives based on filter frequency response and orthonormality are introduced with final selection being determined by experimental rate-distortion performance. While image data compression is specifically addressed, many of the proposed techniques are applicable to other coding applications.
IEEE Transactions on Image Processing | 1997
Michael Lightstone; Sanjit K. Mitra
An adaptive vector quantization (VQ) scheme with codebook transmission is derived for the variable-rate source coding of image data using an entropy-constrained Lagrangian framework. Starting from an arbitrary initial codebook C_I available to both the encoder and decoder, the proposed algorithm iteratively generates an improved operational codebook C_O that is well adapted to the statistics of a particular image or subimage. Unlike other approaches, the rate-distortion trade-offs associated with the transmission of updated code vectors to the decoder are explicitly considered in the design. In all cases, the algorithm guarantees that the operational codebook C_O will have rate-distortion performance (including all side-information) better than or equal to that of any initial codebook C_I. When coding the Barbara image, improvement at all rates is demonstrated with observed gains of up to 3 dB in peak signal-to-noise ratio (PSNR). Whereas in general the algorithm is multipass in nature, encoding complexity can be mitigated without an exorbitant rate-distortion penalty by restricting the total number of iterations. Experiments are provided that demonstrate substantial rate-distortion improvement can be achieved with just a single pass of the algorithm.
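The key idea, accounting for the side information explicitly, can be sketched as a single adaptation pass. This is a simplified stand-in for the paper's algorithm: each centroid update is kept only if the distortion it saves exceeds its Lagrangian side-information cost, so the adapted codebook can never do worse than the initial one:

```python
import numpy as np

def adapt_codebook(data, codebook, lam, bits_per_vector):
    """One pass of rate-distortion-constrained codebook adaptation (sketch).
    Each centroid is re-estimated from its assigned vectors, but the update
    is kept (i.e., deemed worth transmitting) only when the distortion saved
    exceeds lam * bits_per_vector, the Lagrangian cost of the side info."""
    codebook = codebook.copy()
    # nearest-neighbor assignment of each data vector to a code vector
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(1)
    for k in range(len(codebook)):
        cell = data[assign == k]
        if len(cell) == 0:
            continue
        new_vec = cell.mean(0)  # centroid condition
        d_old = ((cell - codebook[k]) ** 2).sum()
        d_new = ((cell - new_vec) ** 2).sum()
        if d_old - d_new > lam * bits_per_vector:
            codebook[k] = new_vec  # distortion saved pays for the side info
    return codebook
```

With a small λ the centroids move to fit the data; with a large λ the side-information cost dominates and the initial codebook is kept unchanged, mirroring the guarantee stated in the abstract.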
Signal Processing Systems | 1997
Michael Lightstone; Sanjit K. Mitra
The optimal design of quadtree-based codecs is addressed. Until now, work in this area has focused on the optimization of the quadtree structure for a given set of leaf quantizers while neglecting the design of the quantizers themselves. In cases where the leaf quantizers have been considered, codebooks have been optimized without regard to the ultimate quadtree segmentation. However, it is not sufficient to consider each problem independently, as separate optimization leads to an overall suboptimal solution. Rather, joint design of the quadtree structure and the leaf codebooks must be considered for overall optimality. The method we suggest is a quadtree-constrained version of the entropy-constrained vector quantization design method. To this end, a centroid condition for the leaf codebooks is derived that represents a necessary optimality condition for variable-rate quadtree coding. This condition, when iterated with the optimal quadtree segmentation strategy of Sullivan and Baker, results in a monotonically descending rate-distortion cost function and, consequently, an (at least locally) optimal quadtree solution.
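The Sullivan–Baker segmentation step referenced above is a bottom-up pruning of the quadtree under a Lagrangian cost. The sketch below uses the block mean as a stand-in "leaf quantizer" and an assumed fixed leaf rate; in the paper the leaves are vector quantizers designed jointly with the tree:

```python
import numpy as np

def quadtree_cost(block, lam, leaf_rate=8, min_size=2):
    """Bottom-up quadtree pruning (simplified sketch, square 2^k blocks):
    split a node only if the four children's total Lagrangian cost, plus
    one tree-structure bit, beats coding the node as a single leaf.
    The 'quantizer' here is just the block mean (illustrative)."""
    d_leaf = ((block - block.mean()) ** 2).sum()
    leaf_cost = d_leaf + lam * leaf_rate
    n = block.shape[0]
    if n <= min_size:
        return leaf_cost
    h = n // 2
    split_cost = lam  # one bit signaling the split
    for quad in (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]):
        split_cost += quadtree_cost(quad, lam, leaf_rate, min_size)
    return min(leaf_cost, split_cost)
```

On a block made of two flat halves, a small λ favors splitting into flat (zero-distortion) leaves, while a large λ makes the extra leaf rates too expensive and the block is coded whole: the tree adapts to the rate budget.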
Visual Communications and Image Processing | 1994
Michael Lightstone; Sanjit K. Mitra
A method for optimal variable rate mean-gain-shape vector quantization (MGSVQ) is presented with application to image compression. Conditions are derived within an entropy-constrained product code framework that result in an optimal bit allocation between mean, gain, and shape vectors at all rates. An extension to MGSVQ called hierarchical mean-gain-shape vector quantization (HMGSVQ) is similarly introduced. By considering statistical dependence between adjacent means, this method is able to provide improvement in rate-distortion performance over traditional MGSVQ, especially at low bit rates. Simulation results are provided to demonstrate the rate-distortion performance of MGSVQ and HMGSVQ for image data.
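The product-code structure underlying MGSVQ is the exact decomposition of a vector into its mean, a gain, and a unit-norm shape, each of which is then quantized separately. A minimal sketch of the decomposition (the quantizers themselves are omitted):

```python
import numpy as np

def mgs_decompose(x):
    """Split a vector into (mean, gain, shape): the mean is removed first,
    the gain is the norm of the residual, and the shape is the unit-norm
    residual. This is the product-code structure used by MGSVQ."""
    m = x.mean()
    r = x - m
    g = np.linalg.norm(r)
    s = r / g if g > 0 else np.zeros_like(r)
    return m, g, s

def mgs_reconstruct(m, g, s):
    """Invert the decomposition: x = m + g * s."""
    return m + g * s
```

In HMGSVQ, as the abstract notes, the means of adjacent blocks are statistically dependent, so coding them hierarchically (e.g., predictively) cuts their rate; the decomposition itself is unchanged.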
International Symposium on Circuits and Systems | 1995
Michael Lightstone; Eduardo Abreu; Sanjit K. Mitra; Kaoru Arakawa
A new framework for removing impulse noise from images is presented in which the nature of the filtering operation is conditioned on a state variable. As part of this state-based framework, several sliding-window algorithms are examined, each of which is applicable to fixed and random-valued impulse noise models. First, a simple two-state approach is described in which the current state is computed according to the output of a simple classifier that operates on the differences between the input pixel and the remaining rank-ordered pixels in the current window. Based on the value of the state variable, the algorithm switches between the output of an identity filter and an order-statistic (OS) filter. For a small additional cost in memory, this simple strategy is easily generalized into a multi-state approach using weighted combinations of the identity and OS filters in which the weighting coefficients can be optimized using image training data. Extensive simulations indicate that these methods perform significantly better in terms of noise suppression and detail preservation than a number of existing nonlinear techniques with as much as thirty percent impulse noise. Finally, the method is shown to be extremely robust with respect to the training data and the percentage of impulse noise.
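The multi-state generalization can be sketched per pixel as follows. The thresholds and weight table below are hypothetical placeholders for the coefficients the paper optimizes on training data, and the rank-ordered mean stands in for the OS filter:

```python
import numpy as np

def multistate_filter_pixel(window, thresholds, weights):
    """Multi-state filtering of one pixel (sketch): the state counts how
    many thresholds the pixel-to-ROM difference exceeds, and the output is
    a state-dependent weighted mix of the identity and the ROM value.
    `thresholds` (ascending) and `weights` are illustrative stand-ins for
    coefficients trained on image data."""
    center = window.size // 2
    x = window.ravel()[center]
    rest = np.sort(np.delete(window.ravel(), center))
    m = len(rest) // 2
    rom = 0.5 * (rest[m - 1] + rest[m])  # rank-ordered mean, center excluded
    state = int(np.sum(np.abs(x - rom) > np.asarray(thresholds)))
    w = weights[state]  # w = 1 keeps the pixel; w = 0 outputs the ROM
    return w * x + (1 - w) * rom
```

A gross outlier lands in the highest state and is fully replaced, while a mild deviation is only partially pulled toward the ROM, which is how the weighted combination trades noise suppression against detail preservation.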
Visual Communications and Image Processing | 1993
Stefan Thurnhofer; Michael Lightstone; Sanjit K. Mitra
We propose a novel method for image interpolation which adapts to the local characteristics of the image in order to produce perfectly smooth edges. Features are classified into three categories (constant, oriented, and irregular). For each class we use a different zooming method that interpolates the feature in a visually optimized manner. Furthermore, we employ a nonlinear image enhancement which extracts perceptually important details from the original image and uses them to improve the visual impression of the zoomed images. Our results compare favorably to standard lowpass interpolation algorithms such as bilinear, diamond-filter, or B-spline interpolation. Edges and details are much sharper, and aliasing effects are eliminated. In the frequency domain we can clearly see that our adaptive algorithm not only suppresses the undesired spectral components that are folded down in the upsampling process but also replaces them with new estimates, which accounts for the increased image sharpness. One application of this interpolation method is spatial interlaced-to-progressive conversion. Here again, it yields more pleasing images than comparable algorithms.
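The constant/oriented/irregular classification that drives the method could be realized in several ways; the abstract does not spell out the criterion. One plausible sketch, offered purely as an assumption, uses gradient energy to detect constant regions and structure-tensor coherence to detect a dominant edge orientation (both thresholds are illustrative):

```python
import numpy as np

def classify_block(block, flat_thresh=1.0, orient_thresh=0.8):
    """Classify a block as 'constant', 'oriented', or 'irregular' from its
    gradient statistics (an assumed criterion; thresholds illustrative).
    Coherence of the structure tensor is 1 for a perfect straight edge
    and near 0 for gradients with no dominant direction."""
    gy, gx = np.gradient(block.astype(float))
    energy = np.mean(gx ** 2 + gy ** 2)
    if energy < flat_thresh:
        return 'constant'
    jxx, jyy, jxy = np.mean(gx * gx), np.mean(gy * gy), np.mean(gx * gy)
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy)
    return 'oriented' if coherence > orient_thresh else 'irregular'
```

A flat block is classified as constant, a straight vertical edge as oriented, and a corner (strong gradients in both directions with no single orientation) as irregular, matching the three categories the paper interpolates differently.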