
Publication


Featured research published by Mareboyana Manohar.


Data Compression Conference | 1992

Progressive vector quantization of multispectral image data using a massively parallel SIMD machine

Mareboyana Manohar; James C. Tilton

Progressive transmission (PT) using vector quantization (VQ) is called progressive vector quantization (PVQ) and is used for efficient telebrowsing and dissemination of multispectral image data via computer networks. Theoretically, any compression technique can be used in PT mode. Here, VQ is selected as the baseline compression technique because VQ-encoded images can be decoded by a simple table-lookup process, so users are not burdened with the computation needed to use the compressed data. Codebook generation, the training phase, is the most critical part of VQ. Two different algorithms have been used for this purpose: the first is based on the well-known Linde-Buzo-Gray (LBG) algorithm; the other is based on self-organizing feature maps (SOFM). Since both training and encoding are computationally intensive tasks, the authors have used MasPar, a SIMD machine, for this purpose. Multispectral imagery obtained from the Advanced Very High Resolution Radiometer (AVHRR) instrument forms the testbed. The results from the two VQ techniques have been compared in terms of compression ratio for a given mean squared error (MSE). The number of bytes required to transmit the image data without loss using this progressive compression technique is usually less than the number required by the standard UNIX compress algorithm.
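The table-lookup decoding the abstract emphasizes can be sketched in a few lines; the block size, codebook contents, and index layout below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Sketch of VQ decoding as a pure table lookup, matching the abstract's point
# that decoding imposes almost no computational burden on the user.
# BLOCK, the codebook, and the index order are illustrative assumptions.

BLOCK = 2  # 2x2 spatial blocks -> 4-dimensional vectors

def vq_decode(indices, codebook, image_shape):
    """Reconstruct an image from codeword indices by table lookup only."""
    h, w = image_shape
    out = np.empty((h, w), dtype=codebook.dtype)
    k = 0
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            # Each index selects one codeword; no arithmetic beyond indexing.
            out[i:i+BLOCK, j:j+BLOCK] = codebook[indices[k]].reshape(BLOCK, BLOCK)
            k += 1
    return out

# Tiny demonstration: a 2-entry codebook and a 4x4 "image" of four indices.
codebook = np.array([[0, 0, 0, 0], [255, 255, 255, 255]], dtype=np.uint8)
indices = [0, 1, 1, 0]
img = vq_decode(indices, codebook, (4, 4))
```

The encoder does all the expensive nearest-codeword searching; the decoder, as above, only indexes into the table, which is exactly why VQ suits telebrowsing clients.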


IEEE Transactions on Image Processing | 1996

Progressive vector quantization on a massively parallel SIMD machine with application to multispectral image data

Mareboyana Manohar; James C. Tilton

This correspondence discusses a progressive vector quantization (VQ) compression approach, which decomposes image data into a number of levels using full-search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer (AVHRR) and other Earth-observation image data, and investigate the tradeoffs in selecting the number of decomposition levels and codebook training method.
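The multi-level decomposition with a losslessly kept final residual can be illustrated as follows; the random stand-in codebooks and the choice of three levels are our assumptions, not the paper's trained codebooks:

```python
import numpy as np

# Minimal sketch of the progressive idea: at each level, approximate the
# current residual with its nearest codeword and forward the remaining error
# to the next level; storing the final residual exactly makes the scheme
# lossless. Codebooks here are random stand-ins, not trained ones.

rng = np.random.default_rng(0)

def nearest(codebook, v):
    """Full-search VQ: return the codeword closest to v in squared error."""
    return codebook[np.argmin(((codebook - v) ** 2).sum(axis=1))]

vector = rng.integers(0, 256, size=4).astype(float)  # one 2x2 block as a 4-vector
residual = vector.copy()
approximations = []
for level in range(3):  # the number of levels is a tunable tradeoff
    codebook = rng.normal(scale=residual.std() + 1e-9, size=(8, 4))
    cw = nearest(codebook, residual)
    approximations.append(cw)
    residual = residual - cw  # error passed down to the next level

# Lossless reconstruction: sum of codewords plus the stored final residual.
reconstructed = sum(approximations) + residual
```

Each intermediate level on its own gives a progressively better lossy preview, which is the tradeoff against level count that the correspondence investigates.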


Data Compression Conference | 1991

Compression experiments with AVHRR data

James C. Tilton; D. Han; Mareboyana Manohar

This paper describes several data compression approaches for producing browse data from AVHRR data, and evaluates these approaches qualitatively and quantitatively. They include a hierarchical data compression scheme based on progressively finer image segmentations, and various vector quantization approaches. They are evaluated in terms of compression ratio (or data rate), computational requirements, and the image and analysis errors introduced by lossy compression. Analysis products evaluated for error include cloud coverage area and sea surface temperatures.


SPIE's 1993 International Symposium on Optics, Imaging, and Instrumentation | 1993

Centering of context-dependent components of prediction-error distributions of images

Glen G. Langdon; Mareboyana Manohar

Traditionally the distribution of the prediction error has been treated as a single-parameter Laplacian distribution, and based on this assumption one can design a set of Huffman codes selected through an estimate of the parameter. More recently, the prediction error distribution has been compared to a Gaussian distribution about mean zero when the variance is relatively high. However, when using nearly quantized prediction errors in the context model, the relatively high variance case is seen to merge the conditional distributions surrounding both positive edges and negative edges. Edge information is available from large negative or positive prediction errors in the neighboring pixel positions. In these cases, the mean of the distribution is usually not zero. By separating these two cases, making appropriate assumptions on the mean of the context-dependent error distribution, and applying other techniques, additional cost-effective compression can be achieved.
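The centering idea can be illustrated with a toy context model (our construction, not the paper's): condition on an edge-sign context, estimate each context's mean, and subtract it before coding:

```python
import numpy as np

# Illustrative sketch: classify each sample by a binary edge-sign context
# ("negative edge" vs "positive edge"), estimate a per-context mean, and
# center the prediction errors. Centered errors concentrate around zero,
# which makes subsequent entropy coding cheaper. The context definition
# and the specific means/scales are made-up values for demonstration.

rng = np.random.default_rng(1)
n = 10000
context = rng.integers(0, 2, size=n)      # 0: negative edge, 1: positive edge
true_means = np.array([-6.0, 6.0])        # nonzero conditional means, as observed
errors = rng.laplace(loc=true_means[context], scale=3.0, size=n)

# Per-context sample means act as the centering offsets.
offsets = np.array([errors[context == c].mean() for c in (0, 1)])
centered = errors - offsets[context]
```

After centering, the mean absolute error drops roughly from the edge magnitude to the Laplacian scale, which is the "additional cost-effective compression" the abstract points to.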


Remote Sensing Reviews | 1994

Earth science data compression issues and activities

James C. Tilton; Mareboyana Manohar; Jeffrey A. Newcomer

Due to the large volume of Earth science data projected to be collected from space platforms over the next several years, Earth science data information systems will face significant problems in data transmission, storage, and dissemination. Data compression is one tool that can be used to overcome these problems; however, no single data compression approach is likely to be appropriate for all aspects of this problem. The data compression approaches employed must be tailored appropriately for each particular aspect of the information system. We describe here several data compression approaches that show promise for application to one or another aspect of the information system. We then highlight selected current attempts to apply data compression techniques to particular aspects of the Earth science data information system. We conclude by stating our belief that data compression technology will not only help the Earth science community deal more effectively with their data volume problem, but …


SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing | 1994

Selecting image subbands for browsing scientific image databases

Kathleen G. Perez-Lopez; Arun K. Sood; Mareboyana Manohar

Managing massive databases of scientific images requires new techniques that address indexing visual content, providing adequate browse capabilities, and facilitating querying by image content. Subband decomposition of image data using wavelet filters is offered as an aid to solving each of these problems. It is fundamental to a visual indexing scheme that constructs a pruned tree of significant subbands as a first level of index. Significance is determined by feature vectors including Markov random field statistics, in addition to more common measures of energy and entropy. Features are retained at the nodes of the pruned subband tree as a second level of index. Query images, indexed in the same manner as database images, are compared as closely as desired to database indexes. Browse images for matching images are transmitted to the user in the form of subband coefficients, which constitute the third level of index. These coefficients, chosen for their unique significance to the indexed image, are likely to contain valuable information for the subject area specialist. This paper presents the indexing scheme in detail, and reports some preliminary results of selecting subbands for reconstruction as browse images based on their significance for indexing purposes.
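One level of subband decomposition plus energy and entropy features can be sketched as follows; the single Haar step and these particular feature definitions are stand-ins for the paper's wavelet filters and feature vectors (which also include Markov random field statistics):

```python
import numpy as np

# Sketch: one Haar wavelet decomposition step producing the four subbands
# LL, LH, HL, HH, plus the energy and entropy features used to rank
# subbands for the index. All names and thresholds here are ours.

def haar_step(img):
    """Split an image into LL, LH, HL, HH subbands (one decomposition level)."""
    a = img.astype(float)
    lo_r = (a[0::2, :] + a[1::2, :]) / 2.0   # row lowpass
    hi_r = (a[0::2, :] - a[1::2, :]) / 2.0   # row highpass
    return {
        "LL": (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0,
        "LH": (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0,
        "HL": (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0,
        "HH": (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0,
    }

def energy(band):
    return float((band ** 2).mean())

def entropy(band, bins=16):
    counts, _ = np.histogram(band, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(8, 8))
bands = haar_step(image)
# A pruned index would keep only subbands whose features exceed a threshold.
features = {k: (energy(v), entropy(v)) for k, v in bands.items()}
```

Recursing on the LL band yields the multi-level subband tree; pruning it by these feature scores gives the first level of the index described above.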


IEEE Transactions on Image Processing | 1999

Model-based vector quantization with application to remotely sensed image data

Mareboyana Manohar; James C. Tilton

Model-based vector quantization (MVQ) is introduced here as a variant of vector quantization (VQ). MVQ has the asymmetrical computational properties of conventional VQ, but does not require the use of pregenerated codebooks. This is a great advantage, since codebook generation is usually a computationally intensive process, and maintenance of codebooks for coding and decoding can pose difficulties. MVQ uses a simple mathematical model for mean-removed errors combined with a human visual system model to generate parameterized codebooks. The error model parameter, λ, is included with the compressed image as side information, from which the same codebook is regenerated for decoding. As far as the user is concerned, MVQ is a codebookless VQ variant. After a brief introduction, the problems associated with codebook generation and maintenance are discussed. We then give a description of the MVQ algorithm, followed by an evaluation of the performance of MVQ on remotely sensed image data sets from NASA sources. The results obtained with MVQ are compared with other VQ techniques and JPEG/DCT. Finally, we demonstrate the performance of MVQ as a part of a progressive compression system suitable for use in an image archival and distribution installation.


SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995

Model-based VQ for image data archival, retrieval, and distribution

Mareboyana Manohar; James C. Tilton

An ideal image compression technique for image data archival, retrieval, and distribution would be one with the asymmetrical computational requirements of vector quantization (VQ), but without the complications arising from VQ codebooks. Codebook generation and maintenance are stumbling blocks that have limited the use of VQ as a practical image compression algorithm. Model-based VQ (MVQ), a variant of VQ described here, has the computational properties of VQ but does not require explicit codebooks. The codebooks are internally generated using mean-removed error and human visual system (HVS) models. The error model assumed is the Laplacian distribution with mean λ, computed from a sample of the input image. A Laplacian distribution with mean λ is generated with a uniform random number generator. These random numbers are grouped into vectors, which are further conditioned to make them perceptually meaningful by filtering the DCT coefficients of each vector. The DCT coefficients are filtered by multiplying by a weight matrix that is found to be optimal for human perception. The inverse DCT is performed to produce the conditioned vectors for the codebook. The only image-dependent parameter used in the generation of the codebook is the mean, λ, which is included in the coded file so that the codebook generation process can be repeated for decoding.
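The regenerate-from-one-parameter workflow can be sketched as below; the vector dimension, codebook size, seed handling, and weight falloff are all our illustrative assumptions rather than the paper's HVS-optimal weights:

```python
import numpy as np

# Hedged sketch of the MVQ idea: rebuild a codebook from a single
# transmitted parameter (the Laplacian parameter lambda) plus a fixed
# seed, so encoder and decoder never exchange an explicit codebook.
# The weight vector below is a made-up smooth falloff, not the paper's
# perceptually optimal weight matrix.

DIM, SIZE, SEED = 4, 16, 1234  # illustrative vector dim, codebook size, seed

def dct_matrix(n):
    """Orthonormal DCT-II matrix; its transpose is the inverse DCT."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * j + 1) * i / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def make_codebook(lam):
    rng = np.random.default_rng(SEED)            # shared seed: same book both ends
    u = rng.uniform(-0.5, 0.5, size=(SIZE, DIM))
    # Inverse-CDF sampling turns uniform numbers into Laplacian(0, lam) samples.
    lap = -lam * np.sign(u) * np.log1p(-2 * np.abs(u))
    C = dct_matrix(DIM)
    weights = 1.0 / (1.0 + np.arange(DIM))       # assumed perceptual falloff
    coeffs = lap @ C.T                           # forward DCT of each vector
    return (coeffs * weights) @ C                # weight, then inverse DCT

# The encoder estimates lambda from the image and sends it as side
# information; the decoder rebuilds the identical codebook from that number.
codebook = make_codebook(7.5)
```

Because the entire codebook is a deterministic function of λ, only that single scalar travels with the compressed file, which is the "codebookless" property the abstract claims.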


Neural Networks for Perception: Human and Machine Perception | 1992

Compression of Remotely Sensed Images using Self Organizing Feature Maps

Mareboyana Manohar; James C. Tilton

This chapter discusses the compression of remotely sensed images using self-organizing feature maps. Data compression is one tool that can be used to help overcome data transmission bandwidth limitations. However, for experimental remote sensing data, lossless data compression is required for any data that is to be fully analyzed by the researcher utilizing the data. Nonetheless, highly lossy data compression can be used by a researcher who just needs to browse through a large number of data sets, and moderately lossy data compression can be used for the final selection of data sets to be fully analyzed. As more familiarity is gained with particular data sets, lossy data compression algorithms could be designed that give significant compression while losing only non-essential information, essentially the noise, and retaining all the scientifically significant information. One way this could be accomplished would be by designing the data compression scheme as an integral part of the information extraction process, wherein the data compression is a form of conditioning of the data for analysis. Among lossy compression techniques, there are four important classes: (1) predictive coding techniques, (2) transform techniques, (3) hybrid coding, and (4) vector quantization.


Information Processing Letters | 1986

On probability of forest of quadtrees reducing to quadtrees

P. Srinivas Kumar; Mareboyana Manohar

A forest of quadtrees (FQT) is a variant of the quadtree (QT) that represents a binary image of size 2^n × 2^n with better space efficiency than the latter. The space requirement of an FQT depends on the type of regions contained in the image; in some cases, the FQT structure is as costly as the QT structure in terms of storage. In this article we compute the probability of the FQT structure reducing to the QT structure under the assumption that each pixel in the binary image is equally likely to be black or white. It is observed that, for binary images of practical sizes, the FQT structure reduces to the QT structure in most cases.
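For concreteness, a quadtree over a random binary image can be built recursively and its leaves counted; this only sketches the setting (it does not reproduce the paper's analytical probability):

```python
import numpy as np

# Sketch of the setting: build a quadtree over a random 2^n x 2^n binary
# image with equiprobable pixels and count its leaves. The recursion and
# the image size are illustrative; the paper's probability analysis of
# FQT degenerating to QT is not reproduced here.

def quadtree_leaves(block):
    """Number of quadtree leaves needed to represent a binary block."""
    if block.min() == block.max():       # uniform block: a single leaf
        return 1
    h = block.shape[0] // 2              # otherwise recurse on the 4 quadrants
    return sum(quadtree_leaves(q) for q in
               (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]))

rng = np.random.default_rng(3)
n = 4                                    # image side 2^4 = 16
img = rng.integers(0, 2, size=(2 ** n, 2 ** n))
leaves = quadtree_leaves(img)
# A random image rarely contains large uniform blocks, so the leaf count
# stays close to the pixel count; intuitively this is why an FQT rarely
# improves on the plain QT for such images.
```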

Collaboration


Dive into Mareboyana Manohar's collaboration.

Top Co-Authors

James C. Tilton (Goddard Space Flight Center)
Arun K. Sood (George Mason University)
D. Han (Goddard Space Flight Center)
P. Srinivas Kumar (Indian Space Research Organisation)