Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where B. S. Manjunath is active.

Publication


Featured research published by B. S. Manjunath.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1996

Texture features for browsing and retrieval of image data

B. S. Manjunath; Wei-Ying Ma

Image content based retrieval is emerging as an important research area with application to digital libraries and multimedia databases. The focus of this paper is on the image processing aspects and in particular using texture information for browsing and retrieval of large image data. We propose the use of Gabor wavelet features for texture analysis and provide a comprehensive experimental evaluation. Comparisons with other multiresolution texture features using the Brodatz texture database indicate that the Gabor features provide the best pattern retrieval accuracy. An application to browsing large air photos is illustrated.
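For readers who want a concrete picture, the sketch below illustrates the kind of Gabor filter-bank descriptor the paper describes: each image is summarized by the mean and standard deviation of the filter-response magnitudes over a set of scales and orientations. The filter-bank parameters here (frequencies, number of orientations) are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of Gabor filter-bank texture features (illustrative parameters).
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(image, frequencies=(0.05, 0.1, 0.2, 0.4), n_orientations=6):
    """Return a 1-D descriptor: [mean, std] of |response| per (frequency, orientation)."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)              # magnitude of the complex response
            feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)

# Retrieval then ranks database images by a (possibly weighted) distance
# between their descriptors and the query descriptor.
```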


IEEE Transactions on Circuits and Systems for Video Technology | 2001

Color and texture descriptors

B. S. Manjunath; Jens-Rainer Ohm; Vinod V. Vasudevan; Akio Yamada

This paper presents an overview of color and texture descriptors that have been approved for the Final Committee Draft of the MPEG-7 standard. The color and texture descriptors that are described in this paper have undergone extensive evaluation and development during the past two years. Evaluation criteria include effectiveness of the descriptors in similarity retrieval, as well as extraction, storage, and representation complexities. The color descriptors in the standard include a histogram descriptor that is coded using the Haar transform, a color structure histogram, a dominant color descriptor, and a color layout descriptor. The three texture descriptors include one that characterizes homogeneous texture regions and another that represents the local edge distribution. A compact descriptor that facilitates texture browsing is also defined. Each of the descriptors is explained in detail in terms of its semantics, extraction, and usage. The effectiveness is documented by experimental results.
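As an illustration of one of these descriptors, the sketch below follows the general idea behind a color layout descriptor: reduce the image to an 8x8 grid of average colors, convert to YCbCr, apply a 2-D DCT, and keep a few low-frequency coefficients per channel. The coefficient selection (no zigzag scan) and the quantization of the actual MPEG-7 specification are simplified assumptions here.

```python
# Minimal sketch of a color-layout-style descriptor (not the exact MPEG-7 spec).
import numpy as np
from scipy.fft import dctn

def color_layout_descriptor(rgb_image, n_coeffs=6):
    h, w, _ = rgb_image.shape
    grid = np.zeros((8, 8, 3))
    for i in range(8):                              # 8x8 grid of block-average colors
        for j in range(8):
            block = rgb_image[i*h//8:(i+1)*h//8, j*w//8:(j+1)*w//8]
            grid[i, j] = block.reshape(-1, 3).mean(axis=0)
    r, g, b = grid[..., 0], grid[..., 1], grid[..., 2]
    y  =  0.299*r + 0.587*g + 0.114*b               # approximate RGB -> YCbCr
    cb = -0.169*r - 0.331*g + 0.500*b + 128.0
    cr =  0.500*r - 0.419*g - 0.081*b + 128.0
    desc = []
    for chan in (y, cb, cr):
        coeffs = dctn(chan, norm="ortho")
        desc.extend(coeffs.flatten()[:n_coeffs])    # keep a few low-frequency coefficients
    return np.asarray(desc)
```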


Graphical Models and Image Processing | 1995

Multisensor image fusion using the wavelet transform

Hui Li; B. S. Manjunath; Sanjit K. Mitra

The goal of image fusion is to integrate complementary information from multisensor data such that the new images are more suitable for the purpose of human visual perception and computer-processing tasks such as segmentation, feature extraction, and object recognition. This paper presents an image fusion scheme which is based on the wavelet transform. The wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients. An area-based maximum selection rule and a consistency verification step are used for feature selection. The proposed scheme performs better than the Laplacian pyramid-based methods due to the compactness, directional selectivity, and orthogonality of the wavelet transform. A performance measure using specially generated test images is suggested and is used in the evaluation of different fusion methods, and in comparing the merits of different wavelet transform kernels. Extensive experimental results including the fusion of multifocus images, Landsat and SPOT images, Landsat and Seasat SAR images, IR and visible images, and MRI and PET images are presented in the paper.
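A minimal sketch of the wavelet-domain fusion idea, assuming the two source images are registered and the same size: the approximation bands are averaged, and each detail coefficient is taken from whichever image has the larger magnitude. The paper's area-based selection and consistency-verification steps are omitted for brevity.

```python
# Minimal sketch of wavelet-transform image fusion with a per-coefficient
# maximum-magnitude rule (simplified relative to the paper's area-based rule).
import numpy as np
import pywt

def fuse_wavelet(img_a, img_b, wavelet="db4", level=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):               # detail bands: (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```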


International Conference on Image Processing | 1997

NeTra: a toolbox for navigating large image databases

Wei-Ying Ma; B. S. Manjunath

We present here an implementation of NeTra, a prototype image retrieval system that uses color, texture, shape and spatial location information in segmented image regions to search and retrieve similar regions from the database. A distinguishing aspect of this system is its incorporation of a robust automated image segmentation algorithm that allows object- or region-based search. Image segmentation significantly improves the quality of image retrieval when images contain multiple complex objects. Images are segmented into homogeneous regions at the time of ingest into the database, and image attributes that represent each of these regions are computed. In addition to image segmentation, other important components of the system include an efficient color representation, and indexing of color, texture, and shape features for fast search and retrieval. This representation allows the user to compose interesting queries such as “retrieve all images that contain regions that have the color of object A, texture of object B, shape of object C, and lie in the upper one-third of the image”, where the individual objects could be regions belonging to different images. A Java-based web implementation of NeTra is available at http://vivaldi.ece.ucsb.edu/Netra.
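The sketch below is a hypothetical illustration of the kind of region-based query such a system supports: each database region carries precomputed color, texture, shape, and location features, and a query mixes attributes taken from different example regions. The feature encodings, distance choices, and weights are assumptions, not NeTra's actual indexing structures.

```python
# Hypothetical region-based query combining attributes from different regions.
import numpy as np

def region_distance(query, region_feats, weights):
    """query/region_feats: dicts of feature vectors keyed by attribute name."""
    d = 0.0
    for attr, w in weights.items():
        if attr in query:                            # only attributes the user constrained
            d += w * np.linalg.norm(query[attr] - region_feats[attr])
    return d

def search(query, database_regions, weights, k=10):
    """database_regions: list of dicts with a 'features' entry. Returns top-k matches."""
    ranked = sorted(database_regions,
                    key=lambda r: region_distance(query, r["features"], weights))
    return ranked[:k]

# Example query: "color of region A, texture of region B, in the upper third":
# query = {"color": region_a["features"]["color"],
#          "texture": region_b["features"]["texture"],
#          "location": np.array([0.5, 0.17])}
```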


Computer Vision and Pattern Recognition | 1992

A feature based approach to face recognition

B. S. Manjunath; Rama Chellappa; C. von der Malsburg

A feature-based approach to face recognition in which the features are derived from the intensity data without assuming any knowledge of the face structure is presented. The feature extraction model is biologically motivated, and the locations of the features often correspond to salient facial features such as the eyes, nose, etc. Topological graphs are used to represent relations between features, and a simple deterministic graph-matching scheme that exploits the basic structure is used to recognize familiar faces from a database. Each of the stages in the system can be fully implemented in parallel to achieve real-time recognition. Experimental results for a 128×128 image with very little noise are evaluated.
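A minimal sketch of the matching idea, with the Gabor-derived feature extraction abstracted away: a face is a set of feature points with local descriptors, and recognition selects the stored graph whose matched features minimize a combined descriptor-plus-geometry cost. The greedy assignment below is an illustrative stand-in for the paper's graph-matching scheme, not a reimplementation of it.

```python
# Sketch of feature-graph matching for recognition (greedy assignment as a stand-in).
import numpy as np

def match_cost(probe, model, alpha=1.0):
    """probe/model: (positions Nx2, descriptors NxD). Greedy one-to-one matching cost."""
    cost, used = 0.0, set()
    for p_xy, p_desc in zip(*probe):
        best, best_j = np.inf, None
        for j, (m_xy, m_desc) in enumerate(zip(*model)):
            if j in used:
                continue
            c = np.linalg.norm(p_desc - m_desc) + alpha * np.linalg.norm(p_xy - m_xy)
            if c < best:
                best, best_j = c, j
        if best_j is not None:
            used.add(best_j)
            cost += best
    return cost

def recognize(probe, gallery):
    """gallery: dict name -> (positions, descriptors). Returns the best-matching identity."""
    return min(gallery, key=lambda name: match_cost(probe, gallery[name]))
```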


Nature Methods | 2012

Biological imaging software tools

Kevin W. Eliceiri; Michael R Berthold; Ilya G. Goldberg; Luis Ibáñez; B. S. Manjunath; Maryann E. Martone; Robert F. Murphy; Hanchuan Peng; Anne L. Plant; Badrinath Roysam; Nico Stuurman; Jason R. Swedlow; Pavel Tomancak; Anne E. Carpenter

Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the inherent challenges and the overall status of available software for bioimage informatics, focusing on open-source options.


IEEE Transactions on Image Processing | 2001

An efficient color representation for image retrieval

Yining Deng; B. S. Manjunath; Charles S. Kenney; Michael S. Moore; Hyundoo Shin

A compact color descriptor and an efficient indexing method for this descriptor are presented. The target application is similarity retrieval in large image databases using color. Colors in a given region are clustered into a small number of representative colors. The feature descriptor consists of the representative colors and their percentages in the region. A similarity measure similar to the quadratic color histogram distance measure is defined for this descriptor. The representative colors can be indexed in the three-dimensional (3-D) color space thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The matches from all of the query colors are then combined to obtain the final retrievals. An efficient indexing scheme for fast retrieval is presented. Experimental results show that this compact descriptor is effective and compares favorably with the traditional color histogram in terms of overall computational complexity.
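A minimal sketch of the descriptor and distance described above: pixel colors are clustered into a few representative colors, and the descriptor stores each color with its percentage; the distance includes cross terms weighted by how similar two representative colors are, in the spirit of the quadratic histogram distance. The clustering routine and the similarity threshold value are illustrative assumptions.

```python
# Minimal sketch of a dominant-color descriptor and a quadratic-style distance.
import numpy as np
from scipy.cluster.vq import kmeans2

def dominant_colors(pixels, k=4):
    """pixels: Nx3 float array. Returns (k x 3 representative colors, k percentages)."""
    centers, labels = kmeans2(pixels.astype(float), k, minit="points")
    perc = np.bincount(labels, minlength=k) / len(labels)
    return centers, perc

def descriptor_distance(c1, p1, c2, p2, td=60.0):
    """Quadratic-style distance between two (colors, percentages) descriptors."""
    d = np.sum(p1**2) + np.sum(p2**2)
    for i in range(len(c1)):
        for j in range(len(c2)):
            dist = np.linalg.norm(c1[i] - c2[j])
            a = 1.0 - dist / td if dist <= td else 0.0   # similarity of the two colors
            d -= 2.0 * a * p1[i] * p2[j]
    return d
```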


Frontiers in Plant Science | 2011

The iPlant Collaborative: Cyberinfrastructure for Plant Biology

Stephen A. Goff; Matthew W. Vaughn; Sheldon J. McKay; Eric Lyons; Ann E. Stapleton; Damian Gessler; Naim Matasci; Liya Wang; Matthew R. Hanlon; Andrew Lenards; Andy Muir; Nirav Merchant; Sonya Lowry; Stephen A. Mock; Matthew Helmke; Adam Kubach; Martha L. Narro; Nicole Hopkins; David Micklos; Uwe Hilgert; Michael Gonzales; Chris Jordan; Edwin Skidmore; Rion Dooley; John Cazes; Robert T. McLay; Zhenyuan Lu; Shiran Pasternak; Lars Koesterke; William H. Piel

The iPlant Collaborative (iPlant) is a United States National Science Foundation (NSF) funded project that aims to create an innovative, comprehensive, and foundational cyberinfrastructure in support of plant biology research (PSCIC, 2006). iPlant is developing cyberinfrastructure that uniquely enables scientists throughout the diverse fields that comprise plant biology to address Grand Challenges in new ways, to stimulate and facilitate cross-disciplinary research, to promote biology and computer science research interactions, and to train the next generation of scientists on the use of cyberinfrastructure in research and education. Meeting humanity's projected demands for agricultural and forest products and the expectation that natural ecosystems be managed sustainably will require synergies from the application of information technologies. The iPlant cyberinfrastructure design is based on an unprecedented period of research community input, and leverages developments in high-performance computing, data storage, and cyberinfrastructure for the physical sciences. iPlant is an open-source project with application programming interfaces that allow the community to extend the infrastructure to meet its needs. iPlant is sponsoring community-driven workshops addressing specific scientific questions via analysis tool integration and hypothesis testing. These workshops teach researchers how to add bioinformatics tools and/or datasets into the iPlant cyberinfrastructure, enabling plant scientists to perform complex analyses on large datasets without the need to master the command line or high-performance computational services.


Computer Vision and Pattern Recognition | 1996

Texture features and learning similarity

Wei-Ying Ma; B. S. Manjunath

This paper addresses two important issues related to texture pattern retrieval: feature extraction and similarity search. A Gabor feature representation for textured images is proposed, and its performance in pattern retrieval is evaluated on a large texture image database. These features compare favorably with other existing texture representations. A hybrid neural network algorithm is used to learn similarity by clustering in the texture feature space. With similarity learning, the performance of similar-pattern retrieval improves significantly. An important aspect of this work is its application to real image data: texture feature extraction with similarity learning is used to search through large aerial photographs. Feature clustering enables efficient search of the database, as our experimental results indicate.
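A minimal sketch of the clustering-for-retrieval idea: texture features (for example, the Gabor descriptors of the 1996 PAMI paper above) are clustered offline, and a query is compared only against members of its nearest clusters, which prunes the search. Plain k-means stands in here for the paper's hybrid neural network learning stage.

```python
# Sketch of cluster-pruned texture retrieval (k-means as a stand-in for the
# paper's hybrid neural network similarity learning).
import numpy as np
from scipy.cluster.vq import kmeans2

def build_index(features, n_clusters=32):
    """features: (n_images, d) array of texture descriptors."""
    centroids, labels = kmeans2(features, n_clusters, minit="points")
    return centroids, labels

def query(q, features, centroids, labels, n_probe=2, k=10):
    # search only the n_probe clusters whose centroids are closest to the query
    nearest = np.argsort(np.linalg.norm(centroids - q, axis=1))[:n_probe]
    candidates = np.where(np.isin(labels, nearest))[0]
    order = candidates[np.argsort(np.linalg.norm(features[candidates] - q, axis=1))]
    return order[:k]
```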


International Symposium on Computer Vision | 1995

An eigenspace update algorithm for image analysis

B. S. Manjunath; Shiv Chandrasekaran; Yuan-Fang Wang

During the past few years several interesting applications of eigenspace representation of images have been proposed. These include face recognition, video coding, pose estimation, etc. However, the vision research community has largely overlooked parallel developments in signal processing and numerical linear algebra concerning efficient eigenspace updating algorithms. These new developments are significant for two reasons: adopting them makes some of the current vision algorithms more robust and efficient. More important is the fact that incremental updating of eigenspace representations opens up new and interesting research applications in vision such as active recognition and learning. The main objective of the paper is to put these in perspective and discuss a recently introduced updating scheme that has been shown to be numerically stable and optimal. We provide an example of one particular application to 3D object representation projections and give an error analysis of the algorithm. Preliminary experimental results are shown.
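A minimal sketch of incremental eigenspace maintenance: rather than recomputing the eigen-decomposition from scratch when new images arrive, the basis is updated batch by batch. scikit-learn's IncrementalPCA is used here as a stand-in; it is not the specific numerically stable updating scheme the paper discusses.

```python
# Sketch of incremental eigenspace updating using IncrementalPCA as a stand-in.
import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=16)

def add_images(batch):
    """batch: (n_images, n_pixels) array of vectorized images.
    Note: each batch must contain at least n_components images."""
    ipca.partial_fit(batch)                 # update the eigenspace with the new images

def project(image_vec):
    """Project a single vectorized image onto the current eigenspace."""
    return ipca.transform(image_vec.reshape(1, -1))[0]
```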

Collaboration


Dive into B. S. Manjunath's collaborations.

Top Co-Authors

Baris Sumengen, University of California
Kenneth Rose, University of California
Anindya Sarkar, University of California
S. Karthikeyan, University of California
Pratim Ghosh, University of California