Publication


Featured research published by Sharath R. Cholleti.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Localized Content-Based Image Retrieval

Rouhollah Rahmani; Sally A. Goldman; Hui Zhang; Sharath R. Cholleti; Jason E. Fritts

We define localized content-based image retrieval as a CBIR task where the user is only interested in a portion of the image, and the rest of the image is irrelevant. In this paper we present a localized CBIR system, Accio, that uses labeled images in conjunction with a multiple-instance learning algorithm to first identify the desired object and weight the features accordingly, and then to rank images in the database using a similarity measure that is based upon only the relevant portions of the image. A challenge for localized CBIR is how to represent the image to capture the content. We present and compare two novel image representations, which extend traditional segmentation-based and salient point-based techniques respectively, to capture content in a localized CBIR setting.
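The ranking step described above can be illustrated with a short, purely hypothetical sketch (not Accio's actual code). It assumes a multiple-instance learner such as EM-DD has already produced a target point h and per-feature weights w; each database image is then treated as a bag of region feature vectors and scored by its best-matching region only, so irrelevant portions of the image do not affect the ranking.

```python
import numpy as np

def weighted_distance(x, h, w):
    """Weighted Euclidean distance between a region feature vector x
    and the learned target point h, using per-feature weights w."""
    return np.sqrt(np.sum(w * (x - h) ** 2))

def score_image(bag, h, w):
    """Score an image (a bag of region feature vectors) by its
    best-matching region: smaller distance means higher relevance."""
    return -min(weighted_distance(x, h, w) for x in bag)

def rank_database(database, h, w):
    """Rank images so those whose most relevant region lies closest
    to the learned target concept come first."""
    return sorted(database.keys(),
                  key=lambda name: score_image(database[name], h, w),
                  reverse=True)

# Hypothetical usage: h and w would come from a multiple-instance learner
# such as EM-DD trained on the user's labeled query images.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h = np.array([0.5, 0.2, 0.8])   # assumed learned target point
    w = np.array([1.0, 0.1, 2.0])   # assumed learned feature weights
    database = {f"img{i}": rng.random((6, 3)) for i in range(5)}
    print(rank_database(database, h, w))
```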


ACM Multimedia | 2006

Local image representations using pruned salient points with applications to CBIR

Hui Zhang; Rouhollah Rahmani; Sharath R. Cholleti; Sally A. Goldman

Salient points are locations in an image where there is a significant variation with respect to a chosen image feature. Since the set of salient points in an image capture important local characteristics of that image, they can form the basis of a good image representation for content-based image retrieval (CBIR). The features for a salient point should represent the local characteristic of that point so that the similarity between features indicates the similarity between the salient points. Traditional uses of salient points for CBIR assign features to a salient point based on the image features of all pixels in a window around that point. However, since salient points are often on the boundary of objects, the features assigned to a salient point often involve pixels from different objects. In this paper, we propose a CBIR system that uses a novel salient point method that both reduces the number of salient points using a segmentation as a filter, and also improves the representation so that it is a more faithful representation of a single object (or portion of an object) that includes information about its surroundings. We also introduce an improved Expectation Maximization-Diverse Density (EM-DD) based multiple-instance learning algorithm. Experimental results show that our CBIR techniques improve retrieval performance by 5%-11% as compared with current methods.
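As a rough illustration of the "segmentation as a filter" idea, the sketch below prunes salient points whose surrounding window straddles multiple segments, so the remaining points describe a single object (or portion of one). The window size and purity threshold are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def prune_salient_points(points, seg_labels, window=5, purity=0.9):
    """Keep a salient point only if the window around it lies almost
    entirely inside one segment of a precomputed segmentation.

    points     : list of (row, col) salient-point coordinates
    seg_labels : 2D array of per-pixel segment labels
    window     : half-width of the neighborhood used for the features
    purity     : minimum fraction of window pixels in the dominant segment
    """
    h, w = seg_labels.shape
    kept = []
    for r, c in points:
        r0, r1 = max(0, r - window), min(h, r + window + 1)
        c0, c1 = max(0, c - window), min(w, c + window + 1)
        patch = seg_labels[r0:r1, c0:c1]
        _, counts = np.unique(patch, return_counts=True)
        if counts.max() / patch.size >= purity:
            kept.append((r, c))
    return kept
```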


Languages, Compilers, and Tools for Embedded Systems | 2005

Upper bound for defragmenting buddy heaps

Delvin Defoe; Sharath R. Cholleti; Ron K. Cytron

Knuth's buddy system is an attractive algorithm for managing storage allocation, and it can be made to operate in real time. At some point, storage-management systems must either over-provide storage or else confront the issue of defragmentation. Because storage conservation is important to embedded systems, we investigate the issue of defragmentation for heaps that are managed by the buddy system. In this paper, we present tight bounds for the amount of storage necessary to avoid defragmentation. These bounds turn out to be too high for embedded systems, so defragmentation becomes necessary. We then present an algorithm for defragmenting buddy heaps and present experiments from applying that algorithm to real and synthetic benchmarks. Our algorithm relocates less than twice the space relocated by an optimal algorithm to defragment the heap so as to respond to a single allocation request. Our experiments show our algorithm to be much more efficient than extant defragmentation algorithms.
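The fragmentation problem stems from the buddy relation itself: a free block of size 2^k can coalesce only with its one designated buddy, so a heap can have plenty of free space yet no contiguous block large enough for a request. The minimal sketch below shows that relation; it is an illustration of the setting, not the relocation algorithm from the paper.

```python
def buddy_of(offset, size):
    """In a binary buddy heap, a block of size 2^k at a 2^k-aligned
    offset has exactly one buddy: the block obtained by flipping bit k."""
    assert size & (size - 1) == 0 and offset % size == 0
    return offset ^ size

def can_coalesce(offset, size, free_blocks):
    """A freed block can merge with its buddy only if the buddy is also
    free; otherwise the pair stays split and may contribute to
    fragmentation until something is relocated."""
    return buddy_of(offset, size) in free_blocks.get(size, set())

# Example: a 16-byte block at offset 32 has its buddy at offset 48.
print(buddy_of(32, 16))  # 48
```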


Computer Vision and Pattern Recognition | 2006

Meta-Evaluation of Image Segmentation Using Machine Learning

Hui Zhang; Sharath R. Cholleti; Sally A. Goldman; Jason E. Fritts

Image segmentation is a fundamental step in many computer vision applications. Generally, the choice of a segmentation algorithm, or parameterization of a given algorithm, is selected at the application level and fixed for all images within that application. Our goal is to create a stand-alone method to evaluate segmentation quality. Stand-alone methods have the advantage that they do not require a manually-segmented reference image for comparison, and can therefore be used for real-time evaluation. Current stand-alone evaluation methods often work well for some types of images, but poorly for others. We propose a meta-evaluation method in which any set of base evaluation methods is combined by a machine learning algorithm that coalesces their evaluations based on a learned weighting function, which depends upon the image to be segmented. The training data used by the machine learning algorithm can be labeled by a human, based on similarity to a human-generated reference segmentation, or based upon system-level performance. Experimental results demonstrate that our method performs better than the existing stand-alone segmentation evaluation methods.
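The combination step can be sketched as follows; the linear weighting function and the image feature vector are illustrative assumptions, not the paper's actual learner.

```python
import numpy as np

def meta_evaluate(image_features, base_scores, weight_model):
    """Combine base segmentation-evaluation scores with weights that
    depend on the image being segmented.

    image_features : 1D feature vector describing the image (assumed given)
    base_scores    : scores from k stand-alone evaluation methods
    weight_model   : (W, b) of a learned linear weighting function
    """
    W, b = weight_model
    weights = np.maximum(W @ image_features + b, 0.0)  # image-dependent weights
    weights = weights / (weights.sum() + 1e-12)        # normalize to sum to 1
    return float(weights @ base_scores)
```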


International Journal on Artificial Intelligence Tools | 2009

Veritas: Combining Expert Opinions without Labeled Data

Sharath R. Cholleti; Sally A. Goldman; Avrim Blum; David G. Politte; Steven Don; Kirk E. Smith; Fred W. Prior

We consider a variation of the problem of combining expert opinions for the situation in which there is no ground truth to use for training. Even though we do not have labeled data, the goal of this work is quite different from an unsupervised learning problem in which the goal is to cluster the data. Our work is motivated by the application of segmenting a lung nodule in a computed tomography (CT) scan of the human chest. The lack of a gold standard of truth is a critical problem in medical imaging. A variety of experts, both human and computer algorithms, are available that can mark which voxels are part of a nodule. The question is how to combine these expert opinions to estimate the unknown ground truth. We present the Veritas algorithm that predicts the underlying label using the knowledge in the expert opinions even without the benefit of any labeled data for training. We evaluate Veritas using artificial data and real CT images to which synthetic nodules have been added, providing a known ground truth.
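For context, a generic EM-style estimator (in the spirit of STAPLE, not the Veritas algorithm itself) can combine binary expert opinions without labels by alternately estimating per-voxel posteriors and per-expert reliabilities. The sketch below is such a generic estimator, shown only to make the problem setting concrete.

```python
import numpy as np

def combine_experts(D, n_iter=50, prior=0.5):
    """EM-style combination of binary expert labels without ground truth.
    Generic STAPLE-like estimator for illustration; not the Veritas algorithm.

    D : (n_experts, n_voxels) array of 0/1 expert opinions
    Returns the posterior probability that each voxel is part of the nodule.
    """
    n_experts, n_voxels = D.shape
    sens = np.full(n_experts, 0.8)   # P(expert says 1 | truth is 1)
    spec = np.full(n_experts, 0.8)   # P(expert says 0 | truth is 0)
    for _ in range(n_iter):
        # E-step: posterior of truth=1 at each voxel given current reliabilities
        log_p1 = np.log(prior) + (D * np.log(sens[:, None]) +
                                  (1 - D) * np.log(1 - sens[:, None])).sum(0)
        log_p0 = np.log(1 - prior) + ((1 - D) * np.log(spec[:, None]) +
                                      D * np.log(1 - spec[:, None])).sum(0)
        post = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
        # M-step: re-estimate each expert's sensitivity and specificity
        sens = np.clip((D * post).sum(1) / (post.sum() + 1e-12), 1e-3, 1 - 1e-3)
        spec = np.clip(((1 - D) * (1 - post)).sum(1) / ((1 - post).sum() + 1e-12),
                       1e-3, 1 - 1e-3)
    return post
```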


International Conference on Tools with Artificial Intelligence | 2006

MI-Winnow: A New Multiple-Instance Learning Algorithm

Sharath R. Cholleti; Sally A. Goldman; Rouhollah Rahmani

We present MI-Winnow, a new multiple-instance learning (MIL) algorithm that provides a new technique to convert MIL data into standard supervised data. In MIL each example is a collection (or bag) of d-dimensional points, where each dimension corresponds to a feature. A label is provided for the bag, but not for the individual points within the bag. MI-Winnow differs from existing multiple-instance learning algorithms in several key ways. First, MI-Winnow allows each image to be converted into a bag in multiple ways to create training (and test) data that varies in both the number of dimensions per point and in the kind of features used. Second, instead of learning a concept defined by a single point-and-scaling hypothesis, MI-Winnow allows the underlying concept to be described by combining a set of separators learned by Winnow. For content-based image retrieval applications, such a generalized hypothesis is important since there may be different ways to recognize which images are of interest.
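As background, the base learner whose separators MI-Winnow combines is the classic Winnow multiplicative-update algorithm. The sketch below shows only that standard update on already-converted supervised data; the bag-conversion step and the combination of separators are not shown.

```python
import numpy as np

def winnow_train(X, y, alpha=2.0, threshold=None):
    """Standard Winnow: multiplicative weight updates for a linear
    threshold function over boolean features.

    X : (n_samples, n_features) array of 0/1 features
    y : array of 0/1 labels
    """
    n_features = X.shape[1]
    if threshold is None:
        threshold = n_features / 2.0
    w = np.ones(n_features)
    for x, label in zip(X, y):
        pred = 1 if w @ x >= threshold else 0
        if pred == 0 and label == 1:     # false negative: promote active features
            w[x == 1] *= alpha
        elif pred == 1 and label == 0:   # false positive: demote active features
            w[x == 1] /= alpha
    return w, threshold
```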


Archive | 2004

Heap Defragmentation in Bounded Time

Sharath R. Cholleti; Delvin Defoe; Ron K. Cytron

Knuth’s buddy system is an attractive algorithm for managing storage allocation, and it can be made to operate in real time. However, the issue of defragmentation for heaps that are managed by the buddy system has not been studied. In this paper, we present strong bounds on the amount of storage necessary to avoid defragmentation. We then present an algorithm for defragmenting buddy heaps and present experiments from applying that algorithm to real and synthetic benchmarks. Our algorithm is within a factor of two of optimal in terms of the time required to defragment the heap so as to respond to a single allocation request. Our experiments show our algorithm to be much more efficient than extant defragmentation algorithms.


Archive | 2004

Intelligent data storage and processing using FPGA devices

Roger D. Chamberlain; Mark A. Franklin; Ronald S. Indeck; Ron K. Cytron; Sharath R. Cholleti


International Conference on Tools with Artificial Intelligence | 2008

Veritas: Combining Expert Opinions without Labeled Data

Sharath R. Cholleti; Sally A. Goldman; Avrim Blum; David G. Politte; Steven Don


Archive | 2008

Learning from images by integrating different perspectives

Sally A. Goldman; Sharath R. Cholleti

Collaboration


Dive into Sharath R. Cholleti's collaborations.

Top Co-Authors

Ron K. Cytron, Washington University in St. Louis
Sally A. Goldman, Washington University in St. Louis
Mark A. Franklin, Washington University in St. Louis
Roger D. Chamberlain, Washington University in St. Louis
Ronald S. Indeck, Washington University in St. Louis
Hui Zhang, Washington University in St. Louis
Rouhollah Rahmani, Washington University in St. Louis
Avrim Blum, Carnegie Mellon University
David G. Politte, Washington University in St. Louis
Delvin Defoe, Rose-Hulman Institute of Technology