Publications


Featured research published by Radhakrishna Achanta.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

SLIC Superpixels Compared to State-of-the-Art Superpixel Methods

Radhakrishna Achanta; A. Shaji; Kevin Smith; Aurelien Lucchi; Pascal Fua; Sabine Süsstrunk

Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
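The adapted k-means step described above can be sketched in a few lines. This is a hypothetical, simplified single-channel version for illustration (SLIC proper clusters in a five-dimensional CIELAB-plus-position space and adds a connectivity-enforcement pass); the function name and defaults are assumptions, not the published implementation:

```python
import numpy as np

def slic(image, k=4, m=10.0, iters=5):
    """Simplified SLIC: k-means in (intensity, y, x) space, with each
    cluster center only searching a limited spatial window."""
    h, w = image.shape
    S = int(np.sqrt(h * w / k))  # grid interval between initial centers
    ys = np.arange(S // 2, h, S)
    xs = np.arange(S // 2, w, S)
    # Centers hold (intensity, y, x)
    centers = np.array([[image[y, x], y, x] for y in ys for x in xs], float)
    labels = np.full((h, w), -1)
    dists = np.full((h, w), np.inf)
    for _ in range(iters):
        for ci, (c_val, cy, cx) in enumerate(centers):
            # Restrict the search to a 2S x 2S window (SLIC's key speedup)
            y0, y1 = max(0, int(cy) - S), min(h, int(cy) + S + 1)
            x0, x1 = max(0, int(cx) - S), min(w, int(cx) + S + 1)
            patch = image[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            dc = (patch - c_val) ** 2                # color distance (squared)
            ds = (yy - cy) ** 2 + (xx - cx) ** 2     # spatial distance (squared)
            d = np.sqrt(dc + (m / S) ** 2 * ds)      # combined distance
            better = d < dists[y0:y1, x0:x1]
            dists[y0:y1, x0:x1][better] = d[better]
            labels[y0:y1, x0:x1][better] = ci
        # Move each center to the mean of its assigned pixels
        for ci in range(len(centers)):
            mask = labels == ci
            if mask.any():
                yy, xx = np.nonzero(mask)
                centers[ci] = [image[mask].mean(), yy.mean(), xx.mean()]
        dists.fill(np.inf)  # reassign from scratch next iteration
    return labels
```

The key departure from plain k-means is the 2S x 2S search window around each center, which makes the cost linear in the number of pixels and essentially independent of k.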


Computer Vision and Pattern Recognition | 2009

Frequency-tuned salient region detection

Radhakrishna Achanta; Sheila S. Hemami; Francisco J. Estrada; Sabine Süsstrunk

Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. In this paper, we introduce a method for salient region detection that outputs full-resolution saliency maps with well-defined boundaries of salient objects. These boundaries are preserved by retaining substantially more frequency content from the original image than existing techniques do. Our method exploits features of color and luminance, is simple to implement, and is computationally efficient. We compare our algorithm to five state-of-the-art salient region detection methods using a frequency-domain analysis, ground truth, and a salient object segmentation application. Our method outperforms the five algorithms on both the ground-truth evaluation and the segmentation task, achieving higher precision and better recall.
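The idea of retaining more frequency content can be illustrated with a minimal single-channel sketch (the published method works on the CIELAB channels of a color image; the helper names here are assumptions): each pixel's saliency is the distance between the global mean image value and its slightly Gaussian-blurred value.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur using simple 1-D convolutions
    with edge-replicated padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blur_1d = lambda v: np.convolve(np.pad(v, radius, mode='edge'), k, 'valid')
    out = np.apply_along_axis(blur_1d, 1, img.astype(float))  # rows
    out = np.apply_along_axis(blur_1d, 0, out)                # columns
    return out

def ft_saliency(img):
    """Frequency-tuned saliency on a single channel: distance between
    the image's mean value and each pixel's slightly blurred value."""
    blurred = gaussian_blur(img, sigma=1.0)
    return np.abs(img.astype(float).mean() - blurred)
```

Because only a small Gaussian blur is applied before taking the difference, most of the image's frequency content survives, which is what keeps object boundaries sharp in the resulting map.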


International Conference on Computer Vision Systems | 2008

Salient region detection and segmentation

Radhakrishna Achanta; Francisco J. Estrada; Patricia Wils; Sabine Süsstrunk

Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.


International Conference on Image Processing | 2010

Saliency detection using maximum symmetric surround

Radhakrishna Achanta; Sabine Süsstrunk

Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. Recently, full-resolution saliency maps that retain well-defined boundaries have attracted attention. In these maps, boundaries are preserved by retaining substantially more frequency content from the original image than older techniques. However, if the salient regions comprise more than half the pixels of the image, or if the background is complex, the background gets highlighted instead of the salient object. In this paper, we introduce a method for salient region detection that retains the advantages of such saliency maps while overcoming their shortcomings. Our method exploits features of color and luminance, is simple to implement, and is computationally efficient. We compare our algorithm to six state-of-the-art salient region detection methods using publicly available ground truth. Our method outperforms the six algorithms by achieving both higher precision and better recall. We also show an application of our saliency maps in an automatic salient object segmentation scheme using graph cuts.
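A single-channel sketch of the symmetric-surround idea, using an integral image for constant-time window means (names and details are illustrative assumptions, not the published code): near the image borders the surround shrinks so that it stays symmetric about the pixel, which keeps large salient regions and complex backgrounds from dominating the map.

```python
import numpy as np

def mss_saliency(img):
    """Maximum-symmetric-surround saliency on one channel: each pixel
    is compared to the mean of the largest window that is symmetric
    about it, instead of the global image mean."""
    img = img.astype(float)
    h, w = img.shape
    # Integral image with a zero row/column prepended, for O(1) box sums
    ii = np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    sal = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            # Largest offsets keeping the window symmetric about (y, x)
            dy, dx = min(y, h - 1 - y), min(x, w - 1 - x)
            y0, y1, x0, x1 = y - dy, y + dy + 1, x - dx, x + dx + 1
            area = (y1 - y0) * (x1 - x0)
            mean = (ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]) / area
            sal[y, x] = abs(img[y, x] - mean)
    return sal
```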


International Conference on Image Processing | 2009

Saliency detection for content-aware image resizing

Radhakrishna Achanta; Sabine Süsstrunk

Content-aware image re-targeting methods aim to arbitrarily change image aspect ratios while preserving visually prominent features. To determine the visual importance of pixels, existing re-targeting schemes mostly rely on grayscale intensity gradient maps. These maps show high energy only at object edges, are sensitive to noise, and may result in deforming salient objects. In this paper, we present a computationally efficient, noise-robust re-targeting scheme based on seam carving that uses saliency maps assigning higher importance to visually prominent whole regions (and not just edges). This is achieved by computing the global saliency of pixels using intensity as well as color features. Our saliency maps avoid the artifacts that conventional seam carving generates and are more robust in the presence of noise. Also, unlike gradient maps, which may have to be recomputed several times during a seam-carving-based re-targeting operation, our saliency maps are computed only once, independent of the number of seams added or removed.
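The saliency map plugs into the standard seam-carving dynamic program as its energy function. A minimal sketch of that dynamic program (a generic seam-carving step, not the authors' code):

```python
import numpy as np

def find_vertical_seam(energy):
    """Return the minimum-energy vertical seam (one column index per row)
    via the standard seam-carving dynamic program."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        # Cheapest predecessor among upper-left, upper, upper-right
        left = np.pad(cost[y - 1, :-1], (1, 0), constant_values=np.inf)
        right = np.pad(cost[y - 1, 1:], (0, 1), constant_values=np.inf)
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Backtrack from the cheapest bottom-row cell
    seam = np.empty(h, int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

With a saliency-based energy, the seam threads through low-saliency regions, so visually prominent whole objects are left untouched; the energy map also needs no recomputation between seams.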


IEEE Transactions on Medical Imaging | 2012

Supervoxel-Based Segmentation of Mitochondria in EM Image Stacks With Learned Shape Features

Aurelien Lucchi; Kevin Smith; Radhakrishna Achanta; Graham Knott; Pascal Fua

It is becoming increasingly clear that mitochondria play an important role in neural function. Recent studies show mitochondrial morphology to be crucial to cellular physiology and synaptic function, and a link between mitochondrial defects and neurodegenerative diseases is strongly suspected. Electron microscopy (EM), with its very high resolution in all three directions, is one of the key tools for looking more closely into these issues, but the huge amounts of data it produces make automated analysis necessary. State-of-the-art computer vision algorithms designed to operate on natural 2-D images tend to perform poorly when applied to EM data for a number of reasons. First, the sheer size of a typical EM volume renders most modern segmentation schemes intractable. Furthermore, most approaches ignore important shape cues, relying only on local statistics that easily become confused when confronted with the noise and textures inherent in the data. Finally, the conventional assumption that strong image gradients always correspond to object boundaries is violated by the clutter of distracting membranes. In this work, we propose an automated graph partitioning scheme that addresses these issues. It reduces the computational complexity by operating on supervoxels instead of voxels, incorporates features capable of describing the 3-D shape of the target objects, and learns to recognize the distinctive appearance of true boundaries. Our experiments demonstrate that our approach segments mitochondria at a performance level close to that of a human annotator and outperforms a state-of-the-art 3-D segmentation technique.


Medical Image Computing and Computer-Assisted Intervention | 2010

A fully automated approach to segmentation of irregularly shaped cellular structures in EM images

Aurelien Lucchi; Kevin Smith; Radhakrishna Achanta; Vincent Lepetit; Pascal Fua

While there has been substantial progress in segmenting natural images, state-of-the-art methods that perform well in such tasks unfortunately tend to underperform when confronted with the different challenges posed by electron microscope (EM) data. For example, in EM imagery of neural tissue, numerous cells and subcellular structures appear within a single image, they exhibit irregular shapes that cannot be easily modeled by standard techniques, and confusing textures clutter the background. We propose a fully automated approach that handles these challenges by using sophisticated cues that capture global shape and texture information, and by learning the specific appearance of object boundaries. We demonstrate that our approach significantly outperforms state-of-the-art techniques and closely matches the performance of human annotators.


International Conference on Multimedia and Expo | 2002

Compressed domain object tracking for automatic indexing of objects in MPEG home video

Radhakrishna Achanta; Mohan S. Kankanhalli; Philippe Mulhem

Object tracking is of utmost importance for the automatic indexing of video content. This work presents an object tracker that operates directly on MPEG compressed data. Motion vectors and discrete cosine transform (DCT) coefficients directly available from the compressed video stream are exploited for the purpose of tracking. Tracking proceeds in two steps: motion-vector-based tracking in P and B frames within the groups of pictures (GOPs), and object identification in I frames. Colour, which is one of the strongest cues for tracking, is used for the identification step. Such a system offers speed, simplicity, and robustness against occlusion and camera motion, with good intra-shot tracking for shots in excess of 500 frames, as shown in the experimental results.


IEEE MultiMedia | 2006

Modeling intent for home video repurposing

Radhakrishna Achanta; Wei-Qi Yan; Mohan S. Kankanhalli

Amateur home videos rarely convey intent effectively, primarily because of the limitations of conventional consumer-quality video cameras and the difficulties of video postprocessing. The authors describe a general approach for video-intent delivery based on offline cinematography and automated continuity editing concepts and demonstrate its use with four basic emotions: cheer, serenity, gloom, and excitement.


International Conference on Acoustics, Speech, and Signal Processing | 2003

A hierarchical framework for face tracking using state vector fusion for compressed video

Jun Wang; Radhakrishna Achanta; Mohan S. Kankanhalli; Philippe Mulhem

Faces are usually the most interesting objects in certain categories of video, like home videos and news clips. A novel sensor-fusion-based face tracking system is presented that tracks faces in compressed video and aids automatic video indexing. Tracking is done by fusing the measurements from three independent sensors: motion and colour based trackers (Achanta, R. et al., IEEE Int. Conf. on Multimedia and Expo, 2002) and a face detector (Wang, J. et al., Proc. Int. Workshop on Advanced Image Technology, 2002), using a novel hierarchical framework based on Kalman filter state vector fusion. The tracking results show that the fused results are better than those of any individual sensor or their mean.
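The state-vector fusion step can be illustrated with the standard covariance-weighted combination of two independent estimates (a generic sketch; the paper's hierarchical scheme and sensor models are more elaborate, and the function name is an assumption):

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Covariance-weighted fusion of two independent state estimates:
    the fused covariance is the 'parallel combination' of the inputs,
    and each estimate is weighted by the inverse of its uncertainty."""
    P1, P2 = np.atleast_2d(P1), np.atleast_2d(P2)
    P = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
    x = P @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
    return x, P
```

When the two sensors are equally uncertain this reduces to their mean, but whenever one sensor is more confident the fused estimate leans toward it, which is why fusion can beat both the individual sensors and their plain average.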

Collaboration


Dive into Radhakrishna Achanta's collaborations.

Top Co-Authors

Sabine Süsstrunk, École Polytechnique Fédérale de Lausanne
Pascal Fua, École Polytechnique Fédérale de Lausanne
Kevin Smith, École Polytechnique Fédérale de Lausanne
Mohan S. Kankanhalli, National University of Singapore
Jun Wang, University College London
Graham Knott, École Polytechnique Fédérale de Lausanne
Nikolaos Arvanitopoulos, Aristotle University of Thessaloniki
Philippe Mulhem, National University of Singapore