
Publication


Featured research published by Rahul Raguram.


European Conference on Computer Vision | 2008

A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus

Rahul Raguram; Jan Michael Frahm; Marc Pollefeys

The Random Sample Consensus (RANSAC) algorithm is a popular tool for robust estimation problems in computer vision, primarily due to its ability to tolerate a tremendous fraction of outliers. There have been a number of recent efforts that aim to increase the efficiency of the standard RANSAC algorithm. Relatively fewer efforts, however, have been directed towards formulating RANSAC in a manner that is suitable for real-time implementation. The contributions of this work are two-fold: First, we provide a comparative analysis of the state-of-the-art RANSAC algorithms and categorize the various approaches. Second, we develop a powerful new framework for real-time robust estimation. The technique we develop is capable of efficiently adapting to the constraints presented by a fixed time budget, while at the same time providing accurate estimation over a wide range of inlier ratios. The method shows significant improvements in accuracy and speed over existing techniques.
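The hypothesize-and-verify loop that all of these RANSAC variants build on can be sketched in a few lines. The following toy 2D line-fitting example is purely illustrative (function name and parameters are this sketch's own, not the paper's adaptive real-time framework):

```python
import random

def ransac_line(points, n_iters=1000, threshold=1.0):
    """Minimal hypothesize-and-verify RANSAC loop for 2D line fitting.

    points: list of (x, y) tuples. Returns ((a, b, c), inliers) for the
    line ax + by + c = 0, normalized so a^2 + b^2 = 1.
    """
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        # Hypothesize: fit a line to a minimal sample of 2 points.
        (x1, y1), (x2, y2) = random.sample(points, 2)
        a, b = y2 - y1, x1 - x2
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue  # degenerate sample (coincident points)
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        # Verify: count points within the inlier threshold of the line.
        inliers = [(x, y) for (x, y) in points
                   if abs(a * x + b * y + c) <= threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b, c), inliers
    return best_model, best_inliers
```

The variants surveyed in the paper differ mainly in how the two commented steps are scheduled and terminated (e.g. under a fixed time budget), not in this basic structure.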


International Journal of Computer Vision | 2011

Modeling and Recognition of Landmark Image Collections Using Iconic Scene Graphs

Rahul Raguram; Changchang Wu; Jan Michael Frahm; Svetlana Lazebnik

This article presents an approach for modeling landmarks based on large-scale, heavily contaminated image collections gathered from the Internet. Our system efficiently combines 2D appearance and 3D geometric constraints to extract scene summaries and construct 3D models. In the first stage of processing, images are clustered based on low-dimensional global appearance descriptors, and the clusters are refined using 3D geometric constraints. Each valid cluster is represented by a single iconic view, and the geometric relationships between iconic views are captured by an iconic scene graph. Using structure from motion techniques, the system then registers the iconic images to efficiently produce 3D models of the different aspects of the landmark. To improve coverage of the scene, these 3D models are subsequently extended using additional, non-iconic views. We also demonstrate the use of iconic images for recognition and browsing. Our experimental results demonstrate the ability to process datasets containing up to 46,000 images in less than 20 hours, using a single commodity PC equipped with a graphics card. This is a significant advance towards Internet-scale operation.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

USAC: A Universal Framework for Random Sample Consensus

Rahul Raguram; Ondrej Chum; Marc Pollefeys; Jiri Matas; Jan Michael Frahm

A computational problem that arises frequently in computer vision is that of estimating the parameters of a model from data that have been contaminated by noise and outliers. More generally, any practical system that seeks to estimate quantities from noisy data measurements must have at its core some means of dealing with data contamination. The random sample consensus (RANSAC) algorithm is one of the most popular tools for robust estimation. Recent years have seen an explosion of activity in this area, leading to the development of a number of techniques that improve upon the efficiency and robustness of the basic RANSAC algorithm. In this paper, we present a comprehensive overview of recent research in RANSAC-based robust estimation by analyzing and comparing various approaches that have been explored over the years. We provide a common context for this analysis by introducing a new framework for robust estimation, which we call Universal RANSAC (USAC). USAC extends the simple hypothesize-and-verify structure of standard RANSAC to incorporate a number of important practical and computational considerations. In addition, we provide a general-purpose C++ software library that implements the USAC framework by leveraging state-of-the-art algorithms for the various modules. This implementation thus addresses many of the limitations of standard RANSAC within a single unified package. We benchmark the performance of the algorithm on a large collection of estimation problems. The implementation we provide can be used by researchers either as a stand-alone tool for robust estimation or as a benchmark for evaluating new techniques.
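The adaptive termination criterion underlying standard RANSAC (and the efficiency comparisons above) follows from a simple probability argument: to draw at least one all-inlier minimal sample of size m with confidence p, given inlier ratio w, roughly N = log(1 − p) / log(1 − w^m) samples are needed. A small sketch of this standard formula (the function name is illustrative; this is not code from the USAC library):

```python
import math

def required_iterations(inlier_ratio, sample_size, confidence=0.99):
    """Number of RANSAC iterations needed so that, with probability
    `confidence`, at least one minimal sample is outlier-free."""
    good = inlier_ratio ** sample_size  # P(a single sample is all inliers)
    if good <= 0.0:
        return float("inf")
    if good >= 1.0:
        return 1
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - good))

# e.g. fundamental-matrix estimation (7-point samples) at 50% inliers
# needs on the order of a few hundred iterations.
```

Because N grows rapidly as the inlier ratio falls or the sample size grows, the verification-stage optimizations surveyed in the paper matter most for low-inlier, large-sample problems.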


International Conference on Computer Vision | 2011

RECON: Scale-adaptive robust estimation via Residual Consensus

Rahul Raguram; Jan Michael Frahm

In this paper, we present a novel, threshold-free robust estimation framework capable of efficiently fitting models to contaminated data. While RANSAC and its many variants have emerged as popular tools for robust estimation, their performance is largely dependent on the availability of a reasonable prior estimate of the inlier threshold. In this work, we aim to remove this threshold dependency. We build on the observation that models generated from uncontaminated minimal subsets are “consistent” in terms of the behavior of their residuals, while contaminated models exhibit uncorrelated behavior. By leveraging this observation, we then develop a very simple, yet effective algorithm that does not require a priori knowledge of either the scale of the noise, or the fraction of uncontaminated points. The resulting estimator, RECON (REsidual CONsensus), is capable of elegantly adapting to the contamination level of the data, and shows excellent performance even at low inlier ratios and high noise levels. We demonstrate the efficiency of our framework on a variety of challenging estimation problems.
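The residual-consistency intuition can be made concrete with a toy helper: if two models each fit the uncontaminated data, their per-point residual vectors vary together. A minimal sketch, using Pearson correlation purely as an illustrative consistency measure (it is not RECON's actual test statistic):

```python
def residual_correlation(res_a, res_b):
    """Pearson correlation of two models' per-point residual vectors.

    Residuals of two uncontaminated models tend to be strongly
    correlated; a contaminated model's residuals are not.
    """
    n = len(res_a)
    mean_a, mean_b = sum(res_a) / n, sum(res_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(res_a, res_b))
    var_a = sum((a - mean_a) ** 2 for a in res_a)
    var_b = sum((b - mean_b) ** 2 for b in res_b)
    return cov / (var_a * var_b) ** 0.5
```

In this spirit, a pair of hypotheses whose residual vectors agree strongly is evidence that both were generated from inlier-only samples, without ever fixing an inlier threshold.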


Computer Vision and Pattern Recognition | 2008

Computing iconic summaries of general visual concepts

Rahul Raguram; Svetlana Lazebnik

This paper considers the problem of selecting iconic images to summarize general visual categories. We define iconic images as high-quality representatives of a large group of images consistent both in appearance and semantics. To find such groups, we perform joint clustering in the space of global image descriptors and latent topic vectors of tags associated with the images. To select the representative iconic images for the joint clusters, we use a quality ranking learned from a large collection of labeled images. For the purposes of visualization, iconic images are grouped by semantic “theme” and multidimensional scaling is used to compute a 2D layout that reflects the relationships between the themes. Results on four large-scale datasets demonstrate the ability of our approach to discover plausible themes and recurring visual motifs for challenging abstract concepts such as “love” and “beauty”.


International Conference on Computer Vision | 2009

Exploiting uncertainty in random sample consensus

Rahul Raguram; Jan Michael Frahm; Marc Pollefeys

In this work, we present a technique for robust estimation, which by explicitly incorporating the inherent uncertainty of the estimation procedure, results in a more efficient robust estimation algorithm. In addition, we build on recent work in randomized model verification, and use this to characterize the ‘non-randomness’ of a solution. The combination of these two strategies results in a robust estimation procedure that provides a significant speed-up over existing RANSAC techniques, while requiring no prior information to guide the sampling process. In particular, our algorithm requires, on average, 3–10 times fewer samples than standard RANSAC, which is in close agreement with theoretical predictions. The efficiency of the algorithm is demonstrated on a selection of geometric estimation problems.


British Machine Vision Conference | 2012

Improved Geometric Verification for Large Scale Landmark Image Collections

Rahul Raguram; Joseph Tighe; Jan Michael Frahm

In this work, we address the issue of geometric verification, with a focus on modeling large-scale landmark image collections gathered from the internet. In particular, we show that we can compute and learn descriptive statistics pertaining to the image collection by leveraging information that arises as a by-product of the matching and verification stages. Our approach is based on the intuition that validating numerous image pairs of the same geometric scene structures quickly reveals useful information about two aspects of the image collection: (a) the reliability of individual visual words and (b) the appearance of landmarks in the image collection. Both of these sources of information can then be used to drive any subsequent processing, thus allowing the system to bootstrap itself. While current techniques make use of dedicated training/preprocessing stages, our approach elegantly integrates into the standard geometric verification pipeline, by simply leveraging the information revealed during the verification stage. The main result of this work is that this unsupervised “learning-as-you-go” approach significantly improves performance; our experiments demonstrate significant improvements in efficiency and completeness over standard techniques.


IEEE Transactions on Image Processing | 2009

Improved Resolution Scalability for Bilevel Image Data in JPEG2000

Rahul Raguram; Michael W. Marcellin; Ali Bilgin

In this paper, we address issues concerning bilevel image compression using JPEG2000. While JPEG2000 is designed to compress both bilevel and continuous tone image data using a single unified framework, there exist significant limitations with respect to its use in the lossless compression of bilevel imagery. In particular, substantial degradation in image quality at low resolutions severely limits the resolution scalable features of the JPEG2000 code-stream. We examine these effects and present two efficient methods to improve resolution scalability for bilevel imagery in JPEG2000. By analyzing the sequence of rounding operations performed in the JPEG2000 lossless compression pathway, we introduce a simple pixel assignment scheme that improves image quality for commonly occurring types of bilevel imagery. Additionally, we develop a more general strategy based on the JPIP protocol, which enables efficient interactive access of compressed bilevel imagery. It may be noted that both proposed methods are fully compliant with Part 1 of the JPEG2000 standard.


International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2011

Efficient Generation of Multi-perspective Panoramas

Enliang Zheng; Rahul Raguram; Pierre Fite-Georgel; Jan Michael Frahm

In this paper, we present an efficient technique for generating multi-perspective panoramic images of long scenes. The input to our system is a video sequence captured by a moving camera navigating through a long scene, and our goal is to efficiently generate a panoramic summary of the scene. This problem has received considerable attention in recent years, leading to the development of a number of systems capable of generating high-quality panoramas. However, a significant limitation of current systems is their computational complexity: most current techniques employ computationally expensive algorithms (such as structure-from-motion and dense stereo), or require some degree of manual interaction. In turn, this limits the scalability of the algorithms as well as their ease of implementation. In contrast, the technique we present is simple, efficient, easy to implement, and produces results of comparable quality to state of the art techniques, while doing so at a fraction of the computational cost. Our system operates entirely in the 2D image domain, performing robust image alignment and optical flow based mosaicing, in lieu of more expensive 3D pose/structure computation. We demonstrate the effectiveness of our system on a number of challenging image sequences.


British Machine Vision Conference | 2012

Efficient and Scalable Depthmap Fusion

Enliang Zheng; Enrique Dunn; Rahul Raguram; Jan Michael Frahm

The estimation of a complete 3D model from a set of depthmaps is a data intensive task aimed at mitigating measurement noise in the input data by leveraging the inherent redundancy in overlapping multi-view observations. In this paper we propose an efficient depthmap fusion approach that reduces the memory complexity associated with volumetric scene representations. By virtue of reducing the memory footprint we are able to process an increased reconstruction volume with greater spatial resolution. Our approach also improves upon state of the art fusion techniques by approaching the problem in an incremental online setting instead of batch mode processing. In this way, we are able to handle an arbitrary number of input images at high pixel resolution and facilitate a streaming 3D processing pipeline. Experiments demonstrate the effectiveness of our proposal both for 3D modeling from internet-scale crowd-sourced data and for close-range 3D modeling from high-resolution video streams.

Collaboration


Dive into Rahul Raguram's collaborations.

Top Co-Authors

Jan Michael Frahm (University of North Carolina at Chapel Hill)
Changchang Wu (University of North Carolina at Chapel Hill)
Brian Clipp (University of North Carolina at Chapel Hill)
David Gallup (University of North Carolina at Chapel Hill)
Pierre Fite-Georgel (University of North Carolina at Chapel Hill)
Timothy A. Johnson (University of North Carolina at Chapel Hill)
Andrew M. White (University of North Carolina at Chapel Hill)
Enliang Zheng (University of North Carolina at Chapel Hill)