
Publication


Featured research published by Rahul Swaminathan.


Computer Vision and Image Understanding | 2005

Extracting layers and analyzing their specular properties using epipolar-plane-image analysis

Antonio Criminisi; Sing Bing Kang; Rahul Swaminathan; Richard Szeliski; P. Anandan

Despite progress in stereo reconstruction and structure from motion, 3D scene reconstruction from multiple images still faces many difficulties, especially in dealing with occlusions, partial visibility, textureless regions, and specular reflections. Moreover, the problem of recovering a spatially dense 3D representation from many views has not been adequately treated. This document addresses the problems of achieving a dense reconstruction from a sequence of images and analyzing and removing specular highlights. The first part describes an approach for automatically decomposing the scene into a set of spatio-temporal layers (namely EPI-tubes) by analyzing the epipolar plane image (EPI) volume. The key to our approach is to directly exploit the high degree of regularity found in the EPI volume. In contrast to past work on EPI volumes that focused on a sparse set of feature tracks, we develop a complete and dense segmentation of the EPI volume. Two different algorithms are presented to segment the input EPI volume into its component EPI tubes. The second part describes a mathematical characterization of specular reflections within the EPI framework and proposes a novel technique for decomposing a static scene into its diffuse (Lambertian) and specular components. Furthermore, a taxonomy of specularities based on their photometric properties is presented as a guide for designing further separation techniques. The validity of our approach is demonstrated on a number of sequences of complex scenes with large amounts of occlusions and specularity. In particular, we demonstrate object removal and insertion, depth map estimation, and detection and removal of specular highlights.
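As a hedged illustration of the EPI construction described above (not the paper's segmentation algorithms), the following NumPy sketch stacks frames from a linearly translating camera into a volume and extracts one epipolar plane image. A scene point at fixed depth traces a straight line in the slice, and the line's slope encodes its disparity. The synthetic frames and disparity value are invented for the example.

```python
import numpy as np

def build_epi_volume(frames):
    """Stack T grayscale frames of shape (H, W) into an EPI volume (T, H, W)."""
    return np.stack(frames, axis=0)

def epi_slice(volume, row):
    """Epipolar plane image for one scanline: shape (T, W).

    For a camera translating parallel to the image rows, a scene point at
    constant depth traces a straight line in this slice; the slope of the
    line is proportional to the point's disparity (inverse depth)."""
    return volume[:, row, :]

# Synthetic sequence: one bright feature with disparity d pixels per frame.
T, H, W, d = 8, 16, 64, 3
frames = []
for t in range(T):
    img = np.zeros((H, W))
    img[5, 10 + d * t] = 1.0        # feature shifts d pixels each frame
    frames.append(img)

epi = epi_slice(build_epi_volume(frames), row=5)
cols = np.argmax(epi, axis=1)       # feature column in each frame
slopes = np.diff(cols)              # constant slope => straight EPI line
```

The dense segmentation in the paper amounts to grouping every pixel of such slices into the straight "EPI tubes" this toy feature traces.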


International Conference on Computer Vision | 2001

Caustics of catadioptric cameras

Rahul Swaminathan; Michael D. Grossberg; Shree K. Nayar

Conventional vision systems and algorithms assume the camera to have a single viewpoint. However, sensors need not always maintain a single viewpoint. For instance, an incorrectly aligned system could cause non-single viewpoints. Also, systems could be designed to specifically deviate from a single viewpoint to trade off image characteristics such as resolution and field of view. In these cases, the locus of viewpoints forms what is called a caustic. In this paper, we present an in-depth analysis of caustics of catadioptric cameras with conic reflectors. Properties of caustics with respect to field of view and resolution are presented. Finally, we present ways to calibrate conic catadioptric systems and estimate their caustics from known camera motion.


International Journal of Computer Vision | 2006

Non-Single Viewpoint Catadioptric Cameras: Geometry and Analysis

Rahul Swaminathan; Michael D. Grossberg; Shree K. Nayar

Conventional vision systems and algorithms assume the imaging system to have a single viewpoint. However, these imaging systems need not always maintain a single viewpoint. For instance, an incorrectly aligned catadioptric system could cause non-single viewpoints. Moreover, a lot of flexibility in imaging system design can be achieved by relaxing the need for imaging systems to have a single viewpoint. Thus, imaging systems with non-single viewpoints can be designed for specific imaging tasks, or for image characteristics such as field of view and resolution. The viewpoint locus of such imaging systems is called a caustic. In this paper, we present an in-depth analysis of caustics of catadioptric cameras with conic reflectors. We use a simple parametric model for both the reflector and the imaging system to derive an analytic solution for the caustic surface. This model completely describes the imaging system and provides a map from pixels in the image to their corresponding viewpoints and viewing directions. We use the model to analyze the imaging system's properties, such as field of view, resolution, and other geometric properties of the caustic itself. In addition, we present a simple technique to calibrate the class of conic catadioptric cameras and estimate their caustics from known camera motion. The analysis and results we present in this paper are general and can be applied to any catadioptric imaging system whose reflector has a parametric form.
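The paper derives the caustic analytically from a parametric conic model. The sketch below instead illustrates the underlying idea numerically for a simpler, assumed setup (a spherical reflector under parallel illumination, not a configuration taken from the paper): the caustic is the envelope of the reflected rays, so each caustic point can be approximated as the intersection of two neighbouring reflected rays.

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def ray_intersection(p1, d1, p2, d2):
    """Intersection of two non-parallel 2D rays p + t*d (t may be negative,
    which corresponds to a virtual caustic behind the mirror)."""
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

def spherical_mirror_caustic(radius=1.0, n_rays=400):
    """Approximate the caustic of a spherical mirror under parallel
    illumination as the envelope of reflected rays."""
    thetas = np.linspace(-1.0, 1.0, n_rays)   # angular positions on mirror
    pts, dirs = [], []
    for th in thetas:
        p = radius * np.array([np.sin(th), np.cos(th)])  # surface point
        n = -p / radius                                  # inward unit normal
        d = reflect(np.array([0.0, -1.0]), n)            # incoming ray: -y
        pts.append(p)
        dirs.append(d)
    caustic = [ray_intersection(pts[i], dirs[i], pts[i + 1], dirs[i + 1])
               for i in range(n_rays - 1)]
    return np.array(caustic)

caustic = spherical_mirror_caustic()
```

For near-axis rays the caustic points cluster at the paraxial focus (0, R/2), and they spread away from it off-axis, which is the same viewpoint-locus behaviour the paper characterizes analytically for conic reflectors.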


European Conference on Computer Vision | 2002

On the Motion and Appearance of Specularities in Image Sequences

Rahul Swaminathan; Sing Bing Kang; Richard Szeliski; Antonio Criminisi; Shree K. Nayar

Real scenes are full of specularities (highlights and reflections), and yet most vision algorithms ignore them. In order to capture the appearance of realistic scenes, we need to model specularities as separate layers. In this paper, we study the behavior of specularities in static scenes as the camera moves, and describe their dependence on varying surface geometry, orientation, and scene point and camera locations. For a rectilinear camera motion with constant velocity, we study how the specular motion deviates from a straight trajectory (disparity deviation) and how much it violates the epipolar constraint (epipolar deviation). Surprisingly, for surfaces that are convex or not highly undulating, these deviations are usually quite small. We also study the appearance of specularities, i.e., how they interact with the body reflection, and with the usual occlusion ordering constraints applicable to diffuse opaque layers. We present a taxonomy of specularities based on their photometric properties as a guide for designing separation techniques. Finally, we propose a technique to extract specularities as a separate layer, and demonstrate it using an image sequence of a complex scene.
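To make the "deviations are usually quite small" observation concrete in a limiting case (an assumption of this sketch, not a result quoted from the paper): for a planar specular surface, the highlight is the image of a virtual point light mirrored through the plane, i.e. a fixed 3D point. Under rectilinear camera motion its image trajectory is therefore exactly straight and epipolar, so both the disparity deviation and the epipolar deviation vanish; surface curvature is what introduces the small deviations studied above.

```python
import numpy as np

def reflect_point(p, n, d):
    """Mirror point p across the plane n . x = d (n is a unit normal)."""
    return p - 2.0 * (np.dot(n, p) - d) * n

def project(X, c, f=1.0):
    """Pinhole projection of world point X from a camera at c looking
    down +z (fronto-parallel translation, no rotation assumed)."""
    v = X - c
    return f * v[:2] / v[2]

# Point light and a planar mirror z = 5 (normal along +z).
light = np.array([0.5, 0.3, 1.0])
n, d = np.array([0.0, 0.0, 1.0]), 5.0
virtual = reflect_point(light, n, d)       # virtual image of the light

# Camera translating along x with constant velocity.
traj = np.array([project(virtual, np.array([t, 0.0, 0.0]))
                 for t in np.linspace(0.0, 1.0, 6)])

# The highlight's image trajectory is horizontal (constant y: it stays on
# its epipolar line) with uniform spacing (constant disparity): both the
# epipolar deviation and the disparity deviation are zero.
```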


Archive | 1999

Polycameras: Camera Clusters for Wide Angle Imaging

Rahul Swaminathan; Shree K. Nayar

We present the idea of a polycamera, which is defined as a tightly packed camera cluster. The cluster is arranged so as to minimize the overlap between adjacent views. The objective of such clusters is to be able to image a very large field of view without loss of resolution. Since these clusters do not have a single viewpoint, an analysis is provided of the effects of such non-singularities. We also present certain configurations for polycameras which cover varying fields of view. We would like to minimize the number of sensors required to capture a given field of view, and therefore recommend the use of wide-angle sensors as opposed to traditional long focal length sensors. However, such wide-angle sensors tend to have severe distortions which pull points towards the optical center. This paper also proposes a method for recovering the distortion parameters without the use of any calibration objects. Since distortions cause straight lines in the scene to appear as curves in the image, our algorithm seeks to find the distortion parameters that would map the image curves to straight lines. The user selects a small set of points along the image curves. Recovery of the distortion parameters is formulated as the minimization of an objective function which is designed to explicitly account for noise in the selected image points. Experimental results are presented for synthetic data with different noise levels as well as for real images. Once calibrated, the image stream from a wide-angle camera can be undistorted in real time using lookup tables. Finally, we apply our distortion correction technique to a polycamera made of four wide-angle cameras to create a high-resolution 360-degree panorama in real time.
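A minimal sketch of the "straighten the curves" calibration idea, assuming a one-parameter division model for radial distortion (the paper's actual distortion model and its noise-aware objective are richer): synthesize points along the distorted image of a straight line, then search for the parameter that makes the corrected points collinear, measuring collinearity by the smallest singular value of the centered point set.

```python
import numpy as np

def distort(q, k):
    """Forward division-model distortion of an undistorted point q:
    solves r_u = r_d / (1 + k r_d^2) for the distorted radius r_d."""
    ru = np.linalg.norm(q)
    rd = (1.0 - np.sqrt(1.0 - 4.0 * k * ru**2)) / (2.0 * k * ru)
    return q * (rd / ru)

def undistort(pts, k):
    """Division-model correction applied to observed (distorted) points."""
    r2 = (pts**2).sum(axis=1, keepdims=True)
    return pts / (1.0 + k * r2)

def line_residual(pts):
    """Smallest singular value of the centered point set: ~0 iff collinear."""
    c = pts - pts.mean(axis=0)
    return np.linalg.svd(c, compute_uv=False)[-1]

# Synthetic "user-clicked" points along the distorted image of a line.
k_true = 0.2
q_line = np.array([[x, 0.5] for x in np.linspace(-0.8, 0.8, 9)])
observed = np.array([distort(q, k_true) for q in q_line])

# Grid search: the right parameter is the one that straightens the curve.
ks = np.linspace(0.05, 0.40, 351)
residuals = [line_residual(undistort(observed, k)) for k in ks]
best_k = ks[int(np.argmin(residuals))]
```

In practice one would minimize a noise-robust objective over several curves with a continuous optimizer rather than a grid, but the principle is the same: distortion parameters are recovered purely from the constraint that scene lines must image to lines.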


Workshop on Applications of Computer Vision | 1998

Catadioptric video sensors

Shree K. Nayar; Joshua Gluckman; Rahul Swaminathan; Simon Lok; Terrance E. Boult

Conventional video cameras have limited fields of view which make them restrictive in a variety of applications. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. At Columbia University, we have developed a wide range of catadioptric sensors. Some of these sensors have been designed to produce unusually large fields of view. Others have been constructed for the purpose of depth computation. All our sensors perform in real time using just a PC.


Archive | 2003

A General Framework for Designing Catadioptric Imaging and Projection Systems

Rahul Swaminathan; Michael D. Grossberg; Shree K. Nayar

New vision applications have been made possible, and old ones improved, through the creation and design of novel catadioptric systems. Critical to the design of a catadioptric imaging system is determining the shape of one or more mirrors in the system. Almost all previously designed mirrors for catadioptric systems used case-specific tools and considerable effort on the part of the designer. Recently, some new general methods have been proposed to automate the design process. However, all the methods presented so far determine the mirror shape by optimizing its geometric properties, such as surface normal orientations. A more principled approach is to determine a mirror that reduces image errors. In this paper we present a method for finding mirror shapes which meet user-determined specifications while minimizing image error. We accomplish this by deriving a first-order approximation of the image error. This permits us to compute the mirror shape using a linear approach that provides good results efficiently while avoiding the numerical problems associated with non-linear optimization. Since the design of mirrors can also be applied to projection systems, we also provide a method to approximate projection errors in the scene. We demonstrate our approach on various catadioptric systems and show that it provides much more accurate imaging characteristics. In some cases we achieved reductions in image error of up to 80 percent.
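A toy version of the linearization idea, under assumptions not taken from the paper: a 1D mirror y = h(x) viewed by vertical rays reflects them at an angle of roughly 2 h'(x) when slopes are small, so prescribing the reflected-ray angles makes the design problem linear in the polynomial coefficients of h, solvable in one least-squares step instead of a nonlinear optimization.

```python
import numpy as np

# Design a 1D mirror profile h(x) = sum_{j=1..deg} c_j x^j so that vertical
# camera rays leave, after reflection, at prescribed angles theta(x).
# Small-slope approximation: reflected angle ~ 2 * h'(x), which is linear
# in the coefficients c_j.

xs = np.linspace(-1.0, 1.0, 50)
theta_target = 0.2 * xs + 0.05 * xs**2      # desired reflected-ray angles (rad)

deg = 4
# Column j of A holds 2 * d/dx(x^j) = 2 * j * x^(j-1).
A = np.column_stack([2.0 * j * xs**(j - 1) for j in range(1, deg + 1)])
coef, *_ = np.linalg.lstsq(A, theta_target, rcond=None)

fit_error = np.abs(A @ coef - theta_target).max()
```

The real method linearizes the image error of a full 3D catadioptric system rather than ray angles of a 1D profile, but the payoff is the same: a mirror shape obtained from a well-conditioned linear solve.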


Archive | 1998

Combined wide angle and narrow angle imaging system and method for surveillance and monitoring

Shree K. Nayar; Rahul Swaminathan; Joshua Gluckman


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2000

Nonmetric calibration of wide-angle lenses and polycameras

Rahul Swaminathan; Shree K. Nayar


Archive | 2004

Designing mirrors for catadioptric systems that minimize image errors

Rahul Swaminathan; Shree K. Nayar; Michael D. Grossberg

Collaboration


Dive into Rahul Swaminathan's collaborations.

Top Co-Authors


Terrance E. Boult

University of Colorado Colorado Springs
