Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chandramouli Paramanand is active.

Publication


Featured research published by Chandramouli Paramanand.


Computer Vision and Pattern Recognition | 2013

Non-uniform Motion Deblurring for Bilayer Scenes

Chandramouli Paramanand; A. N. Rajagopalan

We address the problem of estimating the latent image of a static bilayer scene (consisting of a foreground and a background at different depths) from motion blurred observations captured with a handheld camera. The camera motion is considered to be composed of in-plane rotations and translations. Since the blur at an image location depends both on camera motion and depth, deblurring becomes a difficult task. We initially propose a method to estimate the transformation spread function (TSF) corresponding to one of the depth layers. The estimated TSF (which reveals the camera motion during exposure) is used to segment the scene into the foreground and background layers and determine the relative depth value. The deblurred image of the scene is finally estimated within a regularization framework by accounting for blur variations due to camera motion as well as depth.
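The transformation spread function (TSF) used here models motion blur as a weighted average of the latent image warped under each camera pose visited during exposure, with weights proportional to the time spent at each pose. A minimal NumPy sketch of this forward model, simplified to integer in-plane translations (the paper's TSF also covers in-plane rotations; all names below are illustrative):

```python
import numpy as np

def apply_tsf(image, transforms, weights):
    """Synthesize a motion-blurred image as a TSF-weighted sum of
    transformed (here: integer-translated) copies of the latent image."""
    blurred = np.zeros_like(image, dtype=float)
    for (dy, dx), w in zip(transforms, weights):
        blurred += w * np.roll(image, shift=(dy, dx), axis=(0, 1))
    return blurred

# Camera path sampled at a few poses; weights sum to 1 (fractions of exposure).
img = np.zeros((8, 8))
img[4, 4] = 1.0                       # single bright scene point
tsf = [(0, 0), (0, 1), (0, 2)]        # horizontal camera shake
w = [0.5, 0.3, 0.2]
b = apply_tsf(img, tsf, w)
```

Applied to a single bright point, the result reproduces the local point spread function, which is why blur kernels estimated at image points reveal the TSF weights.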


IEEE Transactions on Image Processing | 2012

Depth From Motion and Optical Blur With an Unscented Kalman Filter

Chandramouli Paramanand; A. N. Rajagopalan

Space-variantly blurred images of a scene contain valuable depth information. In this paper, our objective is to recover the 3-D structure of a scene from motion blur/optical defocus. In the proposed approach, the difference of blur between two observations is used as a cue for recovering depth within a recursive state estimation framework. For motion blur, we use an unblurred-blurred image pair. Since the relationship between the observation and the scale factor of the point spread function associated with the depth at a point is nonlinear, we develop a formulation of the unscented Kalman filter for depth estimation. There are no restrictions on the shape of the blur kernel. Furthermore, within the same formulation, we address a special and challenging scenario of depth from defocus with translational jitter. The effectiveness of our approach is evaluated on synthetic as well as real data, and its performance is also compared with contemporary techniques.
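The step that makes the unscented Kalman filter suitable here is the unscented transform: rather than linearizing the nonlinear observation model, a small set of sigma points is propagated through it and the transformed mean and variance are recombined. A minimal sketch for a scalar state; the observation function mapping depth to a blur-kernel scale is purely illustrative, not the paper's model:

```python
import numpy as np

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinear
    function f using sigma points (the core step of the UKF)."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigma_pts = np.array([mean, mean + spread, mean - spread])
    weights = np.array([kappa, 0.5, 0.5]) / (n + kappa)
    y = f(sigma_pts)                      # propagate through nonlinearity
    y_mean = weights @ y
    y_var = weights @ (y - y_mean) ** 2
    return y_mean, y_var

# Illustrative nonlinear observation: blur scale inversely related to depth.
m, v = unscented_transform(2.0, 0.1, lambda d: 1.0 / d)
```

For a linear function the transform is exact; for nonlinear ones it captures the mean and variance to higher accuracy than a first-order linearization, without computing Jacobians.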


British Machine Vision Conference | 2010

Inferring Image Transformation and Structure from Motion-Blurred Images

Chandramouli Paramanand; A. N. Rajagopalan

This paper deals with the problem of estimating the structure of 3D scenes and image transformations from observations that are blurred due to unconstrained camera motion. Initially, we consider a fronto-parallel planar scene and relate the reference image of the scene to its motion-blurred observation by finding the reference image transformations. The blur kernel at every image point can be determined from these transformations. For 3D scenes, the extent of blurring in the image is related to both the camera motion and the scene structure. We propose a technique to estimate the scene depth given the estimated image transformations. The proposed method is validated through experiments on both synthetic and real data.


International Journal of Computer Vision | 2014

Shape from Sharp and Motion-Blurred Image Pair

Chandramouli Paramanand; A. N. Rajagopalan

Motion blur due to camera shake is a common occurrence. During image capture, the apparent motion of a scene point in the image plane varies according to both camera motion and scene structure. Our objective is to infer the camera motion and the depth map of static scenes using motion blur as a cue. To this end, we use an unblurred–blurred image pair. Initially, we develop a technique to estimate the transformation spread function (TSF) which symbolizes the camera shake. This technique uses blur kernels estimated at different points across the image. Based on the estimated TSF, we recover the complete depth map of the scene within a regularization framework.


IEEE Transactions on Image Processing | 2013

Non-Uniform Deblurring in HDR Image Reconstruction

Channarayapatna Shivaram Vijay; Chandramouli Paramanand; A. N. Rajagopalan; Rama Chellappa

Hand-held cameras inevitably result in blurred images caused by camera shake, and even more so in high dynamic range imaging applications where multiple images are captured over a wide range of exposure settings. The degree of blurring depends on many factors such as exposure time, stability of the platform, and user experience. Camera shake involves not only translations but also rotations, resulting in non-uniform blurring. In this paper, we develop a method that takes as input non-uniformly blurred and differently exposed images to extract the deblurred, latent irradiance image. We use the transformation spread function (TSF) to effectively model the blur caused by camera motion. We first estimate the TSFs of the blurred images from locally derived point spread functions by exploiting their linear relationship. The scene irradiance is then estimated by minimizing a suitably derived cost functional. Two important cases are investigated wherein 1) only the higher exposures are blurred and 2) all the captured frames are blurred.


Computer Vision and Pattern Recognition | 2010

Unscented transformation for depth from motion-blur in videos

Chandramouli Paramanand; A. N. Rajagopalan

In images and videos of a 3D scene, blur due to camera shake can be a source of depth information. Our objective is to find the shape of the scene from its motion-blurred observations without having to restore the original image. In this paper, we pose depth recovery as a recursive state estimation problem. We show that the relationship between the observation and the scale factor of the motion-blur kernel associated with the depth at a point is nonlinear and propose the use of the unscented Kalman filter for state estimation. The performance of the proposed method is evaluated on many examples.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2010

Image matching with higher-order geometric features

Chandramouli Paramanand; A. N. Rajagopalan

We propose a geometric matching technique in which line segments and elliptical arcs are used as edge features. The use of these higher-order features renders feature representation efficient. We derive distance measures to evaluate the similarity between the features of the model and those of the image. The model transformation parameters are found by searching a 3-D transformation space using cell decomposition. The proposed method performs well when tested on a variety of images.
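As an illustration of a segment-level distance measure, the sketch below combines only orientation and midpoint differences between two line segments; the paper derives its own measures for both line segments and elliptical arcs, and this simplified version is an assumption, not the published formulation:

```python
import math

def segment_distance(s1, s2, w_angle=1.0, w_pos=1.0):
    """Illustrative dissimilarity between two line segments, each given
    as ((x1, y1), (x2, y2)): a weighted sum of undirected orientation
    difference and midpoint distance."""
    def angle(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1) % math.pi   # undirected line angle
    def midpoint(s):
        (x1, y1), (x2, y2) = s
        return ((x1 + x2) / 2, (y1 + y2) / 2)
    da = abs(angle(s1) - angle(s2))
    da = min(da, math.pi - da)                          # wrap around pi
    m1, m2 = midpoint(s1), midpoint(s2)
    dp = math.hypot(m1[0] - m2[0], m1[1] - m2[1])
    return w_angle * da + w_pos * dp
```

A matcher can then score a candidate model transformation by summing such distances over corresponding features, and search the transformation space (e.g., by cell decomposition) for the minimizing parameters.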


International Conference on Pattern Recognition | 2008

Efficient geometric matching with higher-order features

Chandramouli Paramanand; A. N. Rajagopalan

We propose a new technique in which line segments and elliptical arcs are used as features for recognizing image patterns. With this approach, the process of locating a model in a given image is efficient since the number of features to be compared is small. We propose distance measures to evaluate the similarity between the features of the model and those of the image. The model transformation parameters are found by searching the transformation space using cell decomposition.


International Conference on Image Processing | 2013

Motion blur for motion segmentation

Chandramouli Paramanand; A. N. Rajagopalan


IET International Conference on Visual Information Engineering (VIE 2006) | 2006

An efficient representation of digital curves with line segments and elliptical arcs

Chandramouli Paramanand; A. N. Rajagopalan

Collaboration


Dive into Chandramouli Paramanand's collaborations.

Top Co-Authors


A. N. Rajagopalan

Indian Institute of Technology Madras
