Publication


Featured research published by Deepu Rajan.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations

Deepu Rajan; Subhasis Chaudhuri

This paper presents a novel technique to simultaneously estimate the depth map and the focused image of a scene, both at super-resolution, from its defocused observations. Super-resolution refers to the generation of high spatial resolution images from a sequence of low resolution images. Hitherto, super-resolution techniques have been restricted mostly to the intensity domain. In this paper, we extend the scope of super-resolution imaging to acquire depth estimates at high spatial resolution simultaneously. Given a sequence of low resolution, blurred, and noisy observations of a static scene, the problem is to generate a dense depth map at a resolution higher than what can be generated from the observations, as well as to estimate the true high resolution focused image. Both the depth and the image are modeled as separate Markov random fields (MRF), and a maximum a posteriori (MAP) estimation method is used to recover the high resolution fields. Unlike most super-resolution and structure recovery techniques, there is no relative motion between the scene and the camera, so we do away with the correspondence problem.
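
To make the MAP-MRF formulation concrete, here is a minimal sketch of the posterior energy in Python. It is a simplification, not the paper's implementation: the paper's defocus blur is space-variant and driven by the depth map, whereas here a per-observation scalar blur width (`sigmas`) stands in, and `mrf_smoothness`, `map_energy`, and all parameter values are hypothetical names chosen for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mrf_smoothness(f):
    """First-order MRF prior: sum of squared differences of 4-neighbours."""
    return np.sum(np.diff(f, axis=0) ** 2) + np.sum(np.diff(f, axis=1) ** 2)

def map_energy(x_hr, z_hr, observations, sigmas, scale, lam_x=0.05, lam_z=0.05):
    """Posterior energy for the super-resolved image x_hr and depth z_hr.

    observations : list of LR images y_i
    sigmas       : per-observation blur widths (scalar stand-in for the
                   depth-dependent, space-variant defocus of the paper)
    scale        : decimation factor from HR to LR
    """
    energy = 0.0
    for y, s in zip(observations, sigmas):
        blurred = gaussian_filter(x_hr, s)          # defocus model (simplified)
        decimated = blurred[::scale, ::scale]       # decimation
        energy += np.sum((y - decimated) ** 2)      # data term
    # separate MRF priors on the intensity and depth fields, as in the paper
    energy += lam_x * mrf_smoothness(x_hr) + lam_z * mrf_smoothness(z_hr)
    return energy
```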


IEEE Transactions on Image Processing | 2010

Random Walks on Graphs for Salient Object Detection in Images

Viswanath Gopalakrishnan; Yiqun Hu; Deepu Rajan

We formulate the problem of salient object detection in images as an automatic labeling problem on the vertices of a weighted graph. The seed (labeled) nodes are first detected using Markov random walks performed on two different graphs that represent the image. While the global properties of the image are computed from the random walk on a complete graph, the local properties are computed from a sparse k-regular graph. The most salient node is selected as the one which is globally most isolated but falls on a locally compact object. A few background nodes and salient nodes are further identified based upon the random-walk-based hitting time to the most salient node. The salient nodes and the background nodes constitute the labeled nodes. A new graph representation of the image that captures the saliency between nodes more accurately, the “pop-out graph” model, is then computed based upon the knowledge of the labeled salient and background nodes. A semi-supervised learning technique is used to determine the labels of the unlabeled nodes by optimizing a smoothness objective on the label function over the newly created “pop-out graph” model, along with weighted soft constraints on the labeled nodes.
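
The final labeling step can be illustrated with a standard smoothness-plus-soft-constraints objective, E(f) = f'Lf + sum_i c_i (f_i - y_i)^2, whose minimizer has a closed form. This is a generic sketch of that family of semi-supervised objectives; the paper's exact objective and normalization may differ, and `propagate_labels` and the constraint weight `c` are illustrative names.

```python
import numpy as np

def propagate_labels(W, seed_idx, seed_labels, c=10.0):
    """W: symmetric affinity matrix of the 'pop-out graph' (n x n).
    seed_idx: indices of labeled nodes; seed_labels: 1 = salient, 0 = background.
    Returns a soft saliency label for every node."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W         # unnormalized graph Laplacian
    C = np.zeros(n)
    y = np.zeros(n)
    C[seed_idx] = c                        # soft-constraint weights on seeds
    y[seed_idx] = seed_labels
    # stationarity of E(f): (L + diag(C)) f = diag(C) y
    f = np.linalg.solve(L + np.diag(C), C * y)
    return f
```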


IEEE Transactions on Multimedia | 2009

Salient Region Detection by Modeling Distributions of Color and Orientation

Viswanath Gopalakrishnan; Yiqun Hu; Deepu Rajan

We present a robust salient region detection framework based on the color and orientation distributions in images. The proposed framework consists of a color saliency framework and an orientation saliency framework. The color saliency framework detects salient regions based on the spatial distribution of the component colors in the image space and their remoteness in the color space. The dominant hues in the image are used to initialize an expectation-maximization (EM) algorithm that fits a Gaussian mixture model in the hue-saturation (H-S) space. The mixture of Gaussians in H-S space is used to compute the inter-cluster distance in the H-S domain as well as the relative spread among the corresponding colors in the spatial domain. The orientation saliency framework detects salient regions based on the global and local behavior of different orientations in the image. The oriented spectral information from the Fourier transform of local patches is used to obtain the local orientation histogram of the image. Salient regions are then detected by identifying spatially confined orientations and local patches that possess high orientation entropy contrast. The final saliency map is selected as either the color saliency map or the orientation saliency map by automatically identifying which of the two leads to the correct identification of the salient region. The experiments are carried out on a large image database annotated with “ground-truth” salient regions, provided by Microsoft Research Asia, which enables us to conduct robust objective comparisons with other salient region detection algorithms.
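
A hedged sketch of the color-saliency clustering step: fit a Gaussian mixture in hue-saturation space and score each cluster by its remoteness from the other cluster centres. The dominant-hue initialization and the spatial-spread term of the paper are omitted, and `hs_cluster_remoteness` is an illustrative name, so this is only the skeleton of the method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def hs_cluster_remoteness(hs_pixels, n_clusters=5, seed=0):
    """hs_pixels: (N, 2) array of hue-saturation values of image pixels.
    Returns per-cluster remoteness scores and per-pixel cluster labels."""
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed)
    labels = gmm.fit_predict(hs_pixels)
    means = gmm.means_                                   # cluster centres in H-S
    # remoteness: mean distance of a cluster centre to all other centres
    d = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
    remoteness = d.sum(axis=1) / (n_clusters - 1)
    return remoteness, labels
```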


Advances in Multimedia | 2004

Salient region detection using weighted feature maps based on the human visual attention model

Yiqun Hu; Xing Xie; Wei-Ying Ma; Liang-Tien Chia; Deepu Rajan

Detection of salient regions in images is useful for object-based image retrieval and browsing applications. This task can be done using methods based on the human visual attention model [1], where feature maps corresponding to color, intensity and orientation capture the corresponding salient regions. In this paper, we propose a strategy for combining the salient regions from the individual feature maps based on a new Composite Saliency Indicator (CSI), which measures the contribution of each feature map to saliency. The method also carries out a dynamic weighting of the individual feature maps. The experimental results indicate that this combination strategy reflects the salient regions in an image more accurately.
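
The paper's exact Composite Saliency Indicator is not spelled out in this abstract, so the sketch below uses a stand-in: each feature map is weighted by the spatial compactness of its strongest responses, which captures the same intent of letting more decisive maps contribute more. `compactness_weight`, `combine_feature_maps`, and `top_frac` are hypothetical.

```python
import numpy as np

def compactness_weight(fmap, top_frac=0.1):
    """Score one feature map: inverse spatial spread of its strongest pixels."""
    h, w = fmap.shape
    k = max(1, int(top_frac * fmap.size))
    idx = np.argsort(fmap, axis=None)[-k:]               # strongest responses
    ys, xs = np.unravel_index(idx, fmap.shape)
    spread = np.std(ys / h) + np.std(xs / w)             # normalized spread
    return 1.0 / (spread + 1e-6)

def combine_feature_maps(maps):
    """maps: list of color/intensity/orientation maps, each (H, W) in [0, 1]."""
    weights = np.array([compactness_weight(m) for m in maps])
    weights /= weights.sum()
    return sum(wi * m for wi, m in zip(weights, maps))
```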


IEEE Signal Processing Magazine | 2003

Multi-objective super resolution: concepts and examples

Deepu Rajan; Subhasis Chaudhuri; M.V. Joshi

We described methods for simultaneously generating the super-resolved depth map and image from LR observations. Structural information is embedded within the observations and, through the two formulations of the depth-from-defocus (DFD) and shape-from-shading (SFS) problems, we were able to generate the super-resolved images and structures. The first method avoids the correspondence and warping problems inherent in current SR techniques that involve the motion cue in the LR observations, and instead uses the more natural depth-related defocus cue in real aperture imaging. The second method, while again avoiding the correspondence problem, also demonstrates the usefulness of the generalized interpolation scheme, leading to more flexibility in the final SR image in the sense that the LR image can be viewed at SR with an arbitrary light source position. The quality of the super-resolved depth and intensity maps has been found to be quite good. The MAP-MRF framework used in both methods models the surface normal and the intensity field as separate MRFs, which helps in regularizing the solution.


Computer Vision and Pattern Recognition | 2009

Random walks on graphs to model saliency in images

Viswanath Gopalakrishnan; Yiqun Hu; Deepu Rajan

We formulate the problem of salient region detection in images as Markov random walks performed on images represented as graphs. While the global properties of the image are extracted from the random walk on a complete graph, the local properties are extracted from a k-regular graph. The most salient node is selected as the one which is globally most isolated but falls on a compact object. The equilibrium hitting times of the ergodic Markov chain hold the key to identifying the most salient node. The background nodes, which are farthest from the most salient node, are also identified based on the hitting times calculated from the random walk. Finally, a seeded salient region identification mechanism is developed to identify the salient parts of the image. The robustness of the proposed algorithm is objectively demonstrated with experiments carried out on a large image database annotated with “ground-truth” salient regions.
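
Hitting times of an ergodic chain can be computed in closed form from the fundamental matrix (the Grinstead and Snell formulation), which is one standard way to realize the quantity this abstract relies on; whether the authors compute it this way is not stated here. `hitting_times` is an illustrative name, and the walk is assumed to run on a symmetric affinity matrix W with transition matrix P = D^-1 W.

```python
import numpy as np

def hitting_times(W):
    """Returns H with H[i, j] = expected steps to first reach node j from i."""
    d = W.sum(axis=1)
    P = W / d[:, None]                     # row-stochastic transition matrix
    pi = d / d.sum()                       # stationary distribution (undirected walk)
    n = len(pi)
    # fundamental matrix Z = (I - P + 1 pi^T)^-1
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    # mean first-passage times: h(i, j) = (Z[j, j] - Z[i, j]) / pi[j]
    H = (np.diag(Z)[None, :] - Z) / pi[None, :]
    return H
```

A globally isolated node, in this view, is one that other nodes take a long time to reach, e.g. the column of H with the largest sum.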


Computer Vision and Pattern Recognition | 2013

Improving Image Matting Using Comprehensive Sampling Sets

Ehsan Shahrian; Deepu Rajan; Brian L. Price; Scott D. Cohen

In this paper, we present a new image matting algorithm that achieves state-of-the-art performance on a benchmark dataset of images. This is achieved by solving two major problems encountered by current sampling-based algorithms. The first is that the range in which the foreground and background are sampled is often so limited that the true foreground and background colors are not present. Here, we describe a method by which a more comprehensive and representative set of samples is collected so as not to miss the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and ensuring that samples from each color distribution are included. The second problem is the overlap in the color distributions of foreground and background regions, which causes sampling-based methods to fail to pick the correct samples. Our design of an objective function forces the selection of foreground and background samples that are generated from well-separated distributions. Comparison and evaluation on the benchmark dataset at www.alphamatting.com show that the proposed method ranks first in terms of the error measures used on the website.
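
The basic quantities used to rank a candidate (F, B) pair are standard in sampling-based matting: the alpha obtained by projecting the pixel color onto the F-B line, and the chromatic distortion of the resulting composite. The sketch below shows only these shared building blocks; the paper's full objective adds the terms that prefer samples from well-separated distributions. Function names are illustrative.

```python
import numpy as np

def estimate_alpha(I, F, B):
    """Project pixel colour I onto the F-B line to get alpha."""
    fb = F - B
    return float(np.clip(np.dot(I - B, fb) / (np.dot(fb, fb) + 1e-8), 0, 1))

def chromatic_distortion(I, F, B):
    """How well the pair explains the pixel: ||I - (alpha F + (1-alpha) B)||."""
    a = estimate_alpha(I, F, B)
    return float(np.linalg.norm(I - (a * F + (1 - a) * B)))

def best_pair(I, fg_samples, bg_samples):
    """Pick the candidate (F, B) pair with the smallest distortion."""
    pairs = [(F, B) for F in fg_samples for B in bg_samples]
    return min(pairs, key=lambda p: chromatic_distortion(I, *p))
```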


Journal of Mathematical Imaging and Vision | 2002

An MRF-Based Approach to Generation of Super-Resolution Images from Blurred Observations

Deepu Rajan; Subhasis Chaudhuri

This paper presents a new technique for generating a high resolution image from a blurred image sequence; this is also referred to as super-resolution restoration of images. The image sequence consists of decimated, blurred and noisy versions of the high resolution image. The high resolution image is modeled as a Markov random field (MRF) and a maximum a posteriori (MAP) estimation technique is used for super-resolution restoration. Unlike other super-resolution imaging methods, the proposed technique does not require sub-pixel registration of given observations. A simple gradient descent method is used to optimize the functional. The discontinuities in the intensity process can be preserved by introducing suitable line processes. Superiority of this technique to standard methods of image expansion like pixel replication and spline interpolation is illustrated.
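
A minimal gradient-descent sketch of the MAP estimate, under stated simplifications: decimation is plain subsampling, the blur is a fixed Gaussian, and a quadratic smoothness prior stands in for the paper's line-process prior (so discontinuities are not preserved here). The initial guess uses pixel replication, the baseline the paper compares against. `sr_gradient_descent` and all parameter values are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sr_gradient_descent(observations, scale, sigma=1.0, lam=0.05,
                        lr=0.1, n_iter=200):
    """observations: list of LR images; returns the estimated HR image."""
    x = np.mean([np.kron(y, np.ones((scale, scale))) for y in observations],
                axis=0)                                  # pixel-replication init
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y in observations:
            r = gaussian_filter(x, sigma)[::scale, ::scale] - y  # residual
            up = np.zeros_like(x)
            up[::scale, ::scale] = r                     # adjoint of decimation
            grad += gaussian_filter(up, sigma)           # adjoint of blur
        # gradient of the quadratic smoothness prior (discrete Laplacian)
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        grad -= lam * lap
        x -= lr * grad
    return x
```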


British Machine Vision Conference | 2013

Depth really Matters: Improving Visual Salient Region Detection with Depth.

Karthik Desingh; Madhava Krishna K; Deepu Rajan; C. V. Jawahar

Depth information has been shown to affect the identification of visually salient regions in images. In this paper, we investigate the role of depth in saliency detection in the presence of (i) competing saliencies due to appearance, (ii) depth-induced blur and (iii) centre-bias. Having established through experiments that depth continues to be a significant contributor to saliency in the presence of these cues, we propose a 3D-saliency formulation that takes into account structural features of objects in an indoor setting to identify regions at salient depth levels. The computed 3D-saliency is used in conjunction with 2D-saliency models through non-linear regression using an SVM to improve saliency maps. Experiments on benchmark datasets containing depth information show that the proposed fusion of 3D-saliency with 2D-saliency models results in an average improvement in ROC scores of about 9% over state-of-the-art 2D saliency models. The main contributions of this paper are: (i) the development of a 3D-saliency model that integrates depth and geometric features of object surfaces in indoor scenes; (ii) fusion of appearance (RGB) saliency with depth saliency through non-linear regression using an SVM; (iii) experiments to support the hypothesis that depth improves saliency detection in the presence of blur and centre-bias. The effectiveness of the 3D-saliency model and its fusion with RGB-saliency is illustrated through experiments on two benchmark datasets that contain depth information. Current state-of-the-art saliency detection algorithms perform poorly on these datasets, which depict indoor scenes, due to the presence of competing saliencies in the form of color contrast. For example, Fig. 1 shows the saliency maps of [1] for different scenes, along with the human eye fixations and our proposed saliency map after fusion. In the first scene of Fig. 1, illumination plays a spoiler role in the RGB-saliency map. In the second scene, the RGB-saliency is focused on the cap even though multiple salient objects are present. The last scene shows the limitation of RGB-saliency when the object is similar in appearance to the background.

Effect of depth on saliency: In [4], it is shown that depth is an important cue for saliency. In this paper, we go further and verify whether depth alone influences saliency. Different scenes were captured for experimentation using a Kinect sensor. The observations from these experiments are: (i) humans fixate on objects at closer depth in the presence of visually competing salient objects in the background; (ii) early attention happens on objects at closer depth; (iii) effective fixations are higher on a low-contrast foreground than on high-contrast objects in the background that are blurred; (iv) a low-contrast object placed at the centre of the field of view gets more attention than at other locations. As a result of these observations, we develop a 3D-saliency that captures the depth information of regions in the scene.

3D-Saliency: We adapt the region-based contrast method of Cheng et al. [1] to compute contrast strengths for the segmented 3D surfaces or regions. Each segmented region is assigned a contrast score using surface normals as the feature. The structure of a surface can be described by the distribution of normals in the region, so we compute a histogram of the angular distances formed by every pair of normals in a region; every region Rk is thus associated with a histogram Hk. The contrast score Ck of a region Rk is computed as the sum of the dot products of its histogram with the histograms of the other regions in the scene. Since the depth of a region influences visual attention, the contrast score is scaled by Zk, the depth of region Rk from the sensor. The sizes of the regions, i.e., the number of points in each region, must also be considered; we use the ratio of the region size to half the scene size. With nk denoting the number of 3D points in region Rk and n the total number of points in the scene, the contrast score becomes Ck = (nk / (n/2)) · Zk · Σj≠k (Hk · Hj).

Figure 1: Four different scenes and their saliency maps. For each scene, from top left: (i) original image, (ii) RGB-saliency map using RC [1], (iii) human fixations from an eye-tracker, and (iv) fused RGBD-saliency map.
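
The contrast score described above can be sketched directly from its ingredients: a histogram Hk of pairwise angular distances between surface normals per region, dot products against the other regions' histograms, and scaling by depth Zk and the region-size ratio. The bin count and the exact combination of the pieces follow the reconstruction above and are assumptions, not the paper's code.

```python
import numpy as np

def normal_histogram(normals, n_bins=16):
    """normals: (m, 3) unit surface normals of one region."""
    cosines = np.clip(normals @ normals.T, -1.0, 1.0)
    iu = np.triu_indices(len(normals), k=1)              # each pair once
    angles = np.arccos(cosines[iu])
    hist, _ = np.histogram(angles, bins=n_bins, range=(0, np.pi))
    return hist / max(hist.sum(), 1)                     # normalized histogram Hk

def contrast_scores(region_normals, region_depths, n_scene_points):
    """region_normals: list of per-region (m, 3) arrays; region_depths: Zk values."""
    hists = [normal_histogram(nrm) for nrm in region_normals]
    sizes = np.array([len(nrm) for nrm in region_normals])
    scores = []
    for k, Hk in enumerate(hists):
        c = sum(Hk @ Hj for j, Hj in enumerate(hists) if j != k)
        size_ratio = sizes[k] / (n_scene_points / 2.0)   # region-size factor
        scores.append(size_ratio * region_depths[k] * c)
    return np.array(scores)
```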


Computer Vision and Pattern Recognition | 2012

Weighted color and texture sample selection for image matting

Ehsan Shahrian; Deepu Rajan

Color sampling-based matting methods leverage color information to find the best foreground and background samples for the unknown pixels. Such methods do not perform well if there is an overlap in the color distributions of the foreground and background regions, because color cannot distinguish between these regions and hence the selected samples cannot reliably estimate the matte. Similarly, alpha propagation based matting methods may fail when the affinity among neighboring pixels is reduced by strong edges. In this paper, we overcome these two problems by considering texture as a feature that can complement color to improve matting. The contributions of texture and color are automatically estimated by analyzing the content of the image. An objective function containing color and texture components is optimized to choose the best foreground and background pair among a set of candidate pairs. Experiments are carried out on a benchmark data set, and an independent evaluation of the results shows that the proposed method ranks first among all other image matting methods.
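
A hedged sketch of the weighting idea: the paper estimates the contributions of color and texture automatically from image content; as a simple stand-in, the weight below grows with how well a feature separates the known foreground and background samples. `feature_weight` and `pair_cost` are illustrative names, not the paper's API.

```python
import numpy as np

def feature_weight(fg_feats, bg_feats):
    """Between-class distance over within-class spread, squashed to (0, 1).
    fg_feats, bg_feats: (n, d) feature arrays for known FG / BG samples."""
    between = np.linalg.norm(fg_feats.mean(0) - bg_feats.mean(0))
    within = fg_feats.std() + bg_feats.std() + 1e-8
    return float(1 - np.exp(-between / within))

def pair_cost(color_cost, texture_cost, w_color):
    """Objective for one candidate (F, B) pair; lower is better."""
    return w_color * color_cost + (1.0 - w_color) * texture_cost
```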

Collaboration


Dive into Deepu Rajan's collaboration.

Top Co-Authors

Liang-Tien Chia
Nanyang Technological University

Yiqun Hu
Nanyang Technological University

Haoran Yi
Nanyang Technological University

Subhasis Chaudhuri
Indian Institute of Technology Bombay

Hisham Cholakkal
Nanyang Technological University

Jubin Johnson
Nanyang Technological University

Alex Yong-Sang Chia
Nanyang Technological University

Chai Quek
Nanyang Technological University

Viswanath Gopalakrishnan
Nanyang Technological University