

Publications


Featured research published by Dheeraj Singaraju.


Computer Vision and Pattern Recognition | 2009

New appearance models for natural image matting

Dheeraj Singaraju; Carsten Rother; Christoph Rhemann

Image matting is the task of estimating a foreground and a background layer from a single image. To solve this ill-posed problem, an accurate model of the scene's appearance is necessary. Existing methods that provide a closed-form solution to this problem assume that the colors of the foreground and background layers are locally linear. In this paper, we show that such models can overfit when the colors of the two layers are locally constant. We derive new closed-form expressions for such cases and show that our models are more compact than existing ones. In particular, the null space of our cost function is a subset of the null space constructed by existing approaches. We discuss the bias towards specific solutions for each formulation. Experiments on synthetic and real data confirm that our compact models estimate alpha mattes more accurately than existing techniques, without the need for additional user interaction.
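The locally constant color model above can be illustrated with a small synthetic sketch (all data, colors, and variable names here are hypothetical, not the paper's code): if the foreground color F and background color B are constant in a window, the composite I = αF + (1 − α)B is affine in α, so α can be recovered per window by a least-squares affine fit.

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([0.9, 0.2, 0.1])   # assumed constant foreground color
B = np.array([0.1, 0.4, 0.8])   # assumed constant background color
alpha_true = rng.uniform(0, 1, size=50)
# Composite each pixel: I = alpha*F + (1 - alpha)*B
I = alpha_true[:, None] * F + (1 - alpha_true)[:, None] * B

# Fit the affine model alpha ~ a . I + b over the window.
A = np.hstack([I, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(A, alpha_true, rcond=None)
alpha_est = A @ coef
print(np.abs(alpha_est - alpha_true).max())  # essentially zero under this model
```

Under exactly constant layer colors the fit is exact, which is the compactness the abstract refers to.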


Computer Vision and Pattern Recognition | 2009

P-brush: Continuous valued MRFs with normed pairwise distributions for image segmentation

Dheeraj Singaraju; Leo Grady; René Vidal

Interactive image segmentation traditionally involves the use of algorithms such as graph cuts or random walker. Common concerns with using graph cuts are metrication artifacts (blockiness) and the shrinking bias (bias towards shorter boundaries). The random walker avoids these problems, but suffers from the proximity bias (sensitivity to the location of pixels labeled by the user). In this work, we introduce a new family of segmentation algorithms that includes graph cuts and random walker as special cases. We explore image segmentation using continuous-valued Markov random fields (MRFs) with probability distributions following the p-norm of the difference between configurations of neighboring sites. For p = 1 these MRFs may be interpreted as the standard binary MRF used by graph cuts, while for p = 2 these MRFs may be viewed as the Gaussian MRFs employed by the random walker algorithm. By allowing the probability distribution for neighboring sites to take any arbitrary p-norm (p ≥ 1), we pave the path for hybrid extensions of these algorithms. Experiments show that a fractional p (1 < p < 2) can resolve the aforementioned drawbacks of these algorithms.
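As a toy illustration of the p = 2 (random-walker) special case, the minimizer of the quadratic energy on a uniform 1-D chain with two boundary seeds solves a Laplace system, and the solution is simply linear interpolation between the seeds (a minimal sketch with hypothetical uniform weights, not the paper's implementation):

```python
import numpy as np

# Minimize E(x) = sum_ij |x_i - x_j|^2 on a 1-D chain of 8 nodes
# with seeds x_0 = 0 and x_7 = 1; the 6 interior values solve L x = b.
n = 6                                              # interior nodes
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n)
b[-1] = 1.0                                        # coupling to the seed x_7 = 1
x = np.linalg.solve(L, b)
print(x)  # linear ramp between the two seeds
```

For a fractional p between 1 and 2 the energy is no longer quadratic and the minimizer interpolates between the graph-cut and random-walker behaviors.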


Computer Vision and Pattern Recognition | 2007

Projective Factorization of Multiple Rigid-Body Motions

Ting Li; Vinutha Kallem; Dheeraj Singaraju; René Vidal

Given point correspondences in multiple perspective views of a scene containing multiple rigid-body motions, we present an algorithm for segmenting the correspondences according to the multiple motions. We exploit the fact that when the depths of the points are known, the point trajectories associated with a single motion live in a subspace of dimension at most four. Thus motion segmentation with known depths can be achieved by methods of subspace separation, such as GPCA or LSA. When the depths are unknown, we proceed iteratively. Given the segmentation, we compute the depths using standard techniques. Given the depths, we use GPCA or LSA to segment the scene into multiple motions. Experiments on the Hopkins 155 motion segmentation database show that our method compares favorably against existing affine motion segmentation methods in terms of segmentation error and execution time.
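The rank constraint exploited above can be checked numerically: with known depths, the depth-scaled measurement matrix of a single rigid motion factors as a (3F x 4) camera stack times a (4 x P) structure matrix, so its rank is at most four (a synthetic sketch with random cameras and points, all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
F, P = 5, 20                                       # frames, points
X = np.vstack([rng.normal(size=(3, P)),
               np.ones((1, P))])                   # homogeneous 3-D points, 4 x P
cams = rng.normal(size=(F, 3, 4))                  # random projective cameras
# Depth-scaled measurements: stack lambda * x = cam_f @ X over frames.
W = np.vstack([cam @ X for cam in cams])           # (3F) x P
rank = np.linalg.matrix_rank(W)
print(rank)  # at most 4
```

With multiple motions the columns of W lie in a union of such subspaces, which is what GPCA or LSA then separates.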


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Estimation of Alpha Mattes for Multiple Image Layers

Dheeraj Singaraju; René Vidal

Image matting deals with the estimation of the alpha matte at each pixel, i.e., the contribution of the foreground and background objects to the composition of the image at that pixel. Existing methods for image matting are typically limited to estimating the alpha mattes for two image layers only. However, in several applications one is interested in editing images with multiple objects. In this work, we consider the problem of estimating the alpha mattes of multiple (n ≥ 2) image layers. We show that this problem can be decomposed into n simpler subproblems of alpha matte estimation for two image layers. Moreover, we show that, by construction, the estimated alpha mattes at each pixel are constrained to sum up to 1 across the multiple image layers. A key feature of our framework is that the alpha mattes can be estimated in closed form. We further show that, due to the nature of spatial regularization used in the estimation, the final estimated alpha mattes are not constrained to take values in [0, 1]. Hence, we study the optimization problem of estimating the alpha mattes for multiple image layers subject to the fact that the alpha mattes are nonnegative and sum up to 1 at each pixel. We present experiments to show that our proposed method can be used to extract mattes of multiple image layers.
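The final constrained step described above, forcing the alpha mattes at a pixel to be nonnegative and sum to 1, amounts to restricting each pixel's vector of mattes to the probability simplex. A standard Euclidean simplex projection (a generic sketch, not necessarily the optimizer used in the paper) looks like this:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1 - css) / idx > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

# Hypothetical unconstrained mattes for 3 layers at one pixel:
a = project_simplex(np.array([0.8, 0.6, -0.1]))
print(a, a.sum())  # nonnegative, sums to 1
```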


Computer Vision and Pattern Recognition | 2008

Interactive image matting for multiple layers

Dheeraj Singaraju; René Vidal

Image matting deals with finding the probability that each pixel in an image belongs to a user-specified 'object' or to the remaining 'background'. Most existing methods estimate the mattes for two groups only. Moreover, most of these methods estimate the mattes with a particular bias towards the object, and hence the resulting mattes do not sum up to 1 across the different groups. In this work, we propose a general framework to estimate the alpha mattes for multiple image layers. The mattes are estimated as the solution to the Dirichlet problem on a combinatorial graph with boundary conditions. We consider the constrained optimization problem that enforces the alpha mattes to take values in [0, 1] and sum up to 1 at each pixel. We also analyze the properties of the solution obtained by relaxing either of the two constraints. Experiments demonstrate that our proposed method can be used to extract accurate mattes of multiple objects with little user interaction.
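The combinatorial Dirichlet problem mentioned above can be sketched on a tiny graph (a hypothetical 4-node path with uniform weights, not the paper's code): solving the Laplace system for each label with indicator boundary conditions at the seeds yields per-pixel values that sum to 1 across labels by construction.

```python
import numpy as np

# Graph Laplacian of a 4-node path; node 0 seeded label A, node 3 label B.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], float)
seeds = {0: 0, 3: 1}                      # node -> label index
U = [1, 2]                                # unlabeled nodes
S = list(seeds)                           # seeded nodes
x = np.zeros((4, 2))
for node, lab in seeds.items():
    x[node, lab] = 1.0                    # indicator boundary conditions
Luu = L[np.ix_(U, U)]
for k in range(2):                        # one Dirichlet solve per label
    b = -L[np.ix_(U, S)] @ x[S, k]
    x[U, k] = np.linalg.solve(Luu, b)
print(x.sum(axis=1))  # 1 at every node
```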


Computer Vision and Pattern Recognition | 2008

Interactive image segmentation via minimization of quadratic energies on directed graphs

Dheeraj Singaraju; Leo Grady; René Vidal

We propose a scheme to introduce directionality in the random walker algorithm for image segmentation. In particular, we extend the optimization framework of this algorithm to combinatorial graphs with directed edges. Our scheme is interactive and requires the user to label a few pixels that are representative of a foreground object and of the background. These labeled pixels are used to learn intensity models for the object and the background, which allow us to automatically set the weights of the directed edges. These weights are chosen so that they bias the direction of the object boundary gradients to flow from regions that agree well with the learned object intensity model to regions that do not agree well. We use these weights to define an energy function that associates asymmetric quadratic penalties with the edges in the graph. We show that this energy function is convex, hence it has a unique minimizer. We propose a provably convergent iterative algorithm for minimizing this energy function. We also describe the construction of an equivalent electrical network with diodes and resistors that solves the same segmentation problem as our framework. Finally, our experiments on a database of 69 images show that the use of directional information does improve the segmenting power of the random walker algorithm.
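The asymmetric quadratic edge penalty described above can be sketched in a few lines (hypothetical weights, for illustration only): the cost of an edge depends on the sign of the difference between its endpoint values, penalizing flow in one direction more than the other while remaining convex.

```python
# Asymmetric quadratic penalty on a directed edge (i, j):
# crossing the edge with x_i > x_j costs w_fwd, the reverse costs w_bwd.
def edge_energy(xi, xj, w_fwd=4.0, w_bwd=1.0):
    d = xi - xj
    return w_fwd * max(d, 0.0) ** 2 + w_bwd * max(-d, 0.0) ** 2

print(edge_energy(1.0, 0.0), edge_energy(0.0, 1.0))  # 4.0 1.0
```

The directionality is visible in the asymmetry of the two printed costs; summing such terms over all edges gives the convex energy the paper minimizes.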


Computer Vision and Pattern Recognition | 2005

A closed form solution to direct motion segmentation

René Vidal; Dheeraj Singaraju

We present a closed form solution to the problem of segmenting multiple 2D motion models of the same type directly from the partial derivatives of an image sequence. We introduce the multibody brightness constancy constraint (MBCC), a polynomial equation relating motion models, image derivatives and pixel coordinates that is independent of the segmentation of the image measurements. We first show that the optical flow at a pixel can be obtained analytically as the derivative of the MBCC at the corresponding image measurement, without knowing the motion model associated with that pixel. We then show that the parameters of the multiple motion models can be obtained from the cross products of the derivatives of the MBCC at a set of image measurements that minimize a suitable distance function. Our approach requires no feature tracking, point correspondences or optical flow, and provides a global non-iterative solution that can be used to initialize more expensive iterative approaches to motion segmentation. Experiments on real and synthetic sequences are also presented.
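The key analytic fact above, that the optical flow at a pixel falls out of the derivative of the MBCC, can be verified on a toy example with two translational motions (synthetic numbers, not the paper's code): at a measurement satisfying one brightness constancy factor exactly, the gradient of the product is proportional to that motion's flow vector.

```python
import numpy as np

# Two hypothetical translational motions (u, v):
b1 = np.array([1.0, 2.0, 1.0])    # [u1, v1, 1]
b2 = np.array([-3.0, 0.5, 1.0])   # [u2, v2, 1]

def mbcc(m):
    # Multibody brightness constancy: product of the two BCC factors,
    # where m = (Ix, Iy, It) is an image measurement.
    return (m @ b1) * (m @ b2)

def grad_mbcc(m):
    return (m @ b2) * b1 + (m @ b1) * b2

# A measurement obeying motion 1 exactly (m . b1 = 0):
m = np.array([2.0, -1.0, 0.0])
g = grad_mbcc(m)
flow = g[:2] / g[2]               # normalize the last coordinate
print(flow)                        # recovers (u1, v1) = (1, 2)
```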


Asian Conference on Computer Vision | 2012

Using models of objects with deformable parts for joint categorization and segmentation of objects

Nikhil Naikal; Dheeraj Singaraju; Shankar Sastry

Several formulations based on Random Fields (RFs) have been proposed for joint categorization and segmentation (JCaS) of objects in images. The RF sites correspond to pixels or superpixels of an image, and one defines potential functions (typically over local neighborhoods) that assign costs to the different possible assignments of labels to several different sites. Since the segmentation is unknown a priori, one cannot define potential functions over arbitrarily large neighborhoods, as that may cross object boundaries. Categorization algorithms extract a set of interest points from the entire image and solve the categorization problem by optimizing cost functions that depend on the feature descriptors extracted from these interest points. There is thus a disconnect between segmentation algorithms, which consider local neighborhoods, and categorization algorithms, which consider non-local neighborhoods. In this work, we propose to bridge this gap by introducing a novel formulation that uses models of objects with deformable parts, classically used for object categorization, to solve the JCaS problem. We use these models to introduce two new classes of potential functions for JCaS: (a) the first class of potential functions encodes the model score for detecting an object as a function of its visible parts only, and (b) the second class of potential functions encodes shape priors for each visible part and is used to bias the segmentation of the pixels in the support region of the part towards the foreground object label. We show that most existing deformable parts formulations can be used to define these potential functions and that the resulting potential functions can be optimized exactly using min-cut. As a result, these new potential functions can be integrated with most existing RF-based formulations for JCaS.


Asian Conference on Computer Vision | 2006

A bottom up algebraic approach to motion segmentation

Dheeraj Singaraju; René Vidal

We present a bottom up algebraic approach for segmenting multiple 2D motion models directly from the partial derivatives of an image sequence. Our method fits a polynomial called the multibody brightness constancy constraint (MBCC) to a window around each pixel of the scene and obtains a local motion model from the derivatives of the MBCC. These local models are then clustered to obtain the parameters of the motion models for the entire scene. Motion segmentation is obtained by assigning to each pixel the dominant motion model in a window around it. Our approach requires no initialization, can handle multiple motions in a window (thus dealing with the aperture problem) and automatically incorporates spatial regularization. Therefore, it naturally combines the advantages of both local and global approaches to motion segmentation. Experiments on real data compare our method with previous local and global approaches.


International Conference on Acoustics, Speech, and Signal Processing | 2012

On the Lagrangian biduality of sparsity minimization problems

Dheeraj Singaraju; Roberto Tron; Ehsan Elhamifar; Allen Y. Yang; Shankar Sastry

We present a novel primal-dual analysis on a class of NP-hard sparsity minimization problems to provide new interpretations for their well-known convex relaxations. We show that the Lagrangian bidual (i.e., the Lagrangian dual of the Lagrangian dual) of the sparsity minimization problems can be used to derive interesting convex relaxations: the bidual of the ℓ₀-minimization problem is ℓ₁-minimization, and the bidual of ℓ₀,₁-minimization for enforcing group sparsity on structured data is the ℓ₁,∞-minimization problem. Intuitions from the bidual-based relaxation are used to introduce a new family of relaxations for the group sparsity minimization problem.
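The ℓ₁ relaxation that the bidual analysis motivates can be sketched concretely: ℓ₁-minimization subject to linear constraints is a standard linear program (min Σt with −t ≤ x ≤ t, Ax = b), and on a small underdetermined system it recovers the sparse solution. This is a generic reformulation for illustration, not the paper's experiments; the matrix and vectors are made up.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
n = A.shape[1]
I = np.eye(n)
# Variables are [x; t]; minimize sum(t) subject to -t <= x <= t, Ax = b.
c = np.hstack([np.zeros(n), np.ones(n)])
A_ub = np.block([[ I, -I],      #  x - t <= 0
                 [-I, -I]])     # -x - t <= 0
A_eq = np.hstack([A, np.zeros((2, n))])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
              A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x = res.x[:n]
print(np.round(x, 6))  # the sparsest feasible solution [0, 0, 1]
```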

Collaboration


Dheeraj Singaraju's top co-authors:

René Vidal (Johns Hopkins University)
Shankar Sastry (University of California)
Ting Li (Johns Hopkins University)
Allen Y. Yang (University of California)
Nikhil Naikal (University of California)
Roberto Tron (University of Pennsylvania)
Vinutha Kallem (University of Pennsylvania)