Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Suvrit Sra is active.

Publication


Featured research published by Suvrit Sra.


International Conference on Machine Learning | 2007

Information-theoretic metric learning

Jason V. Davis; Brian Kulis; Prateek Jain; Suvrit Sra; Inderjit S. Dhillon

In this paper, we present an information-theoretic approach to learning a Mahalanobis distance function. We formulate the problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the distance function. We express this problem as a particular Bregman optimization problem---that of minimizing the LogDet divergence subject to linear constraints. Our resulting algorithm has several advantages over existing methods. First, our method can handle a wide variety of constraints and can optionally incorporate a prior on the distance function. Second, it is fast and scalable. Unlike most existing methods, no eigenvalue computations or semi-definite programming are required. We also present an online version and derive regret bounds for the resulting algorithm. Finally, we evaluate our method on a recent error reporting system for software called Clarify, in the context of metric learning for nearest neighbor classification, as well as on standard data sets.
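
The two objects at the heart of this formulation are easy to state concretely: the Mahalanobis distance being learned and the LogDet divergence being minimized. Below is a minimal NumPy sketch of both (an illustration, not the authors' released implementation):

    import numpy as np

    def mahalanobis_sq(x, y, A):
        # Squared Mahalanobis distance d_A(x, y) = (x - y)^T A (x - y),
        # parameterized by a positive definite matrix A.
        d = x - y
        return float(d @ A @ d)

    def logdet_divergence(A, A0):
        # LogDet divergence between positive definite matrices:
        #   D_ld(A, A0) = tr(A A0^{-1}) - log det(A A0^{-1}) - n.
        # ITML minimizes this quantity subject to linear constraints
        # on pairwise distances.
        n = A.shape[0]
        M = A @ np.linalg.inv(A0)
        _, logdet = np.linalg.slogdet(M)
        return float(np.trace(M) - logdet - n)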


Computer Vision and Pattern Recognition | 2010

Efficient filter flow for space-variant multiframe blind deconvolution

Michael Hirsch; Suvrit Sra; Bernhard Schölkopf; Stefan Harmeling

Motivated by the goal of facilitating space-variant blind deconvolution, we present a class of linear transformations that are expressive enough to represent space-variant filters yet are designed for efficient matrix-vector multiplication. Successful results on astronomical imaging through atmospheric turbulence and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.
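
One way to picture such a transformation is as an overlap-add of per-patch convolutions: cut the image into overlapping windows, blur each window with its own kernel via the FFT, and blend the results. The toy sketch below illustrates why matrix-vector products with such an operator stay cheap; the patch size, window choice, and kernels dictionary are illustrative assumptions, not the paper's exact operator:

    import numpy as np
    from scipy.signal import fftconvolve

    def space_variant_blur(img, kernels, patch, step):
        # Toy space-variant blur: tile the image into overlapping patches,
        # convolve each patch with its own kernel, then overlap-add with a
        # smooth window. The operator stays linear in the image, and each
        # patch costs only one FFT-based convolution.
        H, W = img.shape
        out = np.zeros((H, W))
        weight = np.zeros((H, W))
        win = np.outer(np.hanning(patch), np.hanning(patch))
        for pi, top in enumerate(range(0, H - patch + 1, step)):
            for pj, left in enumerate(range(0, W - patch + 1, step)):
                tile = img[top:top + patch, left:left + patch] * win
                blurred = fftconvolve(tile, kernels[(pi, pj)], mode="same")
                out[top:top + patch, left:left + patch] += blurred
                weight[top:top + patch, left:left + patch] += win
        return out / np.maximum(weight, 1e-8)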


Knowledge Discovery and Data Mining | 2003

Generative model-based clustering of directional data

Arindam Banerjee; Inderjit S. Dhillon; Joydeep Ghosh; Suvrit Sra

High-dimensional directional data are becoming increasingly important in contemporary applications such as the analysis of text and gene-expression data. A natural model for multivariate directional data is provided by the von Mises-Fisher (vMF) distribution on the unit hypersphere, which is analogous to the multivariate Gaussian distribution in R^d. In this paper, we propose modeling complex directional data as a mixture of vMF distributions. We derive and analyze two variants of the Expectation Maximization (EM) framework for estimating the parameters of this mixture, and we propose two clustering algorithms corresponding to these variants. An interesting aspect of our methodology is that the spherical k-means algorithm (k-means with cosine similarity) can be shown to be a special case of both our algorithms. Thus, modeling text data by vMF distributions lends theoretical validity to the use of cosine similarity, which has been widely used by the information retrieval community. As part of experimental validation, we present results on modeling high-dimensional text and gene-expression data as mixtures of vMF distributions. The results indicate that our approach yields superior clusterings, especially for difficult clustering tasks in high-dimensional spaces.
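
The spherical k-means special case mentioned above is short enough to write out. A minimal sketch, assuming unit-normalized rows and simplified initialization and stopping rules:

    import numpy as np

    def spherical_kmeans(X, k, iters=50, seed=0):
        # k-means with cosine similarity: assign each point to the most
        # similar mean direction, then renormalize the cluster sums.
        rng = np.random.default_rng(seed)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmax(X @ centers.T, axis=1)
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    s = members.sum(axis=0)
                    centers[j] = s / np.linalg.norm(s)
        return labels, centers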


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Jensen-Bregman LogDet Divergence with Application to Efficient Similarity Search for Covariance Matrices

Anoop Cherian; Suvrit Sra; Arindam Banerjee; Nikolaos Papanikolopoulos

Covariance matrices have found success in several computer vision applications, including activity recognition, visual surveillance, and diffusion tensor imaging, because they provide a compact way to fuse multiple features. An important task in all of these applications is to compare two covariance matrices using a (dis)similarity function, for which the common choice is the Riemannian metric on the manifold inhabited by these matrices. Since this Riemannian manifold is not flat, the dissimilarities must account for its curvature, so such distance computations tend to be slow, especially when the matrix dimensions are large or gradients are required. Further, in the era of big data analytics, the metric should also support efficient nearest neighbor retrieval. To alleviate these difficulties, this paper proposes a novel dissimilarity measure for covariances, the Jensen-Bregman LogDet Divergence (JBLD). This divergence enjoys several desirable theoretical properties while being computationally less demanding than standard measures. Utilizing the fact that the square root of JBLD is a metric, we address the problem of efficient nearest neighbor retrieval on large covariance datasets via a metric tree data structure. To this end, we propose a K-Means clustering algorithm on JBLD. We demonstrate the superior performance of JBLD on covariance datasets from several computer vision applications.
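
For reference, the divergence itself has a closed form, J(X, Y) = log det((X + Y)/2) - (1/2) log det(XY), which needs only determinants rather than eigendecompositions. A minimal NumPy sketch (not the authors' code):

    import numpy as np

    def jbld(X, Y):
        # Jensen-Bregman LogDet Divergence between SPD matrices:
        #   J(X, Y) = log det((X + Y) / 2) - (1/2) log det(X Y).
        # slogdet works via a factorization, so no eigendecomposition
        # is needed.
        _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
        _, ld_x = np.linalg.slogdet(X)
        _, ld_y = np.linalg.slogdet(Y)
        return float(ld_mid - 0.5 * (ld_x + ld_y))

    # sqrt(jbld(X, Y)) is the metric used for the metric tree.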


International Conference on Computer Vision | 2011

Efficient similarity search for covariance matrices via the Jensen-Bregman LogDet Divergence

Anoop Cherian; Suvrit Sra; Arindam Banerjee; Nikolaos Papanikolopoulos

Covariance matrices provide compact, informative feature descriptors for several computer vision applications, such as people-appearance tracking, diffusion-tensor imaging, and activity recognition. A key task in many of these applications is to compare different covariance matrices using a (dis)similarity function. A natural choice here is the Riemannian metric corresponding to the manifold inhabited by covariance matrices. But computations involving this metric are expensive, especially for large matrices, and even more so in gradient-based algorithms. To alleviate these difficulties, we advocate a novel dissimilarity measure for covariance matrices: the Jensen-Bregman LogDet Divergence. This divergence enjoys several useful theoretical properties, but its greatest benefits are: (i) lower computational costs compared to standard approaches; and (ii) amenability to nearest-neighbor retrieval. We present numerous experiments to substantiate these claims.
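
The cost gap against the Riemannian baseline is easy to see in code: the affine-invariant metric needs an eigendecomposition per pair, whereas JBLD (sketched after the journal version above) needs only determinants. A SciPy sketch of the baseline metric:

    import numpy as np
    from scipy.linalg import eigvalsh

    def airm(X, Y):
        # Affine-invariant Riemannian metric on SPD matrices:
        #   d(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F.
        # The eigenvalues of X^{-1/2} Y X^{-1/2} equal the generalized
        # eigenvalues of the pair (Y, X), so one generalized
        # eigendecomposition suffices.
        lam = eigvalsh(Y, X)
        return float(np.sqrt(np.sum(np.log(lam) ** 2)))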


SIAM Journal on Scientific Computing | 2010

Tackling Box-Constrained Optimization via a New Projected Quasi-Newton Approach

Dongmin Kim; Suvrit Sra; Inderjit S. Dhillon

Numerous scientific applications across a variety of fields depend on box-constrained convex optimization. Box-constrained problems therefore continue to attract research interest. We address box-constrained (strictly convex) problems by deriving two new quasi-Newton algorithms. Our algorithms are positioned between the projected-gradient [J. B. Rosen, J. SIAM, 8 (1960), pp. 181-217] and projected-Newton [D. P. Bertsekas, SIAM J. Control Optim., 20 (1982), pp. 221-246] methods. We also prove their convergence under a simple Armijo step-size rule. We provide experimental results for two particular box-constrained problems: nonnegative least squares (NNLS), and nonnegative Kullback-Leibler (NNKL) minimization. For both NNLS and NNKL our algorithms perform competitively as compared to well-established methods on medium-sized problems; for larger problems our approach frequently outperforms the competition.
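
To make the setting concrete, here is a plain projected-gradient loop with an Armijo step rule for NNLS. This is a deliberately simplified sketch: the paper's algorithms additionally apply quasi-Newton scaling restricted to the free variables.

    import numpy as np

    def nnls_projected_gradient(A, b, iters=200, beta=0.5, sigma=1e-4):
        # minimize 0.5 * ||A x - b||^2  subject to  x >= 0
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = A.T @ (A @ x - b)                      # gradient
            fx = 0.5 * np.linalg.norm(A @ x - b) ** 2
            t = 1.0
            while True:                                # Armijo backtracking
                x_new = np.maximum(x - t * g, 0.0)     # project onto x >= 0
                f_new = 0.5 * np.linalg.norm(A @ x_new - b) ** 2
                if f_new <= fx - sigma * g @ (x - x_new) or t < 1e-12:
                    break
                t *= beta
            x = x_new
        return x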


European Conference on Computer Vision | 2014

Riemannian Sparse Coding for Positive Definite Matrices

Anoop Cherian; Suvrit Sra

Inspired by the great success of sparse coding for vector valued data, our goal is to represent symmetric positive definite (SPD) data matrices as sparse linear combinations of atoms from a dictionary, where each atom itself is an SPD matrix. Since SPD matrices follow a non-Euclidean (in fact a Riemannian) geometry, existing sparse coding techniques for Euclidean data cannot be directly extended. Prior works have approached this problem by defining a sparse coding loss function using either extrinsic similarity measures (such as the log-Euclidean distance) or kernelized variants of statistical measures (such as the Stein divergence, Jeffrey’s divergence, etc.). In contrast, we propose to use the intrinsic Riemannian distance on the manifold of SPD matrices. Our main contribution is a novel mathematical model for sparse coding of SPD matrices; we also present a computationally simple algorithm for optimizing our model. Experiments on several computer vision datasets showcase superior classification and retrieval performance compared with state-of-the-art approaches.
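
The objective can be stated compactly: reconstruct an SPD matrix X as a nonnegative combination of SPD atoms, measure the reconstruction error with the intrinsic Riemannian distance, and penalize the weights. A loss-evaluation sketch assuming SciPy (the optimization algorithm itself is the paper's contribution and is omitted here):

    import numpy as np
    from scipy.linalg import eigvalsh

    def riemannian_sparse_coding_loss(X, atoms, w, lam):
        # d_R(X, sum_i w_i B_i)^2 + lam * ||w||_1 with w >= 0; nonnegative
        # weights keep the combination S in the SPD cone (assuming it
        # stays nonsingular). d_R is computed from the generalized
        # eigenvalues of the pair (S, X).
        S = sum(wi * Bi for wi, Bi in zip(w, atoms))
        lam_gen = eigvalsh(S, X)
        d2 = np.sum(np.log(lam_gen) ** 2)
        return float(d2 + lam * np.abs(np.asarray(w)).sum())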


International Conference on Image Processing | 2010

Multiframe blind deconvolution, super-resolution, and saturation correction via incremental EM

Stefan Harmeling; Suvrit Sra; Michael Hirsch; Bernhard Schölkopf

We formulate the multiframe blind deconvolution problem in an incremental expectation maximization (EM) framework. Beyond deconvolution, we show how to use the same framework to address: (i) super-resolution despite noise and unknown blurring; (ii) saturation-correction of overexposed pixels that confound image restoration. The abundance of data allows us to address both of these without using explicit image or blur priors. The end result is a simple but effective algorithm with no hyperparameters. We apply this algorithm to real-world images from astronomy and to super-resolution tasks: for both, our algorithm yields increased resolution and deconvolved images simultaneously.
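
The updates in this family of methods are multiplicative, which preserves nonnegativity of the image and blur. The sketch below shows a Richardson-Lucy-style image update for a single frame; it is an illustrative stand-in for, not a transcription of, the paper's incremental EM iteration:

    import numpy as np
    from scipy.signal import fftconvolve

    def update_image(x, y, k, eps=1e-8):
        # One multiplicative image update for one observed frame y,
        # current image estimate x, and current blur kernel k (all >= 0):
        #   x <- x * ( k_flip * (y / (k * x)) ),  * denoting convolution.
        k_flip = k[::-1, ::-1]
        denom = fftconvolve(x, k, mode="same") + eps
        return x * fftconvolve(y / denom, k_flip, mode="same")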


SIAM Journal on Optimization | 2015

Conic Geometric Optimization on the Manifold of Positive Definite Matrices

Suvrit Sra; Reshad Hosseini

We develop geometric optimization on the manifold of Hermitian positive definite (HPD) matrices. In particular, we consider optimizing two types of cost functions: (i) geodesically convex (g-convex) and (ii) log-nonexpansive (LN). G-convex functions are nonconvex in the usual Euclidean sense but convex along the manifold and thus allow global optimization. LN functions may fail to be even g-convex but still remain globally optimizable due to their special structure. We develop theoretical tools to recognize and generate g-convex functions as well as cone theoretic fixed-point optimization algorithms. We illustrate our techniques by applying them to maximum-likelihood parameter estimation for elliptically contoured distributions (a rich class that substantially generalizes the multivariate normal distribution). We compare our fixed-point algorithms with sophisticated manifold optimization methods and obtain notable speedups.
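
For readers unfamiliar with the central definition, geodesic convexity on the HPD manifold can be spelled out as follows (a standard statement, consistent with its use in the abstract):

    A function $f$ is geodesically convex (g-convex) if
    \[
      f(\gamma(t)) \;\le\; (1-t)\, f(X) + t\, f(Y), \qquad t \in [0,1],
    \]
    along the geodesic joining any two HPD matrices $X$ and $Y$,
    \[
      \gamma(t) \;=\; X^{1/2} \bigl( X^{-1/2} Y X^{-1/2} \bigr)^{t} X^{1/2},
    \]
    whose midpoint $\gamma(1/2)$ is the matrix geometric mean of $X$ and $Y$.

Local minimizers of a g-convex function are global, which is why such problems admit global optimization despite being nonconvex in the Euclidean sense.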


Astronomy and Astrophysics | 2011

Online multi-frame blind deconvolution with super-resolution and saturation correction

Michael Hirsch; Stefan Harmeling; Suvrit Sra; Bernhard Schölkopf

Astronomical images taken by ground-based telescopes suffer degradation due to atmospheric turbulence. This degradation can be tackled by costly hardware-based approaches such as adaptive optics, or by sophisticated software-based methods such as lucky imaging, speckle imaging, or multi-frame deconvolution. Software-based methods process a sequence of images to reconstruct a deblurred high-quality image. However, existing approaches are limited in one or several aspects: (i) they process all images in batch mode, which for thousands of images is prohibitive; (ii) they do not reconstruct a super-resolved image, even though an image sequence often contains enough information; (iii) they are unable to deal with saturated pixels; and (iv) they are usually non-blind, i.e., they assume the blur kernels to be known. In this paper we present a new method for multi-frame deconvolution called online blind deconvolution (OBD) that overcomes all these limitations simultaneously. Encouraging results on simulated and real astronomical images demonstrate that OBD yields deblurred images of comparable and often better quality than existing approaches.

Collaboration


Dive into Suvrit Sra's collaborations.

Top Co-Authors

Inderjit S. Dhillon, University of Texas at Austin
Stefanie Jegelka, Massachusetts Institute of Technology
Dongmin Kim, University of Texas at Austin
Anoop Cherian, Australian National University
Sashank J. Reddi, Carnegie Mellon University
Chengtao Li, Massachusetts Institute of Technology
Barnabás Póczos, Carnegie Mellon University
Zelda Mariet, Massachusetts Institute of Technology