Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Venkat Chandrasekaran is active.

Publication


Featured research published by Venkat Chandrasekaran.


SIAM Journal on Optimization | 2011

Rank-Sparsity Incoherence for Matrix Decomposition

Venkat Chandrasekaran; Sujay Sanghavi; Pablo A. Parrilo; Alan S. Willsky

Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Suc...


Foundations of Computational Mathematics | 2012

The Convex Geometry of Linear Inverse Problems

Venkat Chandrasekaran; Benjamin Recht; Pablo A. Parrilo; Alan S. Willsky

In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.
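As a concrete instance of the framework, the following minimal sketch recovers a sparse vector from underdetermined Gaussian measurements by minimizing the atomic norm for the atomic set of signed standard basis vectors, which is the ℓ1 norm. It assumes NumPy and CVXPY; the dimensions and sparsity level are illustrative choices, not values from the paper.

```python
# Sketch: atomic norm minimization for the sparse-vector atomic set, where the
# atomic norm is the l1 norm. Sizes are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                       # ambient dimension, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))            # generic Gaussian measurement map
y = A @ x_true

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

Swapping in a different atomic set changes only the objective (the nuclear norm for low-rank matrices, and so on), with the number of measurements needed governed by the Gaussian width analysis described above.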


Allerton Conference on Communication, Control, and Computing | 2009

Sparse and low-rank matrix decompositions

Venkat Chandrasekaran; Sujay Sanghavi; Pablo A. Parrilo; Alan S. Willsky

We consider the following fundamental problem: given a matrix that is the sum of an unknown sparse matrix and an unknown low-rank matrix, is it possible to exactly recover the two components? Such a capability enables a considerable number of applications, but the goal is both ill-posed and NP-hard in general. In this paper we develop (a) a new uncertainty principle for matrices, and (b) a simple method for exact decomposition based on convex optimization. Our uncertainty principle is a quantification of the notion that a matrix cannot be sparse while having diffuse row/column spaces. It characterizes when the decomposition problem is ill-posed, and forms the basis for our decomposition method and its analysis. We provide deterministic conditions — on the sparse and low-rank components — under which our method guarantees exact recovery.
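A minimal sketch of this convex decomposition approach, assuming CVXPY: the sparse component is penalized by the elementwise ℓ1 norm and the low-rank component by the nuclear norm, subject to the two summing to the observed matrix. The weight gamma and the problem sizes are illustrative assumptions, not values from the paper.

```python
# Sketch: sparse-plus-low-rank decomposition via convex optimization.
# gamma trades off the two penalties; its value here is an assumption.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r, k = 30, 2, 40                        # matrix size, rank of L, nonzeros in S

L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((n, n))
S_true.flat[rng.choice(n * n, k, replace=False)] = 5 * rng.standard_normal(k)
C = S_true + L_true                        # observed matrix

S, L = cp.Variable((n, n)), cp.Variable((n, n))
gamma = 0.2                                # illustrative trade-off weight
prob = cp.Problem(cp.Minimize(gamma * cp.sum(cp.abs(S)) + cp.normNuc(L)),
                  [S + L == C])
prob.solve()
print("sparse error:  ", np.linalg.norm(S.value - S_true))
print("low-rank error:", np.linalg.norm(L.value - L_true))
```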


IEEE Transactions on Signal Processing | 2008

Estimation in Gaussian Graphical Models Using Tractable Subgraphs: A Walk-Sum Analysis

Venkat Chandrasekaran; Jason K. Johnson; Alan S. Willsky

Graphical models provide a powerful formalism for statistical signal processing. Due to their sophisticated modeling capabilities, they have found applications in a variety of fields such as computer vision, image processing, and distributed sensor networks. In this paper, we present a general class of algorithms for estimation in Gaussian graphical models with arbitrary structure. These algorithms involve a sequence of inference problems on tractable subgraphs over subsets of variables. This framework includes parallel iterations such as embedded trees, serial iterations such as block Gauss-Seidel, and hybrid versions of these iterations. We also discuss a method that uses local memory at each node to overcome temporary communication failures that may arise in distributed sensor network applications. We analyze these algorithms based on the recently developed walk-sum interpretation of Gaussian inference. We describe the walks "computed" by the algorithms using walk-sum diagrams, and show that for iterations based on a very large and flexible set of sequences of subgraphs, convergence is guaranteed in walk-summable models. Consequently, we are free to choose spanning trees and subsets of variables adaptively at each iteration. This leads to efficient methods for optimizing the next iteration step to achieve maximum reduction in error. Simulation results demonstrate that these nonstationary algorithms provide a significant speedup in convergence over traditional one-tree and two-tree iterations.
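A minimal sketch of one member of this algorithm class, block Gauss-Seidel, assuming NumPy: the estimate mu solves J mu = h, where J is the information matrix, and each sweep exactly solves a tractable subproblem on one subset of variables given the current estimate on the rest. The chain-structured model and the choice of two fixed blocks are illustrative assumptions, not from the paper.

```python
# Sketch: block Gauss-Seidel estimation in a Gaussian graphical model.
# The posterior mean mu solves J mu = h; each step solves the subproblem on a
# block V conditioned on the current values of the remaining variables.
import numpy as np

rng = np.random.default_rng(2)
n = 50
J = 2.0 * np.eye(n)                        # diagonally dominant information matrix
for i in range(n - 1):                     # chain-structured couplings
    J[i, i + 1] = J[i + 1, i] = -0.4
h = rng.standard_normal(n)

blocks = [np.arange(0, n // 2), np.arange(n // 2, n)]
mu = np.zeros(n)
for sweep in range(50):
    for V in blocks:
        W = np.setdiff1d(np.arange(n), V)
        mu[V] = np.linalg.solve(J[np.ix_(V, V)], h[V] - J[np.ix_(V, W)] @ mu[W])

print("residual:", np.linalg.norm(J @ mu - h))
```

In the paper's framework the subsets (or spanning subtrees) may change from one iteration to the next, with walk-summability guaranteeing convergence.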


Allerton Conference on Communication, Control, and Computing | 2010

The Convex Algebraic Geometry of Linear Inverse Problems

Venkat Chandrasekaran; Benjamin Recht; Pablo A. Parrilo; Alan S. Willsky

We study a class of ill-posed linear inverse problems in which the underlying model of interest has simple algebraic structure. We consider the setting in which we have access to a limited number of linear measurements of the underlying model, and we propose a general framework based on convex optimization in order to recover this model. This formulation generalizes previous methods based on ℓ1-norm minimization and nuclear norm minimization for recovering sparse vectors and low-rank matrices from a small number of linear measurements. For example, some problems to which our framework is applicable include (1) recovering an orthogonal matrix from limited linear measurements, (2) recovering a measure given random linear combinations of its moments, and (3) recovering a low-rank tensor from limited linear observations.
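As one instance of the generalization mentioned above, the following hedged sketch recovers a low-rank matrix from a limited number of random linear measurements by nuclear norm minimization (CVXPY; the sizes and measurement count are illustrative assumptions).

```python
# Sketch: low-rank matrix recovery from limited linear measurements via
# nuclear norm minimization. Sizes are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, r, m = 15, 2, 180                       # matrix size, rank, measurements

X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A_ops = rng.standard_normal((m, n, n))     # m random linear functionals
y = np.einsum("kij,ij->k", A_ops, X_true)  # y_k = <A_k, X_true>

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A_ops[k], X)) == y[k] for k in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()
print("recovery error:", np.linalg.norm(X.value - X_true))
```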


IEEE Transactions on Information Theory | 2009

Representation and Compression of Multidimensional Piecewise Functions Using Surflets

Venkat Chandrasekaran; Michael B. Wakin; Dror Baron; Richard G. Baraniuk

We study the representation, approximation, and compression of functions in M dimensions that consist of constant or smooth regions separated by smooth (M-1)-dimensional discontinuities. Examples include images containing edges, video sequences of moving objects, and seismic data containing geological horizons. For both function classes, we derive the optimal asymptotic approximation and compression rates based on Kolmogorov metric entropy. For piecewise constant functions, we develop a multiresolution predictive coder that achieves the optimal rate-distortion performance; for piecewise smooth functions, our coder has near-optimal rate-distortion performance. Our coder for piecewise constant functions employs surflets, a new multiscale geometric tiling consisting of M-dimensional piecewise constant atoms containing polynomial discontinuities. Our coder for piecewise smooth functions uses surfprints, which wed surflets to wavelets for piecewise smooth approximation. Both of these schemes achieve the optimal asymptotic approximation performance. Key features of our algorithms are that they carefully control the potential growth in surflet parameters at higher smoothness and do not require explicit estimation of the discontinuity. We also extend our results to the corresponding discrete function spaces for sampled data. We provide asymptotic performance results for both discrete function spaces and relate this asymptotic performance to the sampling rate and smoothness orders of the underlying functions and discontinuities. For approximation of discrete data, we propose a new scale-adaptive dictionary that contains few elements at coarse and fine scales, but many elements at medium scales. Simulation results on synthetic signals provide a comparison between surflet-based coders and previously studied approximation schemes based on wedgelets and wavelets.
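The surflet coder itself is too involved for a short example, but the core idea of multiscale approximation with piecewise-constant atoms that refine only near the discontinuity can be illustrated by the following toy one-dimensional sketch (the signal, tolerance, and depth limit are all illustrative assumptions, not part of the paper).

```python
# Toy 1-D analogue of adaptive multiscale piecewise-constant approximation:
# split dyadic intervals only where a single constant atom does not yet fit.
# This illustrates the idea; it is not the surflet coder from the paper.
import numpy as np

def approximate(f, lo, hi, tol, depth=0, max_depth=10):
    """Return a list of (lo, hi, value) pieces approximating f on [lo, hi)."""
    xs = np.linspace(lo, hi, 64, endpoint=False)
    vals = f(xs)
    c = vals.mean()
    if np.max(np.abs(vals - c)) <= tol or depth == max_depth:
        return [(lo, hi, c)]
    mid = 0.5 * (lo + hi)
    return (approximate(f, lo, mid, tol, depth + 1, max_depth)
            + approximate(f, mid, hi, tol, depth + 1, max_depth))

step = lambda x: np.where(x < np.sqrt(2) / 2, 0.0, 1.0)   # one discontinuity
pieces = approximate(step, 0.0, 1.0, tol=1e-3)
print(len(pieces), "pieces; the refinement clusters around the jump")
```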


SIAM Journal on Matrix Analysis and Applications | 2012

Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting

James Saunderson; Venkat Chandrasekaran; Pablo A. Parrilo; Alan S. Willsky

In this paper we establish links between, and new results for, three problems that are not usually considered together. The first is a matrix decomposition problem that arises in areas such as statistical modeling and signal processing: given a matrix...
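The abstract is cut off above, but the decomposition it sets up is diagonal-plus-low-rank. One convex heuristic for it, minimum trace factor analysis (which, to our understanding, is the heuristic the paper analyzes), is sketched below in CVXPY; the sizes and rank are illustrative assumptions.

```python
# Sketch: minimum trace factor analysis. Given C = D* + L* with D* diagonal and
# L* low-rank PSD, minimize trace(L) over PSD L such that C - L is diagonal.
# Sizes and rank are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, r = 12, 2
U = rng.standard_normal((n, r))
L_true = U @ U.T                           # low-rank PSD component
C = np.diag(rng.uniform(0.5, 1.5, n)) + L_true

L = cp.Variable((n, n), PSD=True)
off_diag = np.ones((n, n)) - np.eye(n)     # mask selecting off-diagonal entries
prob = cp.Problem(cp.Minimize(cp.trace(L)),
                  [cp.multiply(off_diag, C - L) == 0])
prob.solve()
print("low-rank error:", np.linalg.norm(L.value - L_true))
```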


IEEE Conference on Decision and Control | 2014

Regularization for design

Nikolai Matni; Venkat Chandrasekaran



IFAC Proceedings Volumes | 2009

Sparse and Low-Rank Matrix Decompositions

Venkat Chandrasekaran; Sujay Sanghavi; Pablo A. Parrilo; Alan S. Willsky

...formed as the sum of an unknown diagonal matrix and an unknown low-rank positive semidefinite matrix, decompose...


Mathematical Programming | 2017

Relative entropy optimization and its applications

Venkat Chandrasekaran; Parikshit Shah

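The abstract for this entry did not survive extraction. As a hedged illustration of the problem class named in the title, here is a tiny relative entropy program in CVXPY, projecting onto the probability simplex in relative entropy subject to a linear constraint; all data below are illustrative assumptions, not from the paper.

```python
# Sketch of a relative entropy program: minimize the relative entropy of x
# with respect to a fixed reference q under linear constraints.
# The reference q and the constraint are illustrative assumptions.
import numpy as np
import cvxpy as cp

q = np.array([0.5, 0.3, 0.2])              # reference distribution
a = np.array([1.0, 2.0, 3.0])
x = cp.Variable(3, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.sum(cp.rel_entr(x, q))),
                  [cp.sum(x) == 1, a @ x >= 2.2])
prob.solve()
print("x* =", x.value)
```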

Collaboration


Dive into Venkat Chandrasekaran's collaboration.

Top Co-Authors

Alan S. Willsky | Massachusetts Institute of Technology

Pablo A. Parrilo | Massachusetts Institute of Technology

Parikshit Shah | University of Wisconsin-Madison

Armeen Taeb | California Institute of Technology

Jason K. Johnson | Massachusetts Institute of Technology

Myung Jin Choi | Massachusetts Institute of Technology

Sujay Sanghavi | University of Texas at Austin

Dror Baron | North Carolina State University