Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Praneeth Netrapalli is active.

Publication


Featured research published by Praneeth Netrapalli.


Symposium on the Theory of Computing | 2013

Low-rank matrix completion using alternating minimization

Prateek Jain; Praneeth Netrapalli; Sujay Sanghavi

Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge [17]. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e., X = UV†; the algorithm then alternates between finding the best U and the best V. Typically, each alternating step in isolation is convex and tractable. However, the overall problem becomes non-convex and is prone to local minima. In fact, there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present one of the first theoretical analyses of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a significantly simpler analysis.
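To make the alternation concrete, here is a minimal NumPy sketch of alternating least squares for matrix completion. It is illustrative only, with hypothetical names and a random initialization; the algorithm analyzed in the paper additionally uses an SVD-based initialization and fresh samples per iteration.

```python
import numpy as np

def altmin_completion(M_obs, mask, rank, iters=50, seed=0):
    """Alternating least squares for low-rank matrix completion (sketch).

    M_obs : observed matrix (entries outside `mask` are ignored)
    mask  : boolean array marking which entries are observed
    rank  : target rank of the factorization X = U @ V.T
    """
    m, n = M_obs.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        # Fix U: each column j of V solves a small least-squares problem
        # restricted to the rows observed in column j.
        for j in range(n):
            rows = mask[:, j]
            if rows.any():
                V[j], *_ = np.linalg.lstsq(U[rows], M_obs[rows, j], rcond=None)
        # Fix V: symmetrically update each row of U.
        for i in range(m):
            cols = mask[i, :]
            if cols.any():
                U[i], *_ = np.linalg.lstsq(V[cols], M_obs[i, cols], rcond=None)
    return U @ V.T
```

Each inner solve is a small convex least-squares problem, which is why the per-step updates are tractable even though the joint problem in (U, V) is non-convex.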


IEEE Transactions on Signal Processing | 2015

Phase Retrieval Using Alternating Minimization

Praneeth Netrapalli; Prateek Jain; Sujay Sanghavi

Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error-reduction algorithm of Gerchberg and Saxton, and of Fienup, is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem: finding a vector x from y and A, where y = |A^T x| and |z| denotes the vector of element-wise magnitudes of z, under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similarly to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close-to-optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.
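The alternation itself is only a few lines. Below is a hedged NumPy sketch for the real-valued case, assuming measurements y = |A^T x|; it uses a plain random start and no resampling, whereas the variant analyzed in the paper adds a spectral initialization and fresh measurements per iteration.

```python
import numpy as np

def altmin_phase_retrieval(A, y, iters=100, seed=0):
    """Alternating minimization for real phase retrieval (illustrative sketch).

    A : n x m matrix whose columns are the measurement vectors
    y : m-vector of magnitudes, y = |A.T @ x_true|
    """
    n, _ = A.shape
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)      # naive init; the paper uses a spectral init
    for _ in range(iters):
        signs = np.sign(A.T @ x)    # step 1: estimate the missing signs/phases
        # step 2: with the signs fixed, recovering x is ordinary least squares
        x, *_ = np.linalg.lstsq(A.T, signs * y, rcond=None)
    return x
```

In the complex case the sign step becomes a unit-modulus phase estimate of A^T x, but the structure of the alternation is the same.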


SIAM Journal on Optimization | 2016

Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization

Alekh Agarwal; Animashree Anandkumar; Prateek Jain; Praneeth Netrapalli

We consider the problem of sparse coding, where each sample consists of a sparse linear combination of a set of dictionary atoms, and the task is to learn both the dictionary elements and the mixing coefficients. Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed. Typically, the coefficients are estimated via ℓ1 minimization, keeping the dictionary fixed, and the dictionary is estimated through least squares, keeping the coefficients fixed. In this paper, we establish local linear convergence for this variant of alternating minimization and establish that the basin of attraction for the global optimum (corresponding to the true dictionary and the coefficients) is O(1/s²), where s is the sparsity level in each sample and the dictionary satisfies RIP. Combined with the recent results of approximate dictionary estimation, this yields provable guarantees for exact recovery of both the dictionary elements and the coefficients, when the dictionary elements are incoherent.
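As a concrete illustration of the two alternating steps, the sketch below pairs a few ISTA iterations (standing in for the ℓ1 coefficient step) with a least-squares dictionary update. Names and parameters are hypothetical, and the initialization and incoherence/RIP conditions required by the analysis are not reproduced here.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the prox operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def altmin_sparse_coding(Y, k, lam=0.1, outer=30, ista_steps=50, seed=0):
    """Alternating minimization for sparse coding (illustrative sketch).

    Y : d x n data matrix, k : number of dictionary atoms, lam : l1 weight.
    """
    d, n = Y.shape
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((d, k))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((k, n))
    for _ in range(outer):
        # Coefficient step: approximate argmin_X 0.5*||Y - D X||^2 + lam*||X||_1
        # via a few ISTA iterations with the dictionary fixed.
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        for _ in range(ista_steps):
            X = soft_threshold(X - step * D.T @ (D @ X - Y), step * lam)
        # Dictionary step: least squares with the coefficients fixed,
        # followed by renormalizing the atoms.
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X
```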


Allerton Conference on Communication, Control, and Computing | 2010

Greedy learning of Markov network structure

Praneeth Netrapalli; Siddhartha Banerjee; Sujay Sanghavi; Sanjay Shakkottai

Markov Random Fields (MRFs), a.k.a. Graphical Models, serve as popular models for networks in the social and biological sciences, as well as communications and signal processing. A central problem is one of structure learning or model selection: given samples from the MRF, determine the graph structure of the underlying distribution. When the MRF is not Gaussian (e.g. the Ising model) and contains cycles, structure learning is known to be NP hard even with infinite samples. Existing approaches typically focus either on specific parametric classes of models, or on the sub-class of graphs with bounded degree; the complexity of many of these methods grows quickly in the degree bound. We develop a simple new ‘greedy’ algorithm for learning the structure of graphical models of discrete random variables. It learns the Markov neighborhood of a node by sequentially adding to it the node that produces the highest reduction in conditional entropy. We provide a general sufficient condition for exact structure recovery (under conditions on the degree/girth/correlation decay), and study its sample and computational complexity. We then consider its implications for the Ising model, for which we establish a self-contained condition for exact structure recovery.
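A minimal sketch of that greedy entropy-reduction step is shown below, assuming n samples of p discrete variables in an integer array; the function names, the stopping threshold eps, and the absence of the paper's pruning and recovery conditions are all illustrative choices.

```python
import numpy as np
from collections import Counter

def conditional_entropy(samples, i, S):
    """Empirical H(X_i | X_S) in bits, from an n x p array of discrete samples."""
    n = samples.shape[0]
    joint = Counter(zip(map(tuple, samples[:, S]), samples[:, i]))
    cond = Counter(map(tuple, samples[:, S]))
    H = 0.0
    for (s_val, _), c in joint.items():
        H -= (c / n) * np.log2(c / cond[s_val])
    return H

def greedy_neighborhood(samples, i, eps=0.05, max_size=None):
    """Greedily grow the Markov neighborhood of node i (illustrative sketch).

    At each step, add the variable whose inclusion most reduces the empirical
    conditional entropy of node i; stop when the best reduction is below eps.
    """
    p = samples.shape[1]
    S = []
    current = conditional_entropy(samples, i, S)
    while max_size is None or len(S) < max_size:
        best_j, best_H = None, current
        for j in range(p):
            if j == i or j in S:
                continue
            H = conditional_entropy(samples, i, S + [j])
            if H < best_H:
                best_j, best_H = j, H
        if best_j is None or current - best_H < eps:
            break
        S.append(best_j)
        current = best_H
    return S
```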


International Symposium on Information Theory | 2012

Learning Markov graphs up to edit distance

Abhik Kumar Das; Praneeth Netrapalli; Sujay Sanghavi; Sriram Vishwanath



Conference on Innovations in Theoretical Computer Science | 2018

Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness

Cameron Musco; Praneeth Netrapalli; Aaron Sidford; Shashanka Ubaru; David P. Woodruff



International Symposium on Information Theory | 2014

Learning structure of power-law Markov networks

Abhik Kumar Das; Praneeth Netrapalli; Sujay Sanghavi; Sriram Vishwanath



Measurement and Modeling of Computer Systems | 2012

Learning the graph of epidemic cascades

Praneeth Netrapalli; Sujay Sanghavi



Neural Information Processing Systems | 2013

Phase Retrieval using Alternating Minimization

Praneeth Netrapalli; Prateek Jain; Sujay Sanghavi



Neural Information Processing Systems | 2014

Non-convex Robust PCA

Praneeth Netrapalli; Niranjan U N; Sujay Sanghavi; Animashree Anandkumar; Prateek Jain


Collaboration


Dive into Praneeth Netrapalli's collaborations.

Top Co-Authors

Sham M. Kakade
University of Washington

Sujay Sanghavi
University of Texas at Austin

Chi Jin
University of California

Rahul Kidambi
University of Washington

Cameron Musco
Massachusetts Institute of Technology

Abhik Kumar Das
University of Texas at Austin