Publication


Featured research published by Naman Agarwal.


Symposium on Theory of Computing (STOC) | 2017

Finding approximate local minima faster than gradient descent

Naman Agarwal; Zeyuan Allen-Zhu; Brian Bullins; Elad Hazan; Tengyu Ma

We design a second-order optimization algorithm for non-convex problems that is guaranteed to return an approximate local minimum in time that scales linearly in the underlying dimension and the number of training examples. Our algorithm finds an approximate local minimum even faster than gradient descent finds a critical point. It applies to a general class of optimization problems, including training a neural network and other non-convex objectives arising in machine learning.
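The workhorse behind such linear-time guarantees is the Hessian-vector product, which costs about as much as a gradient evaluation. The sketch below is a toy illustration of that primitive, not the paper's algorithm: it alternates gradient steps with a power-iteration search for a negative-curvature direction to escape saddle points. All function names, step sizes, and tolerances are illustrative assumptions.

```python
import numpy as np

def hvp(grad, x, v, eps=1e-5):
    """Hessian-vector product H(x) @ v via finite differences of the
    gradient: two gradient calls, i.e. 'linear time' in the dimension."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

def negative_curvature(grad, x, beta=10.0, iters=100):
    """Power iteration on (beta*I - H) to approximate the eigenvector
    of the most negative Hessian eigenvalue at x."""
    v = np.random.default_rng(0).normal(size=x.size)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = beta * v - hvp(grad, x, v)
        v = w / np.linalg.norm(w)
    return v, v @ hvp(grad, x, v)  # direction and its curvature v^T H v

def approx_local_min(f, grad, x, lr=0.1, g_tol=1e-3, c_tol=1e-2, steps=500):
    """Toy loop: gradient steps while the gradient is large; once it is
    small, escape saddle points along a negative-curvature direction."""
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) > g_tol:
            x = x - lr * g                      # ordinary first-order step
            continue
        v, curv = negative_curvature(grad, x)
        if curv > -c_tol:                       # small gradient and no strong
            return x                            # negative curvature: done
        x = min(x + lr * v, x - lr * v, key=f)  # either sign decreases f
    return x

# f has a saddle at the origin and local minima at (+/-1, 0).
f = lambda z: (z[0] ** 2 - 1) ** 2 + z[1] ** 2
grad = lambda z: np.array([4 * z[0] * (z[0] ** 2 - 1), 2 * z[1]])
print(approx_local_min(f, grad, np.zeros(2)))   # converges to (+/-1, 0)
```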


arXiv: Data Structures and Algorithms | 2015

Multisection in the Stochastic Block Model Using Semidefinite Programming

Naman Agarwal; Afonso S. Bandeira; Konstantinos Koiliaris; Alexandra Kolla

We consider the problem of identifying underlying community-like structures in graphs. Toward this end, we study the stochastic block model (SBM) on k clusters: a random model on n = km vertices, partitioned into k equal-sized clusters, with edges sampled independently with probability q across clusters and probability p within clusters, p > q. The goal is to recover the initial “hidden” partition of [n]. We study semidefinite programming (SDP)-based algorithms in this context. In the regime \(p = \frac {\alpha \log (m)}{m}\) and \(q = \frac {\beta \log (m)}{m}\), we show that a certain natural SDP-based algorithm solves the problem of exact recovery in the k-community SBM, with high probability, whenever \(\sqrt {\alpha } - \sqrt {\beta } > \sqrt {1}\), as long as \(k = o(\log n)\). This threshold is known to be information-theoretically optimal. We also study the case \(k = \Theta (\log n)\); in this case, however, our recovery guarantees no longer match the optimal condition \(\sqrt {\alpha } - \sqrt {\beta } > \sqrt {1}\), leaving optimality for this range an open question.
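To make the kind of program involved concrete, here is a minimal sketch of a standard SDP relaxation for multisection, written with numpy and cvxpy. The precise program analyzed in the paper may differ in details, and the constants below (cluster sizes, edge probabilities) are illustrative rather than the paper's logarithmic regime.

```python
import numpy as np
import cvxpy as cp

def sample_sbm(k, m, p, q, rng):
    """Sample a k-community SBM on n = k*m vertices: edge probability p
    within a cluster, q across clusters."""
    n = k * m
    labels = np.repeat(np.arange(k), m)
    same = labels[:, None] == labels[None, :]
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(float)
    A = np.triu(A, 1)
    return A + A.T, labels

def sdp_multisection(A, k):
    """A standard SDP relaxation for multisection: maximize <A, X> over a
    convex set containing the planted cluster matrix (0/1, block-diagonal
    under the hidden partition, with rows summing to the cluster size)."""
    n = A.shape[0]
    m = n // k
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0,                  # positive semidefinite
            cp.diag(X) == 1,         # unit diagonal
            X >= 0,                  # entrywise nonnegative
            cp.sum(X, axis=1) == m]  # each row sums to the cluster size
    cp.Problem(cp.Maximize(cp.trace(A @ X)), cons).solve()
    return X.value

rng = np.random.default_rng(0)
A, labels = sample_sbm(k=3, m=20, p=0.8, q=0.1, rng=rng)
X = sdp_multisection(A, k=3)
# Under exact recovery, the optimum is the planted cluster matrix:
# 1 on same-cluster pairs, 0 elsewhere.
truth = (labels[:, None] == labels[None, :]).astype(float)
print("max deviation from planted partition:", np.abs(X - truth).max())
```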


International Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM) | 2017

On the Expansion of Group-Based Lifts

Naman Agarwal; Karthekeyan Chandrasekaran; Alexandra Kolla; Vivek Madan

A \(k\)-lift of an \(n\)-vertex base graph \(G\) is a graph \(H\) on \(nk\) vertices, where each vertex \(u\) of \(G\) is replaced by \(k\) vertices \(u_1, \dots, u_k\), and each edge \((u, v)\) of \(G\) is replaced by a matching between the copies of \(u\) and \(v\) given by a permutation \(\pi_{uv}\), so that the edges of \(H\) are of the form \((u_i, v_{\pi_{uv}(i)})\). This paper studies the spectral expansion of lifts in which the permutations are restricted to come from a group of order \(k\).
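The sketch below (illustrative, not from the paper) constructs a random group-based lift in which every edge permutation is a cyclic shift, i.e. an element of Z_k, and compares the top of the base and lifted spectra. The base graph's eigenvalues always survive in the lift, so the lift's expansion is governed by the "new" eigenvalues.

```python
import numpy as np

def cyclic_lift(A, k, rng):
    """Random k-lift of the graph with adjacency matrix A, where each
    edge's permutation is a cyclic shift (a Z_k group-based lift).
    Vertex u becomes copies u_0..u_{k-1}; an edge (u, v) with shift s
    becomes the matching (u_i, v_{(i+s) mod k})."""
    n = A.shape[0]
    H = np.zeros((n * k, n * k))
    for u in range(n):
        for v in range(u + 1, n):
            if A[u, v]:
                s = rng.integers(k)         # group element for this edge
                for i in range(k):
                    a, b = u * k + i, v * k + (i + s) % k
                    H[a, b] = H[b, a] = 1
    return H

rng = np.random.default_rng(1)
n, k = 10, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
H = cyclic_lift(A, k, rng)
# Every eigenvalue of A reappears in H (lift an eigenvector by copying
# its value to all k copies of each vertex); the remaining eigenvalues
# are the "new" ones that determine how good an expander the lift is.
print("base top eigenvalues:", np.round(np.linalg.eigvalsh(A)[-3:], 3))
print("lift top eigenvalues:", np.round(np.linalg.eigvalsh(H)[-3:], 3))
```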


arXiv: Machine Learning | 2016

Second Order Stochastic Optimization in Linear Time

Naman Agarwal; Brian Bullins; Elad Hazan



arXiv | 2016

Finding Approximate Local Minima for Nonconvex Optimization in Linear Time

Naman Agarwal; Zeyuan Allen-Zhu; Brian Bullins; Elad Hazan; Tengyu Ma



Journal of Machine Learning Research | 2017

Second-Order Stochastic Optimization for Machine Learning in Linear Time

Naman Agarwal; Brian Bullins; Elad Hazan



arXiv: Optimization and Control | 2016

Finding Local Minima for Nonconvex Optimization in Linear Time

Naman Agarwal; Zeyuan Allen-Zhu; Brian Bullins; Elad Hazan; Tengyu Ma



Chicago Journal of Theoretical Computer Science | 2015

Unique Games on the Hypercube

Naman Agarwal; Guy Kindler; Alexandra Kolla; Luca Trevisan



Neural Information Processing Systems (NeurIPS) | 2018

cpSGD: Communication-efficient and differentially-private distributed SGD

Naman Agarwal; Ananda Theertha Suresh; Felix X. Yu; Sanjiv Kumar; Brendan McMahan



Conference on Learning Theory (COLT) | 2018

Lower Bounds for Higher-Order Convex Optimization

Naman Agarwal; Elad Hazan

