Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Aaron Sidford is active.

Publication


Featured research published by Aaron Sidford.


Foundations of Computer Science | 2013

Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems

Yin Tat Lee; Aaron Sidford

In this paper we show how to accelerate randomized coordinate descent methods and achieve faster convergence rates without paying per-iteration costs in asymptotic running time. In particular, we show how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box. In addition to providing a proof of convergence for this new general method, we show that it is numerically stable, efficiently implementable, and in certain regimes, asymptotically optimal. To highlight the power of this algorithm, we show how it can be used to create faster linear system solvers in several regimes:

- We show how this method achieves a faster asymptotic runtime than conjugate gradient for solving a broad class of symmetric positive definite systems of equations.

- We improve the convergence guarantees for Kaczmarz methods, a popular technique for image reconstruction and solving overdetermined systems of equations, by accelerating an algorithm of Strohmer and Vershynin.

- We achieve the best known running time for solving Symmetric Diagonally Dominant (SDD) systems of equations in the unit-cost RAM model, obtaining a running time of O(m log^{3/2} n (log log n)^{1/2} log((log n)/ε)) by accelerating a recent solver by Kelner et al.

Beyond the independent interest of these solvers, we believe they highlight the versatility of the approach of this paper, and we hope they will open the door for further algorithmic improvements in the future.
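For something concrete, here is a minimal NumPy sketch of the unaccelerated randomized Kaczmarz iteration of Strohmer and Vershynin that this paper accelerates. The problem sizes and iteration count are illustrative; this is the classical baseline, not the paper's accelerated variant.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=10_000, seed=0):
    """Strohmer-Vershynin randomized Kaczmarz: sample row i with
    probability ||a_i||^2 / ||A||_F^2, then project the iterate onto
    the hyperplane {x : <a_i, x> = b_i}."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Consistent overdetermined system: the iterate approaches x_true.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x - x_true))
```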


Conference on Innovations in Theoretical Computer Science | 2015

Uniform Sampling for Matrix Approximation

Michael B. Cohen; Yin Tat Lee; Cameron Musco; Christopher Musco; Richard Peng; Aaron Sidford

Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. Unfortunately, leverage scores are difficult to compute. A simple alternative is to sample rows uniformly at random. While this often works, uniform sampling will eliminate critical row information for many natural instances. We take a fresh look at uniform sampling by examining what information it does preserve. Specifically, we show that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original. While this weak form of approximation is not enough for solving linear regression directly, it is enough to compute a better approximation. This observation leads to simple iterative row sampling algorithms for matrix approximation that run in input-sparsity time and preserve row structure and sparsity at all intermediate steps. In addition to an improved understanding of uniform sampling, our main proof introduces a structural result of independent interest: we show that every matrix can be made to have low coherence by reweighting a small subset of its rows.
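As a point of reference for the sampling distribution the paper studies, here is a small NumPy sketch that computes exact leverage scores via a thin QR factorization and samples rows accordingly. It illustrates the target distribution only, not the paper's input-sparsity-time iterative scheme, and the matrix sizes are made up.

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores: squared row norms of an orthonormal
    basis Q for the column span of A."""
    Q, _ = np.linalg.qr(A, mode="reduced")
    return np.einsum("ij,ij->i", Q, Q)

def leverage_sample(A, k, seed=0):
    """Sample k rows with probability proportional to leverage score,
    rescaling each sampled row by 1/sqrt(k * p_i) so the sketch SA
    satisfies E[(SA)^T (SA)] = A^T A."""
    rng = np.random.default_rng(seed)
    tau = leverage_scores(A)
    p = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=k, p=p)
    return A[idx] / np.sqrt(k * p[idx])[:, None]

A = np.random.default_rng(2).standard_normal((1000, 10))
SA = leverage_sample(A, k=200)
# The sketch should approximate A^T A in the spectral norm.
err = np.linalg.norm(A.T @ A - SA.T @ SA, 2) / np.linalg.norm(A.T @ A, 2)
print(err)
```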


Foundations of Computer Science | 2015

A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization

Yin Tat Lee; Aaron Sidford; Sam Chiu-wai Wong

In this paper we improve upon the running time for finding a point in a convex set given a separation oracle. In particular, given a separation oracle for a convex set K ⊂ R^n that is contained in a box of radius R, we show how to either compute a point in K or prove that K does not contain a ball of radius ε using an expected O(n log(nR/ε)) evaluations of the oracle and additional time O(n^3 log^{O(1)}(nR/ε)). This matches the oracle complexity and improves upon the O(n^{ω+1} log(nR/ε)) additional time of the previous fastest algorithm, achieved over 25 years ago by Vaidya [91], for the current value of the matrix multiplication constant ω < 2.373 [98], [36] when R/ε = O(poly(n)). Using a mix of standard reductions and new techniques, we show how our algorithm can be used to improve the running time for solving classic problems in continuous and combinatorial optimization. In particular we provide the following running time improvements:

- Submodular Function Minimization: n is the size of the ground set, M is the maximum absolute value of function values, and EO is the time for function evaluation. Our weakly and strongly polynomial time algorithms have running times of O(n^2 log nM · EO + n^3 log^{O(1)} nM) and O(n^3 log^2 n · EO + n^4 log^{O(1)} n), improving upon the previous best of O((n^4 · EO + n^5) log M) and O(n^5 · EO + n^6), respectively.

- Submodular Flow: n = |V|, m = |E|, C is the maximum edge cost in absolute value, and U is the maximum edge capacity in absolute value. We obtain a faster weakly polynomial running time of O(n^2 log nCU · EO + n^3 log^{O(1)} nCU), improving upon the previous best of O(mn^5 log nU · EO) and O(n^4 h min{log C, log U}) from 15 years ago by a factor of Õ(n^4). We also achieve faster strongly polynomial time algorithms as a consequence of our result on submodular minimization.

- Matroid Intersection: n is the size of the ground set, r is the maximum size of independent sets, M is the maximum absolute value of element weight, and T_rank and T_ind are the times for each rank and independence oracle query. We obtain running times of O((nr log^2 n · T_rank + n^3 log^{O(1)} n) log nM) and O((n^2 log n · T_ind + n^3 log^{O(1)} n) log nM), achieving the first quadratic bound on the query complexity for the independence and rank oracles. In the unweighted case, this is the first improvement since 1986 for the independence oracle.

- Semidefinite Programming: n is the number of constraints, m is the number of dimensions, and S is the total number of non-zeros in the constraint matrices. We obtain a running time of O(n(n^2 + m^ω + S)), improving upon the previous best of Õ(n(n^ω + m^ω + S)) in the regime where S is small.
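For contrast with the paper's method, the sketch below implements the classical central-cut ellipsoid method with a separation oracle, the textbook baseline in this oracle model. The toy oracle (a small ball) and tolerances are illustrative; the update formulas are the standard ones, not anything from this paper.

```python
import numpy as np

def ellipsoid_method(oracle, n, R, max_iters=10_000):
    """Classical central-cut ellipsoid method. `oracle(x)` returns None
    when x is feasible, otherwise a vector g with <g, y> <= <g, x> for
    every feasible y. Starts from the radius-R ball at the origin."""
    c = np.zeros(n)
    P = R**2 * np.eye(n)   # current ellipsoid: (x-c)^T P^{-1} (x-c) <= 1
    for _ in range(max_iters):
        g = oracle(c)
        if g is None:
            return c       # center is feasible
        Pg = P @ g
        gPg = g @ Pg
        c = c - Pg / ((n + 1) * np.sqrt(gPg))
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg) / gPg)
    return None

# Toy feasible set: the ball of radius 0.1 around (1, ..., 1).
z = np.ones(5)
def oracle(x):
    return None if np.linalg.norm(x - z) <= 0.1 else x - z

print(ellipsoid_method(oracle, n=5, R=10.0))
```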


SIAM Journal on Optimization | 2018

Accelerated Methods for Nonconvex Optimization

Yair Carmon; John C. Duchi; Oliver Hinder; Aaron Sidford

We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives. In time O(ε^{-7/4} log(1/ε)), the method finds an ε-stationary point, meaning a point x such that ‖∇f(x)‖ ≤ ε.
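To make the ε-stationarity guarantee concrete, here is a plain gradient descent loop that stops at the first ε-stationary point. It illustrates the stopping criterion only; this is vanilla gradient descent, not the paper's accelerated method, and the objective and step size are arbitrary choices.

```python
import numpy as np

def first_stationary_point(grad_f, x0, eps=1e-4, step=0.1, max_iters=100_000):
    """Plain gradient descent, stopped at the first eps-stationary
    point, i.e. the first x with ||grad f(x)|| <= eps."""
    x = x0.astype(float).copy()
    for _ in range(max_iters):
        g = grad_f(x)
        if np.linalg.norm(g) <= eps:
            return x
        x -= step * g
    return x

# Nonconvex test function f(x) = sum(x^4 - x^2), gradient 4x^3 - 2x.
grad = lambda x: 4 * x**3 - 2 * x
x = first_stationary_point(grad, np.array([0.9, -1.3, 0.4]))
print(x, np.linalg.norm(grad(x)))
```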


Foundations of Computer Science | 2014

Single Pass Spectral Sparsification in Dynamic Streams

Michael Kapralov; Yin Tat Lee; Cameron Musco; Christopher Musco; Aaron Sidford

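As background, a standard offline route to spectral sparsification is Spielman-Srivastava sampling by effective resistance; the NumPy sketch below implements that classical approach on a dense Laplacian. This is very different from the paper's single-pass dynamic-stream setting, and the random graph and sample budget are illustrative.

```python
import numpy as np

def laplacian(n, edges, weights):
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def sparsify(n, edges, weights, q, seed=0):
    """Spielman-Srivastava: sample q edges with probability proportional
    to weight x effective resistance, rescaling so the sparsifier's
    Laplacian matches the original in expectation."""
    rng = np.random.default_rng(seed)
    Lp = np.linalg.pinv(laplacian(n, edges, weights))
    # Effective resistance of (u, v) is (e_u - e_v)^T L^+ (e_u - e_v).
    reff = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    p = weights * reff
    p /= p.sum()
    counts = rng.multinomial(q, p)
    new_w = counts * weights / (q * p)
    kept = counts > 0
    return [e for e, k in zip(edges, kept) if k], new_w[kept]

rng = np.random.default_rng(3)
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.4]
kept, w = sparsify(n, edges, np.ones(len(edges)), q=120)
print(len(edges), "->", len(kept))
```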


Symposium on the Theory of Computing | 2016

Geometric median in nearly linear time

Michael B. Cohen; Yin Tat Lee; Gary L. Miller; Jakub W. Pachocki; Aaron Sidford

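For orientation, the classical algorithm for the geometric median is the Weiszfeld iteration, sketched below. It is the standard baseline, not the paper's nearly linear time method, and the data and tolerances are illustrative.

```python
import numpy as np

def weiszfeld(points, iters=200, tol=1e-9):
    """Weiszfeld iteration: repeatedly re-average the points with
    weights 1 / (distance to the current estimate)."""
    y = points.mean(axis=0)                    # start at the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), tol)
        w = 1.0 / d
        y_next = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_next - y) <= tol:
            return y_next
        y = y_next
    return y

pts = np.random.default_rng(4).standard_normal((500, 3))
print(weiszfeld(pts))
```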


Foundations of Computer Science | 2015

Efficient Inverse Maintenance and Faster Algorithms for Linear Programming

Yin Tat Lee; Aaron Sidford

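The phrase "inverse maintenance" can be made concrete with the Sherman-Morrison identity: after a rank-one change A + uv^T, the inverse can be updated in O(n^2) time instead of recomputed in O(n^3). The sketch below shows only this textbook identity; the paper's data structures are considerably more involved.

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} in O(n^2) time via the
    Sherman-Morrison identity, instead of re-inverting in O(n^3)."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # keep A well-conditioned
u, v = rng.standard_normal(n), rng.standard_normal(n)
maintained = sherman_morrison(np.linalg.inv(A), u, v)
direct = np.linalg.inv(A + np.outer(u, v))
print(np.max(np.abs(maintained - direct)))        # agreement up to roundoff
```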


Conference on Innovations in Theoretical Computer Science | 2018

Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness

Cameron Musco; Praneeth Netrapalli; Aaron Sidford; Shashanka Ubaru; David P. Woodruff

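Many spectrum approximation algorithms are built from stochastic trace estimation; below is a minimal sketch of Hutchinson's estimator, which estimates tr(A) from matrix-vector products alone. This is a generic primitive in the area, not the paper's algorithm, and the test matrix and probe count are illustrative.

```python
import numpy as np

def hutchinson_trace(matvec, n, probes=200, seed=0):
    """Hutchinson's estimator: tr(A) = E[z^T A z] for Rademacher z,
    so averaging a few probes estimates the trace using only
    matrix-vector products."""
    rng = np.random.default_rng(seed)
    z = rng.choice([-1.0, 1.0], size=(probes, n))
    return np.mean([zi @ matvec(zi) for zi in z])

A = np.random.default_rng(6).standard_normal((100, 100))
A = A @ A.T                                        # PSD test matrix
print(hutchinson_trace(lambda x: A @ x, 100), np.trace(A))
```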


Symposium on the Theory of Computing | 2017

Subquadratic submodular function minimization

Deeparnab Chakrabarty; Yin Tat Lee; Aaron Sidford; Sam Chiu-wai Wong

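To make the evaluation-oracle model concrete, here is a brute-force submodular function minimizer that simply queries f on all 2^n subsets, in deliberate contrast to algorithms such as the paper's that need far fewer oracle calls. The cut-minus-modular example function is an arbitrary but standard submodular instance.

```python
from itertools import combinations

def minimize_submodular(f, ground_set):
    """Brute-force submodular function minimization: query f on every
    subset. 2^n oracle calls -- only to make the oracle model concrete."""
    elems = list(ground_set)
    best_set, best_val = frozenset(), f(frozenset())
    for r in range(1, len(elems) + 1):
        for combo in combinations(elems, r):
            s = frozenset(combo)
            if (val := f(s)) < best_val:
                best_set, best_val = s, val
    return best_set, best_val

# Cut functions are submodular, and subtracting a modular term keeps
# submodularity, so f below is a valid (toy) SFM instance.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
bonus = {0: 2, 1: 0, 2: 2, 3: 0}
f = lambda S: sum((u in S) != (v in S) for u, v in edges) - sum(bonus[v] for v in S)
print(minimize_submodular(f, range(4)))   # frozenset({0, 1, 2, 3}), -4
```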


Symposium on the Theory of Computing | 2016

Routing under balance

Alina Ene; Gary L. Miller; Jakub W. Pachocki; Aaron Sidford


Collaboration


Dive into Aaron Sidford's collaborations.

Top Co-Authors

Yin Tat Lee
Massachusetts Institute of Technology

Sham M. Kakade
University of Washington

Cameron Musco
Massachusetts Institute of Technology

Jonathan A. Kelner
Massachusetts Institute of Technology

Chi Jin
University of California

Michael B. Cohen
Massachusetts Institute of Technology

Rahul Kidambi
University of Washington