Publication


Featured research published by Lorenzo Orecchia.


symposium on the theory of computing | 2008

On partitioning graphs via single commodity flows

Lorenzo Orecchia; Leonard J. Schulman; Umesh V. Vazirani; Nisheeth K. Vishnoi

In this paper we obtain improved upper and lower bounds for the best approximation factor for Sparsest Cut achievable in the cut-matching game framework proposed by Khandekar et al. [9]. We show that this simple framework can be used to design combinatorial algorithms that achieve an O(log n) approximation factor and whose running time is dominated by a poly-logarithmic number of single-commodity max-flow computations. This matches the performance of the algorithm of Arora and Kale [2]. Moreover, we also show that it is impossible to achieve an approximation factor better than Ω(√log n) in the cut-matching game framework. These results suggest that the simple and concrete abstraction of the cut-matching game may be powerful enough to capture the essential features of the complexity of Sparsest Cut.
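The single-commodity max-flow primitive whose repetitions dominate the running time can be sketched with an off-the-shelf solver. This is only an illustrative round of flow routing across a candidate bisection, not the actual cut or matching player of [9]; the unit capacities, the super-source/super-sink construction, and the `networkx` usage are our own assumptions:

```python
import networkx as nx

def route_across_bisection(G, side_a, side_b):
    """Route a single-commodity max-flow between the two sides of a
    candidate bisection -- the kind of computation the cut-matching
    game repeats a poly-logarithmic number of times. Illustrative
    sketch only, not the paper's actual players."""
    H = nx.DiGraph()
    for u, v in G.edges():
        H.add_edge(u, v, capacity=1)     # unit-capacity copy of each edge,
        H.add_edge(v, u, capacity=1)     # in both directions
    for a in side_a:
        H.add_edge("s", a, capacity=1)   # super-source feeds one unit per vertex
    for b in side_b:
        H.add_edge(b, "t", capacity=1)   # super-sink drains one unit per vertex
    value, _ = nx.maximum_flow(H, "s", "t")
    return value

# on the 4-vertex path 0-1-2-3 with bisection {0,1} | {2,3}, only the
# middle edge crosses, so at most one unit of flow can be routed
flow = route_across_bisection(nx.path_graph(4), [0, 1], [2, 3])
```

A small routed flow certifies a sparse cut between the two sides; a large flow certifies a well-connected matching across them, which is exactly the dichotomy the game exploits.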


conference on innovations in theoretical computer science | 2017

Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent

Lorenzo Orecchia; Zeyuan Allen-Zhu

First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by linearly coupling the two. We show how to reconstruct Nesterov's accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov's original proofs. We also demonstrate the power of linear coupling by extending it to many other settings to which Nesterov's methods do not apply.
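The coupling idea can be illustrated in the simplest Euclidean setting, where the mirror step reduces to an aggregated gradient step and the iterate is a convex combination of the two sequences. This is a toy sketch under our own choice of step sizes and test function, not the paper's general construction:

```python
import numpy as np

def linear_coupling(grad, x0, L, T):
    """Toy Euclidean instance of linearly coupling a gradient step
    (primal progress) with a mirror-descent step (dual progress).
    Step-size schedule is our own illustrative choice.

    grad: gradient oracle of an L-smooth convex function
    """
    y = z = np.asarray(x0, dtype=float)
    for k in range(1, T + 1):
        alpha = k / (2.0 * L)          # growing mirror-descent step size
        tau = 2.0 / (k + 2)            # coupling weight
        x = tau * z + (1 - tau) * y    # linearly couple the two sequences
        g = grad(x)
        y = x - g / L                  # gradient step: primal progress
        z = z - alpha * g              # mirror step (Euclidean): dual progress
    return y

# minimize f(x) = ||x - 1||^2 / 2, whose minimizer is the all-ones vector
sol = linear_coupling(lambda x: x - 1.0, np.zeros(3), L=1.0, T=50)
```

With appropriately chosen weights, this three-sequence pattern recovers the structure of Nesterov's accelerated method, which is the reconstruction the abstract refers to.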


symposium on the theory of computing | 2015

Nearly-Linear Time Positive LP Solver with Faster Convergence Rate

Zeyuan Allen-Zhu; Lorenzo Orecchia

Positive linear programs (LPs), also known as packing and covering linear programs, are an important class of problems that bridges computer science, operations research, and optimization. Efficient algorithms for solving such LPs have received significant attention in the past 20 years [2, 3, 4, 6, 7, 9, 11, 15, 16, 18, 19, 21, 24, 25, 26, 29, 30]. Unfortunately, all known nearly-linear time algorithms for producing (1+ε)-approximate solutions to positive LPs have a running time dependence that is at least proportional to 1/ε². This is also known as an O(1/√T) convergence rate and is particularly poor in many applications. In this paper, we leverage insights from optimization theory to break this longstanding barrier. Our algorithms solve the packing LP in time Õ(N/ε) and the covering LP in time Õ(N/ε^1.5). At a high level, they can be described as linear couplings of several first-order descent steps. This is the first application of our linear coupling technique (see [1]) to problems that are not amenable to black-box applications of known iterative algorithms in convex optimization. Our work also introduces a sequence of new techniques, including the stochastic and non-symmetric execution of gradient truncation operations, which may be of independent interest.


symposium on the theory of computing | 2015

Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates

Zeyuan Allen-Zhu; Zhenyu Liao; Lorenzo Orecchia

In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [11]. While previous constructions required Ω(n^4) running time [11, 45], our sparsification routine can be implemented in almost-quadratic running time O(n^(2+ε)). The fundamental conceptual novelty of our work is the leveraging of a strong connection between sparsification and a regret minimization problem over density matrices. This connection was known to provide an interpretation of the randomized sparsifiers of Spielman and Srivastava [39] via the application of matrix multiplicative weight updates (MWU) [17, 43]. In this paper, we explain how matrix MWU naturally arises as an instance of the Follow-the-Regularized-Leader framework and generalize this approach to yield a larger class of updates. This new class allows us to accelerate the construction of linear-sized spectral sparsifiers and gives novel insight into the motivation behind Batson, Spielman and Srivastava [11].
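The scalar analogue of the FTRL-to-MWU connection is easy to state: minimizing cumulative loss plus an entropy regularizer over the simplex has a closed-form solution that is exactly the multiplicative weights update. A minimal sketch (the step size and loss sequence are our own illustrative choices; the paper works with the matrix version over density matrices):

```python
import numpy as np

def ftrl_entropy(losses, eta):
    """Follow-the-Regularized-Leader over the probability simplex with an
    entropy regularizer. The closed form of each FTRL step,
    p ∝ exp(-eta * cumulative_loss), is exactly the multiplicative
    weights update -- the scalar analogue of the matrix-MWU view."""
    cum = np.zeros(losses.shape[1])
    plays = []
    for loss in losses:
        w = np.exp(-eta * cum)        # minimizer of <cum, p> + (1/eta) * sum p ln p
        plays.append(w / w.sum())     # normalize onto the simplex
        cum += loss                   # accumulate observed losses
    return np.array(plays)

# coordinate 1 incurs no loss, so the played distribution concentrates on it
plays = ftrl_entropy(np.tile([1.0, 0.0, 1.0], (50, 1)), eta=0.5)
```

Swapping the entropy regularizer for other strongly convex regularizers is what yields the larger class of updates the abstract mentions.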


symposium on experimental and efficient algorithms | 2009

Empirical Evaluation of Graph Partitioning Using Spectral Embeddings and Flow

Kevin J. Lang; Michael W. Mahoney; Lorenzo Orecchia

We present initial results from the first empirical evaluation of a graph partitioning algorithm inspired by the Arora-Rao-Vazirani algorithm of [5], which combines spectral and flow methods in a novel way. We have studied the parameter space of this new algorithm, e.g., examining the extent to which different parameter settings interpolate between a more spectral and a more flow-based approach, and we have compared its results to those from previously known and optimized algorithms such as Metis.


conference on innovations in theoretical computer science | 2018

Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method

Jelena Diakonikolas; Lorenzo Orecchia

We provide a novel accelerated first-order method that achieves the asymptotically optimal convergence rate for smooth functions in the first-order oracle model. Until now, Nesterov's Accelerated Gradient Descent (AGD) and variations thereof were the only methods achieving acceleration in this standard black-box model. In contrast, our algorithm is significantly different from AGD, as it relies on a predictor-corrector approach similar to that used by Mirror-Prox [Nemirovski, 2004] and Extra-Gradient Descent [Korpelevich, 1977] in the solution of convex-concave saddle-point problems. For this reason, we dub our algorithm Accelerated Extra-Gradient Descent (AXGD). Its construction is motivated by the discretization of an accelerated continuous-time dynamics [Krichene et al., 2015] using the classical implicit Euler method. Our analysis explicitly shows the effects of discretization through a conceptually novel primal-dual viewpoint. Moreover, we show that the method is quite general: it attains optimal convergence rates for other classes of objectives (e.g., those with generalized smoothness properties or that are non-smooth and Lipschitz-continuous) using appropriate choices of step lengths. Finally, we present experiments showing that our algorithm matches the performance of Nesterov's method, while appearing more robust to noise in some cases.
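The predictor-corrector pattern the abstract refers to is the classical extra-gradient iteration, which we can sketch on the standard bilinear saddle point where plain descent-ascent spirals outward. This is the un-accelerated Korpelevich scheme only; AXGD adds acceleration on top of this pattern, which the sketch does not attempt (step size and test problem are our own choices):

```python
import numpy as np

def extra_gradient(field, z0, eta, T):
    """Classical extra-gradient (predictor-corrector) iteration for a
    monotone vector field: take a tentative step, then re-step from the
    original point using the gradient at the predicted point."""
    z = np.asarray(z0, dtype=float)
    for _ in range(T):
        z_pred = z - eta * field(z)       # predictor: tentative step
        z = z - eta * field(z_pred)       # corrector: step using predicted field
    return z

# bilinear saddle point min_x max_y x*y: plain gradient descent-ascent
# diverges here, while extra-gradient converges to the saddle (0, 0)
F = lambda z: np.array([z[1], -z[0]])     # descent-ascent direction field
z = extra_gradient(F, np.array([1.0, 1.0]), eta=0.1, T=2000)
```

The corrector's use of the gradient at the predicted point is what damps the rotation that defeats the naive one-step method on saddle-point problems.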


Mathematical Programming | 2018

Nearly linear-time packing and covering LP solvers

Zeyuan Allen-Zhu; Lorenzo Orecchia

Packing and covering linear programs (PC-LPs) form an important class of linear programs (LPs) across computer science, operations research, and optimization. Luby and Nisan (in: STOC, ACM Press, New York, 1993) constructed an iterative algorithm for approximately solving PC-LPs in nearly linear time, where the time complexity scales nearly linearly in N, the number of nonzero entries of the matrix, and polynomially in 1/ε, the accuracy parameter.


Journal of Machine Learning Research | 2012

A local spectral method for graphs: with applications to improving graph partitions and exploring data graphs locally

Michael W. Mahoney; Lorenzo Orecchia; Nisheeth K. Vishnoi


international conference on machine learning | 2011

Implementing regularization implicitly via approximate eigenvector computation

Lorenzo Orecchia; Michael W. Mahoney



symposium on discrete algorithms | 2011

Towards an SDP-based approach to spectral methods: a nearly-linear-time algorithm for graph partitioning and decomposition

Lorenzo Orecchia; Nisheeth K. Vishnoi

Collaboration


Dive into Lorenzo Orecchia's collaborations. Top co-authors:

Nisheeth K. Vishnoi (École Polytechnique Fédérale de Lausanne)
Zeyuan Allen-Zhu (Massachusetts Institute of Technology)
Leonard J. Schulman (California Institute of Technology)
Maryam Fazel (University of Washington)