Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John C. Duchi is active.

Publication


Featured research published by John C. Duchi.


IEEE Transactions on Automatic Control | 2012

Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

John C. Duchi; Alekh Agarwal; Martin J. Wainwright

The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself from the effects of communication, which depend on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and we confirm the sharpness of this prediction both with theoretical lower bounds and with simulations for various networks. Our approach covers the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
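To make the update rule concrete, here is a minimal NumPy sketch of dual subgradient averaging over a network. It assumes an unconstrained problem, the proximal function ψ(x) = ½‖x‖², a doubly stochastic mixing matrix P matched to the network topology, and caller-supplied local subgradient oracles; the function name and step-size schedule are illustrative, not the paper's reference implementation.

import numpy as np

# Sketch of distributed dual averaging: each node mixes its neighbors' dual
# states, adds its own local subgradient, and takes a proximal step.
def distributed_dual_averaging(subgrads, P, dim, num_iters=1000):
    n = len(subgrads)                     # number of nodes
    z = np.zeros((n, dim))                # dual states (accumulated subgradients)
    x = np.zeros((n, dim))                # primal iterates
    x_avg = np.zeros((n, dim))            # running averages returned as the answer
    for t in range(1, num_iters + 1):
        g = np.stack([subgrads[i](x[i]) for i in range(n)])  # local subgradients
        z = P @ z + g                     # communicate: mix dual states along network edges
        step = 1.0 / np.sqrt(t)           # nonincreasing step size alpha(t)
        x = -step * z                     # prox step for psi(x) = 0.5 * ||x||^2
        x_avg += (x - x_avg) / t          # running average of each node's iterates
    return x_avg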


Conference on Decision and Control | 2012

Distributed delayed stochastic optimization

Alekh Agarwal; John C. Duchi

We analyze the convergence of gradient-based optimization algorithms that base their updates on delayed stochastic gradient information. The main application of our results is to gradient-based distributed optimization algorithms in which a master node performs parameter updates while worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to asynchrony. We take motivation from statistical problems where the size of the data is so large that it cannot fit on one computer; with the advent of huge datasets in biology, astronomy, and the internet, such problems are now common. Our main contribution is to show that, for smooth stochastic problems, the delays are asymptotically negligible and we can achieve order-optimal convergence results. We exhibit n-node architectures whose optimization error in stochastic problems, in spite of asynchronous delays, scales asymptotically as O(1/√(nT)) after T iterations. This rate is known to be optimal for a distributed system with n nodes even in the absence of delays. We additionally complement our theoretical results with numerical experiments on a logistic regression task.
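A minimal serial simulation of the master/worker update with a fixed delay follows, assuming a caller-supplied stochastic gradient oracle; the delay model, step sizes, and names are illustrative rather than the paper's exact protocol.

from collections import deque
import numpy as np

# Sketch of delayed stochastic gradient descent: the master applies each
# gradient tau iterations after the iterate at which a worker computed it.
def delayed_sgd(stoch_grad, x0, tau=4, num_iters=1000):
    x = np.asarray(x0, dtype=float).copy()
    in_flight = deque()                   # gradients still "in transit" from workers
    for t in range(1, num_iters + 1):
        in_flight.append(stoch_grad(x))   # a worker evaluates a gradient at the current x
        if len(in_flight) > tau:
            g = in_flight.popleft()       # the master receives a tau-step-old gradient
            step = 1.0 / np.sqrt(t)       # step size; the analysis suggests delay-aware tuning
            x = x - step * g              # ...and applies the delayed update
    return x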


International Conference on Machine Learning | 2009

Boosting with structural sparsity

John C. Duchi; Yoram Singer

We derive generalizations of AdaBoost and related gradient-based coordinate descent methods that incorporate sparsity-promoting penalties for the norm of the predictor that is being learned. The end result is a family of coordinate descent algorithms that integrate forward feature induction and back-pruning through regularization and give an automatic stopping criterion for feature induction. We study penalties based on the l1, l2, and l∞ norms of the predictor and introduce mixed-norm penalties that build upon the initial penalties. The mixed-norm regularizers facilitate structural sparsity in parameter space, which is a useful property in multiclass prediction and other related tasks. We report empirical results that demonstrate the power of our approach in building accurate and structurally sparse models.
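A minimal sketch of the flavor of these algorithms with a plain ℓ1 penalty and squared loss (the loss, step size, and stopping rule here are illustrative assumptions, not the paper's exact boosting updates): at each round the steepest coordinate is induced, the update is soft-thresholded, and the penalty supplies an automatic stopping criterion.

import numpy as np

def soft_threshold(v, lam):
    # Shrink a scalar toward zero; coordinates can be pruned back to exactly 0.
    return np.sign(v) * max(abs(v) - lam, 0.0)

def l1_coordinate_boosting(X, y, lam=0.1, lr=0.5, num_rounds=200):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(num_rounds):
        grad = X.T @ (X @ w - y) / n      # gradient of the squared loss
        j = int(np.argmax(np.abs(grad)))  # feature induction: pick the steepest coordinate
        if abs(grad[j]) <= lam:           # automatic stopping: the penalty dominates the gain
            break
        w[j] = soft_threshold(w[j] - lr * grad[j], lr * lam)  # update with back-pruning
    return w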


SIAM Journal on Optimization | 2012

Randomized Smoothing for Stochastic Optimization

John C. Duchi; Peter L. Bartlett; Martin J. Wainwright

We analyze convergence rates of stochastic optimization procedures for non-smooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for non-smooth optimization. We give several applications of our results to statistical estimation problems, and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
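The core device is easy to state in code: perturb the point, average subgradients at the perturbed points, and feed the result to an accelerated method as if it were a gradient of the smoothed objective. A minimal sketch of that estimator follows, assuming a caller-supplied subgradient oracle; the Gaussian perturbation, sample count, and names are illustrative.

import numpy as np

def smoothed_gradient(subgrad, x, mu=0.1, m=10, rng=None):
    # Estimate the gradient of the smoothed objective f_mu(x) = E[f(x + mu * Z)]
    # by averaging subgradients of f at randomly perturbed points.
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x, dtype=float)
    for _ in range(m):
        z = rng.standard_normal(x.shape)  # random perturbation direction
        g += subgrad(x + mu * z)          # subgradient at the perturbed point
    return g / m                          # averaging reduces the estimator's variance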


SIAM Journal on Optimization | 2018

Accelerated Methods for Non-Convex Optimization

Yair Carmon; John C. Duchi; Oliver Hinder; Aaron Sidford

We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives. In a time O(ε^{-7/4} log(1/ε)), the method finds an ε-stationary point, meaning a point x such that ‖∇f(x)‖ ≤ ε.
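For orientation (a standard comparison under the same smoothness assumptions, not an additional claim from this listing), the stationarity criterion and the rates can be written as

\[ \|\nabla f(x)\| \le \epsilon, \qquad T_{\text{accelerated}} = O\!\left(\epsilon^{-7/4}\log\tfrac{1}{\epsilon}\right) \quad \text{vs.} \quad T_{\text{gradient descent}} = O\!\left(\epsilon^{-2}\right). \]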


IEEE Transactions on Information Theory | 2015

Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations

John C. Duchi; Michael I. Jordan; Martin J. Wainwright; Andre Wibisono



IEEE Transactions on Information Theory | 2013

The Generalization Ability of Online Algorithms for Dependent Data

Alekh Agarwal; John C. Duchi



Annals of Statistics | 2013

The asymptotics of ranking algorithms

John C. Duchi; Lester W. Mackey; Michael I. Jordan



Allerton Conference on Communication, Control, and Computing | 2012

Dual averaging for distributed optimization

John C. Duchi; Alekh Agarwal; Martin J. Wainwright



Journal of the American Statistical Association | 2018

Minimax Optimal Procedures for Locally Private Estimation

John C. Duchi; Michael I. Jordan; Martin J. Wainwright


Collaboration


Dive into John C. Duchi's collaborations.

Top Co-Authors

Yuchen Zhang

University of California


Yair Carmon

Technion – Israel Institute of Technology
