IEEE Transactions on Automatic Control | 2019

Fenchel Dual Gradient Methods for Distributed Convex Optimization Over Time-Varying Networks


Abstract


We develop a family of Fenchel dual gradient methods for solving constrained, strongly convex, but not necessarily smooth multi-agent optimization problems over time-varying networks. The proposed algorithms are constructed on the basis of weighted Fenchel dual gradients and can be implemented in a fully decentralized fashion. We show that the proposed algorithms drive all the agents to both primal and dual optimality at sublinear rates under a standard connectivity condition. Compared with the existing distributed optimization methods that also have convergence rate guarantees over time-varying networks, our algorithms are able to address constrained problems and have better scalability with respect to the network size and the time needed to reach connectivity. The competitive performance of the Fenchel dual gradient methods is demonstrated via simulations.
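The paper's exact algorithms and weighting scheme are not reproduced here, but the underlying mechanism of dual gradient methods for distributed optimization can be illustrated with a toy sketch: each agent minimizes its local Lagrangian in closed form, while multipliers on the network edges ascend along the dual gradient, which is simply the disagreement across each edge. Everything below (the quadratic costs `a`, centers `c`, the path graph, the step size `alpha`) is an illustrative assumption, not the paper's algorithm or notation:

```python
# Toy dual gradient ascent for a consensus-constrained problem:
#   minimize  sum_i f_i(x_i)   subject to  x_i = x_j on each edge (i, j),
# with strongly convex quadratics f_i(x) = a_i/2 * (x - c_i)^2.
# Strong convexity of f_i makes the dual gradient Lipschitz, so a small
# constant step size suffices. All values here are illustrative.

a = [1.0, 2.0, 4.0]            # strong-convexity moduli of the local costs
c = [0.0, 3.0, 6.0]            # minimizers of the local costs
edges = [(0, 1), (1, 2)]       # a connected path graph on 3 agents
lam = {e: 0.0 for e in edges}  # one dual multiplier per edge constraint
alpha = 0.3                    # dual step size (illustrative choice)

for _ in range(500):
    # Primal step: agent i minimizes f_i(x_i) + v_i * x_i in closed form,
    # where v_i aggregates the multipliers of its incident edges.
    v = [0.0] * len(a)
    for (i, j), l in lam.items():
        v[i] += l
        v[j] -= l
    x = [c[i] - v[i] / a[i] for i in range(len(a))]
    # Dual ascent step: the dual gradient is the edge disagreement x_i - x_j.
    for (i, j) in edges:
        lam[(i, j)] += alpha * (x[i] - x[j])

# The consensus optimum is the a-weighted average of the centers c.
x_star = sum(ai * ci for ai, ci in zip(a, c)) / sum(a)
```

After the loop, every `x[i]` sits at the weighted average `x_star = 30/7`, showing how purely edge-local dual updates drive all agents to the common primal optimum.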

Volume 64
Pages 4629-4636
DOI 10.1109/TAC.2019.2901829
Language English
Journal IEEE Transactions on Automatic Control
