Soomin Lee
University of Illinois at Urbana–Champaign
Publications
Featured research published by Soomin Lee.
IEEE Journal of Selected Topics in Signal Processing | 2013
Soomin Lee; Angelia Nedic
The random projection algorithm is of interest for constrained optimization when the constraint set is not known in advance or the projection operation onto the whole constraint set is computationally prohibitive. This paper presents a distributed random projection algorithm for constrained convex optimization problems that can be used by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constraint set. We prove that the iterates of all agents converge to the same point in the optimal set almost surely. Experiments on distributed support vector machines demonstrate the good performance of the algorithm.
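A minimal sketch can convey the shape of the update described in this abstract: mix neighbors' iterates, take a local gradient step, then project onto one randomly sampled constraint component. Everything concrete below (quadratic objectives, halfspace constraints, and a fixed mixing matrix in place of the time-varying network) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

# Toy instance: each agent i minimizes f_i(x) = ||x - c_i||^2 subject to x
# lying in an intersection of halfspaces {x : a_j^T x <= b_j}. Per step, an
# agent mixes neighbors' iterates, takes a gradient step, then projects onto
# ONE randomly sampled halfspace (the "random projection" idea).

rng = np.random.default_rng(0)
n_agents, dim, n_constraints = 4, 3, 5

targets = rng.normal(size=(n_agents, dim))        # c_i for each agent
A = rng.normal(size=(n_constraints, dim))         # halfspace normals a_j
b = np.abs(rng.normal(size=n_constraints)) + 1.0  # offsets b_j (0 is feasible)

def project_halfspace(x, a, beta):
    """Euclidean projection of x onto {y : a^T y <= beta}."""
    viol = a @ x - beta
    return x if viol <= 0 else x - (viol / (a @ a)) * a

x = rng.normal(size=(n_agents, dim))              # local iterates x_i
for k in range(1, 5001):
    alpha = 1.0 / k                                # diminishing stepsize
    W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing
    v = W @ x                                      # consensus mixing step
    for i in range(n_agents):
        grad = 2.0 * (v[i] - targets[i])           # gradient of ||x - c_i||^2
        j = rng.integers(n_constraints)            # sample one constraint
        x[i] = project_halfspace(v[i] - alpha * grad, A[j], b[j])

print("agent iterates (should nearly agree):", x.round(3))
```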
Siam Journal on Optimization | 2014
Angelia Nedic; Soomin Lee
This paper considers the stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use of these weighted averages, we show that the known optimal rates can be obtained with simpler algorithms than those currently existing in the literature. Specifically, by suitably choosing the stepsize values, one can obtain the rate of the order 1/k for strongly convex functions, and the rate 1/√k for general convex functions (not necessarily differentiable). Furthermore, for the latter case, it is shown that a stochastic subgradient mirror-descent with iterate averaging converges (along a subsequence) to an optimal solution, almost surely, even with the stepsize of the form 1/√(1+k), which was not previously known. (A toy sketch of the weighted-averaging scheme follows the entries below.)
IEEE Transactions on Automatic Control | 2016
Soomin Lee; Angelia Nedic
advances in computing and communications | 2015
Angelia Nedic; Soomin Lee; Maxim Raginsky
conference on decision and control | 2012
Soomin Lee; Angelia Nedic
conference on decision and control | 2013
Soomin Lee; Angelia Nedic
international conference on tools with artificial intelligence | 2007
B.W. Wan; Soomin Lee
conference on decision and control | 2016
Soomin Lee; Michael M. Zavlanos
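To make the weighted-averaging idea from the mirror-descent abstract above concrete, here is a minimal sketch under simplifying assumptions of my own: a one-dimensional strongly convex objective, the Euclidean mirror map (under which mirror descent reduces to plain stochastic gradient descent), stepsize 1/(μk), and averaging weights proportional to k. The specific problem and weights are illustrative, not the paper's.

```python
import numpy as np

# Sketch: SGD on the strongly convex f(x) = (x - 1)^2 with noisy gradients,
# plus a weighted running average with weight w_k = k, i.e. heavier weight
# on later iterates -- the flavor of averaging the abstract describes.

rng = np.random.default_rng(1)
x = 5.0
num, den = 0.0, 0.0
for k in range(1, 20001):
    g = 2.0 * (x - 1.0) + rng.normal(scale=1.0)   # noisy subgradient of f
    x -= (1.0 / (2.0 * k)) * g                    # alpha_k = 1/(mu*k), mu = 2
    num += k * x                                  # accumulate weighted sum
    den += k
x_bar = num / den                                 # weighted iterate average
print(f"last iterate error {abs(x - 1):.4f}, "
      f"weighted average error {abs(x_bar - 1):.4f}")
```

The weighted average typically lands noticeably closer to the minimizer than the noisy last iterate, which is the practical payoff of the scheme.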
conference on decision and control | 2016
Soomin Lee; Alejandro Ribeiro; Michael M. Zavlanos
We consider a distributed constrained convex optimization problem over a multi-agent network with no central coordinator. We propose a completely decentralized and asynchronous gossip-based random projection (GRP) algorithm that solves the distributed problem using only local communications and computations. We analyze the convergence properties of the algorithm for a diminishing and a constant stepsize, both uncoordinated among agents. For a diminishing stepsize, we prove that the iterates of all agents converge to the same optimal point with probability 1. For a constant stepsize, we establish an error bound on the expected distance from the iterates of the algorithm to the optimal point. We also provide simulation results on a distributed robust model predictive control problem.
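As a rough illustration of the gossip mechanics (not the paper's GRP algorithm as stated, which handles general graphs and uncoordinated stepsizes), the sketch below wakes a random pair of agents at each tick, averages their iterates, and has each take a gradient step followed by a projection onto a single local constraint. The ball constraint and quadratic objectives are assumed for the toy example.

```python
import numpy as np

# Toy problem: minimize sum_i ||x - c_i||^2 over the ball {||x|| <= 1},
# via asynchronous pairwise gossip plus random projection.

rng = np.random.default_rng(2)
n_agents, dim = 5, 2
targets = rng.normal(size=(n_agents, dim))   # local objective centers c_i
x = rng.normal(size=(n_agents, dim))         # local iterates

def project_ball(y, radius=1.0):
    """Euclidean projection onto {||y|| <= radius}."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y

for k in range(1, 20001):
    i, j = rng.choice(n_agents, size=2, replace=False)  # wake a random pair
    avg = 0.5 * (x[i] + x[j])                           # gossip averaging
    alpha = 1.0 / k           # one shared diminishing stepsize, for simplicity
    for a in (i, j):
        grad = 2.0 * (avg - targets[a])                 # local gradient
        x[a] = project_ball(avg - alpha * grad)         # random-projection step

print("iterates after gossip (should nearly agree):", x.round(3))
```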
ieee global conference on signal and information processing | 2013
Soomin Lee; Angelia Nedic
We consider a decentralized online convex optimization problem in a static undirected network of agents, where each agent controls only a coordinate (or a part) of the global decision vector. For such a problem, we propose a decentralized variant of Nesterov's primal-dual algorithm with dual averaging. To mitigate disagreements on the primal-vector updates, the agents implement a generalization of the local information-exchange dynamics recently proposed by Li and Marden [1]. We show that the regret has sublinear growth O(√T) in the time horizon T when the stepsize is of the form 1/√t and the objective functions are Lipschitz-continuous convex functions with Lipschitz gradients. We prove an analogous bound on the expected regret for the stochastic variant of the algorithm.
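The network layer and the Li-Marden exchange dynamics are beyond a short sketch, but the dual-averaging core behind the O(√T) regret can be shown in a single coordinate. The loss stream f_t(x) = (x − s_t)² and the proximal weight 2√t below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Nesterov-style dual averaging on a stream of convex losses f_t(x) = (x - s_t)^2:
# accumulate gradients in z, then play x_{t+1} = argmin_x { z*x + sqrt(t)*x^2 },
# i.e. a prox step whose weight grows like sqrt(t). Regret vs. the best fixed
# point in hindsight should grow roughly like sqrt(T).

rng = np.random.default_rng(3)
T = 10000
z, x = 0.0, 0.0
cum_loss = 0.0
s_all = rng.normal(loc=0.5, scale=1.0, size=T)   # loss targets s_t
for t in range(1, T + 1):
    s = s_all[t - 1]
    cum_loss += (x - s) ** 2                     # loss of our play
    z += 2.0 * (x - s)                           # accumulate gradient
    x = -z / (2.0 * np.sqrt(t))                  # dual-averaging update, ~1/sqrt(t)
best = np.mean(s_all)                            # best fixed point in hindsight
regret = cum_loss - np.sum((best - s_all) ** 2)
print(f"regret after T={T}: {regret:.1f}  (regret/T = {regret / T:.4f})")
```

Running this shows regret/T shrinking toward zero, the sublinear-growth behavior the bound describes.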