Archive | 2021

Randomized Smoothing Variance Reduction Method for Large-Scale Non-smooth Convex Optimization


Abstract


We consider a new method for minimizing the average of a large number of non-smooth convex functions. Such problems arise frequently in machine learning but are computationally challenging. We apply an implementable randomized smoothing method and propose a multistage scheme that progressively reduces the variance of the gradient estimator of the smoothed functions. Our algorithm achieves a linear convergence rate, and both its time complexity and its gradient complexity are superior to those of the current standard algorithms for non-smooth minimization as well as subgradient-based algorithms. Moreover, our algorithm works well without an error-bound condition on the minimizing sequence and without the commonly imposed (but strong) smoothness and strong convexity assumptions. We show that our algorithm has wide applications in optimization and machine learning. As an illustrative example, we demonstrate experimentally that it performs well on large-scale ranking problems and risk-aware portfolio optimization problems.
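The core idea described above can be illustrated with a generic sketch (this is not the authors' algorithm; the smoothing parameter schedule, batch sizes, and test function below are all illustrative assumptions). A non-smooth convex function f is replaced by the Gaussian-smoothed surrogate f_mu(x) = E_u[f(x + mu*u)], whose gradient is estimated by Monte Carlo sampling; a multistage schedule then shrinks mu and grows the sample batch to reduce the estimator's variance stage by stage:

```python
import numpy as np

def smoothed_grad(f, x, mu, n_samples, rng):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed
    surrogate f_mu(x) = E_u[f(x + mu*u)], u ~ N(0, I)."""
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        # one-point finite-difference estimator of the smoothed gradient
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / n_samples

# Demo: minimize the non-smooth f(x) = ||x||_1 by gradient descent on the
# smoothed surrogate. Each stage halves mu and doubles the batch size --
# a crude stand-in for a multistage variance-reduction schedule.
f = lambda x: np.abs(x).sum()
rng = np.random.default_rng(0)
x = np.array([1.0, -1.0])
for stage in range(3):
    mu = 0.2 / 2**stage        # smaller smoothing each stage
    batch = 8 * 2**stage       # larger batch each stage (lower variance)
    for _ in range(100):
        x -= 0.05 * smoothed_grad(f, x, mu, batch, rng)
```

Shrinking mu brings the surrogate closer to the true non-smooth objective, while the growing batch keeps the gradient estimator's variance from blowing up as 1/mu increases.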

Volume 2
Pages 1-28
DOI 10.1007/S43069-021-00059-Y
Language English
