Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Guanghui Lan is active.

Publications


Featured research published by Guanghui Lan.


SIAM Journal on Optimization | 2008

Robust Stochastic Approximation Approach to Stochastic Programming

Arkadii S. Nemirovski; Anatoli Juditsky; Guanghui Lan; Alexander Shapiro

In this paper we consider optimization problems where the objective function is given in the form of an expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently exploit a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive with, and even significantly outperform, the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.
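
As a concrete illustration of the "properly modified SA" idea, here is a minimal Euclidean sketch: a long constant stepsize combined with averaging of the trajectory. The toy objective, the projection, and all constants below are assumptions for illustration, not taken from the paper.

    import numpy as np

    def robust_sa(grad_sample, project, x0, n_iters, stepsize, rng):
        # Robust SA sketch: a fixed (rather than 1/t) stepsize, plus averaging
        # of the iterates, which is what makes the method robust to tuning.
        x = x0.copy()
        x_sum = np.zeros_like(x0)
        for _ in range(n_iters):
            g = grad_sample(x, rng)            # stochastic subgradient at x
            x = project(x - stepsize * g)      # projected subgradient step
            x_sum += x
        return x_sum / n_iters                 # the averaged iterate is the output

    # Toy instance (assumed): minimize E||x - xi||^2 over the unit ball, xi ~ N(mu, I);
    # the minimizer is the projection of mu onto the ball.
    rng = np.random.default_rng(0)
    mu = np.array([2.0, 0.0])
    grad = lambda x, r: 2.0 * (x - (mu + r.standard_normal(2)))
    proj = lambda v: v / max(1.0, np.linalg.norm(v))
    print(robust_sa(grad, proj, np.zeros(2), 20000, 0.005, rng))  # approx [1, 0]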


Mathematical Programming | 2012

An optimal method for stochastic composite optimization

Guanghui Lan

This paper considers an important class of convex programming (CP) problems, namely stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers nonsmooth, smooth, and stochastic CP as special cases, a valid lower bound on the rate of convergence for solving these problems is known from the classic complexity theory of convex programming. Note, however, that optimization algorithms achieving this lower bound had not previously been developed. In this paper, we show that the simple mirror-descent stochastic approximation method exhibits the best-known rate of convergence for solving these problems. Our major contribution is to introduce the accelerated stochastic approximation (AC-SA) algorithm based on Nesterov's optimal method for smooth CP (Nesterov in Doklady AN SSSR 269:543–547, 1983; Nesterov in Math Program 103:127–152, 2005), and to show that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for SCO. To the best of our knowledge, it is also the first universally optimal algorithm in the literature for solving nonsmooth, smooth, and stochastic CP problems. We illustrate the significant advantages of the AC-SA algorithm over existing methods in the context of solving a special but broad class of stochastic programming problems.
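
A minimal Euclidean sketch of the three-sequence accelerated SA loop described above. The weight alpha_t = 2/(t+1) is the standard Nesterov choice; the concrete stepsize schedule, the cap gamma, and the smoothness constant L are illustrative assumptions rather than the paper's exact policy.

    import numpy as np

    def ac_sa(grad_sample, prox, x0, n_iters, L, gamma, rng):
        # AC-SA sketch: a middle sequence x_md is queried for stochastic
        # gradients, a prox sequence x takes the steps, and an aggregate
        # sequence x_ag is returned.
        x, x_ag = x0.copy(), x0.copy()
        for t in range(1, n_iters + 1):
            alpha = 2.0 / (t + 1)
            # Assumed schedule: the smooth O(t/L) step, capped by a
            # noise-dependent constant gamma.
            lam = min(t / (4.0 * L), gamma)
            x_md = (1 - alpha) * x_ag + alpha * x
            x = prox(x - lam * grad_sample(x_md, rng))
            x_ag = (1 - alpha) * x_ag + alpha * x
        return x_ag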


Mathematical Programming | 2016

Accelerated gradient methods for nonconvex nonlinear and stochastic programming

Saeed Ghadimi; Guanghui Lan

In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order information, similarly to the gradient descent method. We then consider an important class of composite optimization problems and show that the AG method can solve them uniformly, i.e., by using the same aggressive stepsize policy as in the convex case, even if the problem turns out to be nonconvex. We demonstrate that the AG method exhibits an optimal rate of convergence if the composite problem is convex, and improves the best known rate of convergence if the problem is nonconvex. Based on the AG method, we also present new nonconvex stochastic approximation methods and show that they can improve a few existing rates of convergence for nonconvex stochastic optimization. To the best of our knowledge, this is the first time that the convergence of the AG method has been established for solving nonconvex nonlinear programming in the literature.
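
A sketch of the AG loop run on a smooth but possibly nonconvex f, with stationarity tracked via the smallest observed gradient norm. The "short" step beta and "long" step lam follow the usual pattern, but the exact constants are assumptions.

    import numpy as np

    def ag_nonconvex(grad, x0, n_iters, L):
        # Accelerated-gradient sketch for nonconvex smooth minimization: the
        # same aggressive (convex-style) stepsizes are used, and progress is
        # measured by the smallest gradient norm seen on the middle sequence.
        x, x_ag = x0.copy(), x0.copy()
        best_x, best_gn = x0.copy(), np.inf
        beta = 1.0 / (2.0 * L)                 # assumed constant "short" step
        for k in range(1, n_iters + 1):
            alpha = 2.0 / (k + 1)
            lam = k * beta / 2.0               # assumed "long" step, O(k/L)
            x_md = (1 - alpha) * x_ag + alpha * x
            g = grad(x_md)
            x = x - lam * g                    # long step drives acceleration
            x_ag = x_md - beta * g             # short step guarantees descent
            if np.linalg.norm(g) < best_gn:
                best_x, best_gn = x_md.copy(), np.linalg.norm(g)
        return best_x, best_gn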


SIAM Journal on Optimization | 2012

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework

Saeed Ghadimi; Guanghui Lan

In this paper we present a generic algorithmic framework, namely the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While the classical stochastic approximation algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies, can achieve optimal or nearly optimal rates of convergence for different classes of SCO problems within a given number of iterations. Moreover, we investigate these AC-SA algorithms in more detail, establishing the large-deviation results associated with the convergence rates and introducing an efficient validation procedure to check the accuracy of the generated solutions.
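
Within the same generic three-sequence loop, strong convexity changes only the stepsize policy. A sketch with an assumed O(1/(mu*t)) schedule for strong convexity modulus mu:

    import numpy as np

    def ac_sa_strongly_convex(grad_sample, prox, x0, n_iters, mu, rng):
        # Same three-sequence AC-SA scheme as above, but the prox-step size
        # now decays like O(1/(mu*t)), exploiting strong convexity.
        x, x_ag = x0.copy(), x0.copy()
        for t in range(1, n_iters + 1):
            alpha = 2.0 / (t + 1)
            lam = 2.0 / (mu * (t + 1))         # assumed strongly convex schedule
            x_md = (1 - alpha) * x_ag + alpha * x
            x = prox(x - lam * grad_sample(x_md, rng))
            x_ag = (1 - alpha) * x_ag + alpha * x
        return x_ag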


SIAM Journal on Imaging Sciences | 2015

An Accelerated Linearized Alternating Direction Method of Multipliers

Yuyuan Ouyang; Yunmei Chen; Guanghui Lan; Eduardo Pasiliao

We present a novel framework, namely the accelerated alternating direction method of multipliers (AADMM), for accelerating linearized ADMM. The basic idea of AADMM is to incorporate a multistep acceleration scheme into linearized ADMM. We demonstrate that, for solving a class of convex composite optimization problems with linear constraints, the rate of convergence of AADMM is better than that of linearized ADMM in terms of its dependence on the Lipschitz constant of the smooth component. Moreover, AADMM is capable of dealing with the situation in which the feasible region is unbounded, as long as the corresponding saddle point problem has a solution. A backtracking algorithm is also proposed to improve practical performance.
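
For orientation, here is a sketch of plain linearized ADMM for min_x f(x) + g(Ax), the scheme that AADMM accelerates by weaving in a multistep extrapolation; the splitting, variable names, and stepsize condition are assumptions for illustration.

    import numpy as np

    def linearized_admm(grad_f, prox_g, A, x0, rho, tau, n_iters):
        # Linearized ADMM for min_x f(x) + g(Ax), rewritten as
        # min f(x) + g(z) s.t. Ax = z. The x-subproblem is replaced by a
        # single gradient step on f(x) + (rho/2)||Ax - z + u||^2, hence
        # "linearized". An assumed stepsize condition: tau <= 1/(L_f + rho*||A||^2).
        x = x0.copy()
        z = A @ x0
        u = np.zeros(A.shape[0])              # scaled dual variable
        for _ in range(n_iters):
            x = x - tau * (grad_f(x) + rho * A.T @ (A @ x - z + u))
            z = prox_g(A @ x + u, 1.0 / rho)  # prox of g with parameter 1/rho
            u = u + A @ x - z                 # dual update
        return x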


SIAM Journal on Optimization | 2013

Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming

Saeed Ghadimi; Guanghui Lan

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm that applies a post-optimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and show that such a modification significantly improves the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.
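
A minimal sketch of the randomized-output device: plain SGD, but the returned iterate is drawn at random from the trajectory (uniformly here, which matches a constant stepsize). The two-phase variant described above would run this several times and keep the candidate with the smallest estimated gradient norm. Constants are assumptions.

    import numpy as np

    def rsg(grad_sample, x0, n_iters, stepsize, rng):
        # Randomized stochastic gradient sketch: run SGD on a possibly
        # nonconvex objective and output an iterate chosen at random, which
        # is what the guarantee on the expected squared gradient attaches to.
        xs = []
        x = x0.copy()
        for _ in range(n_iters):
            xs.append(x.copy())
            x = x - stepsize * grad_sample(x, rng)
        # Constant stepsize => uniform choice over iterates (assumed simplification).
        return xs[rng.integers(n_iters)]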


Mathematical Programming | 2016

Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization

Saeed Ghadimi; Guanghui Lan; Hongchao Zhang

This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the summation of a differentiable (possibly nonconvex) component and a certain nondifferentiable (but convex) component. In order to solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which a proper mini-batch of samples is taken at each iteration depending on the total budget of stochastic samples allowed. The RSPG algorithm also employs a general distance function, allowing it to take advantage of the geometry of the feasible region. The complexity of this algorithm is established in a unified setting, which shows that the algorithm is nearly optimal for convex stochastic programming. A post-optimization phase is also proposed to significantly reduce the variance of the solutions returned by the algorithm. In addition, based on the RSPG algorithm, a stochastic gradient-free algorithm, which uses only stochastic zeroth-order information, is also discussed. Some preliminary numerical results are provided.
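
A sketch combining the two ingredients named above: mini-batch averaging before a projected step, and (for the gradient-free variant) a Gaussian-smoothing finite-difference estimator. The estimator's form and all constants are assumptions in the spirit of the paper.

    import numpy as np

    def rspg(grad_sample, project, x0, n_iters, batch_size, stepsize, rng):
        # Mini-batch randomized stochastic projected gradient sketch: average
        # batch_size gradient samples, take a projected step, and return a
        # randomly selected iterate.
        xs = []
        x = x0.copy()
        for _ in range(n_iters):
            g = np.mean([grad_sample(x, rng) for _ in range(batch_size)], axis=0)
            x = project(x - stepsize * g)
            xs.append(x.copy())
        return xs[rng.integers(n_iters)]

    def zo_grad(f_sample, x, smoothing, rng):
        # Zeroth-order gradient estimate via Gaussian smoothing (assumed form):
        # u ~ N(0, I),  g = (F(x + mu*u) - F(x)) / mu * u.
        u = rng.standard_normal(x.shape)
        return (f_sample(x + smoothing * u, rng) - f_sample(x, rng)) / smoothing * u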


SIAM Journal on Optimization | 2013

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms

Saeed Ghadimi; Guanghui Lan

In this paper we study new stochastic approximation (SA) type algorithms, namely the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by Ghadimi and Lan in [SIAM J. Optim., 22 (2012), pp. 1469-1492]. Moreover, we introduce a multistage AC-SA algorithm, which possesses an optimal rate of convergence for solving strongly convex SCO problems in terms of its dependence not only on the target accuracy, but also on a number of problem parameters and the selection of initial points. To the best of our knowledge, this is the first time that such an optimal method has been presented in the literature. Our computational results show that these AC-SA algorithms can substantially outperform the classical SA and some other SA type algorithms for solving certain classes of strongly...
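
A sketch of the multistage device, reusing the ac_sa_strongly_convex sketch given earlier: restart the basic scheme from its own output with a growing per-stage budget, so each stage roughly halves the optimality gap. The doubling schedule is an assumption.

    def multistage_ac_sa(grad_sample, prox, x0, mu, n_stages, base_iters, rng):
        # Multistage AC-SA sketch (builds on ac_sa_strongly_convex above):
        # each stage restarts from the previous stage's output with a doubled
        # iteration budget, the standard restarting route to the optimal
        # strongly convex rate.
        x = x0.copy()
        for s in range(n_stages):
            x = ac_sa_strongly_convex(grad_sample, prox, x,
                                      base_iters * 2**s, mu, rng)
        return x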


SIAM Journal on Optimization | 2014

Optimal Primal-Dual Methods for a Class of Saddle Point Problems

Yunmei Chen; Guanghui Lan; Yuyuan Ouyang

We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPPs). The basic idea of this algorithm is to incorporate a multistep acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPPs, the APD method achieves the same optimal rate of convergence as Nesterov's smoothing technique. Our stochastic APD method exhibits an optimal rate of convergence for stochastic SPPs not only in terms of its dependence on the number of iterations, but also on a variety of problem parameters. To the best of our knowledge, this is the first time that such an optimal algorithm has been developed for stochastic SPPs in the literature. Furthermore, for both deterministic and stochastic SPPs, the developed APD algorithms can deal with the situation in which the feasible region is unbounded, as long as a saddle point exists. In the unbounded case, we incorporate the modified termination criterion introduced b...
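
For the bilinear case min_x max_y <Kx, y> + g(x) - h*(y), here is a sketch of a primal-dual loop with extrapolation (Chambolle-Pock-style steps); the APD method layers a multistep acceleration on top, which is elided here, and the stepsize condition tau*sigma*||K||^2 <= 1 is the usual assumption.

    import numpy as np

    def primal_dual(K, prox_g, prox_hstar, x0, y0, tau, sigma, n_iters):
        # Primal-dual sketch with extrapolation for the saddle point problem
        # min_x max_y <Kx, y> + g(x) - h*(y); assumes tau*sigma*||K||^2 <= 1.
        x, y = x0.copy(), y0.copy()
        x_bar = x0.copy()
        for _ in range(n_iters):
            y = prox_hstar(y + sigma * (K @ x_bar), sigma)   # dual prox step
            x_new = prox_g(x - tau * (K.T @ y), tau)         # primal prox step
            x_bar = 2.0 * x_new - x                          # extrapolation
            x = x_new
        return x, y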


Mathematical Programming | 2012

Validation analysis of mirror descent stochastic approximation method

Guanghui Lan; Arkadi Nemirovski; Alexander Shapiro

The main goal of this paper is to develop accuracy estimates for stochastic programming problems by employing stochastic approximation (SA) type algorithms. To this end we show that while running a Mirror Descent Stochastic Approximation procedure one can compute, with a small additional effort, lower and upper statistical bounds for the optimal objective value. We demonstrate that for a certain class of convex stochastic programs these bounds are comparable in quality with similar bounds computed by the sample average approximation method, while their computational cost is considerably smaller.
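
A sketch of the validation idea in the Euclidean case: while running the SA procedure, accumulate the sampled linear minorants f(x_t) + <g_t, x - x_t>; minimizing their average over the feasible set X gives a statistical lower bound on the optimal value, while a Monte Carlo estimate of f at the averaged iterate gives an upper bound. The helper solve_linear, which minimizes a linear function over X, is a hypothetical user-supplied routine, and the Euclidean setup is an assumption.

    import numpy as np

    def sa_with_bounds(f_sample, grad_sample, project, solve_linear,
                       x0, n_iters, stepsize, n_val, rng):
        # Mirror-descent SA (Euclidean sketch) that also returns statistical
        # lower/upper bounds on the optimal value.
        x = x0.copy()
        x_sum = np.zeros_like(x0)
        lin_const = 0.0
        lin_slope = np.zeros_like(x0)
        for _ in range(n_iters):
            g = grad_sample(x, rng)
            lin_const += f_sample(x, rng) - g @ x   # accumulates f(x_t) - <g_t, x_t>
            lin_slope += g
            x = project(x - stepsize * g)
            x_sum += x
        x_bar = x_sum / n_iters
        x_low = solve_linear(lin_slope)             # argmin over X of <lin_slope, x>
        lower = (lin_const + lin_slope @ x_low) / n_iters
        upper = np.mean([f_sample(x_bar, rng) for _ in range(n_val)])
        return x_bar, lower, upper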

Collaboration


Dive into Guanghui Lan's collaborations.

Top Co-Authors

Renato D. C. Monteiro
Georgia Institute of Technology

Yi Zhou
University of Florida

Zhaosong Lu
Simon Fraser University

Gail W. DePuy
University of Louisville

Alexander Shapiro
Georgia Institute of Technology