Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Saeed Ghadimi is active.

Publications


Featured research published by Saeed Ghadimi.


Mathematical Programming | 2016

Accelerated gradient methods for nonconvex nonlinear and stochastic programming

Saeed Ghadimi; Guanghui Lan

In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using first-order information, similarly to the gradient descent method. We then consider an important class of composite optimization problems and show that the AG method can solve them uniformly, i.e., by using the same aggressive stepsize policy as in the convex case, even if the problem turns out to be nonconvex. We demonstrate that the AG method exhibits an optimal rate of convergence if the composite problem is convex, and improves the best known rate of convergence if the problem is nonconvex. Based on the AG method, we also present new nonconvex stochastic approximation methods and show that they can improve a few existing rates of convergence for nonconvex stochastic optimization. To the best of our knowledge, this is the first time that the convergence of the AG method has been established for solving nonconvex nonlinear programming in the literature.
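For orientation, here is a minimal sketch of the classical Nesterov accelerated gradient loop that the paper generalizes. The function name `nesterov_ag`, the textbook momentum schedule, and the quadratic test problem are illustrative assumptions; the paper's stepsize policies for the nonconvex and stochastic cases differ.

```python
import numpy as np

def nesterov_ag(grad, x0, lipschitz, n_iters=200):
    # Classical Nesterov accelerated-gradient loop for smooth convex
    # minimization; the paper generalizes this scheme, and its nonconvex
    # stepsize policies differ from this textbook momentum schedule.
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)  # momentum extrapolation
        x_prev, x = x, y - grad(y) / lipschitz        # gradient step at y
    return x

# Example: minimize the quadratic f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of grad_f
print(nesterov_ag(grad_f, np.zeros(2), L))
```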


SIAM Journal on Optimization | 2012

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework

Saeed Ghadimi; Guanghui Lan

In this paper we present a generic algorithmic framework, namely the accelerated stochastic approximation (AC-SA) algorithm, for solving strongly convex stochastic composite optimization (SCO) problems. While classical stochastic approximation algorithms are asymptotically optimal for solving differentiable and strongly convex problems, the AC-SA algorithm, when employed with proper stepsize policies, can achieve optimal or nearly optimal rates of convergence for different classes of SCO problems within a given number of iterations. Moreover, we investigate these AC-SA algorithms in more detail, establishing the large-deviation results associated with their convergence rates and introducing an efficient validation procedure to check the accuracy of the generated solutions.
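For context, below is a minimal sketch of a classical stochastic approximation loop with weighted iterate averaging for a strongly convex problem, the kind of baseline the AC-SA framework accelerates. It is not AC-SA itself; the stepsize rule, the oracle, and the test problem are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def classical_sa(stoch_grad, x0, mu, n_iters=2000):
    # Classical stochastic approximation for a strongly convex problem
    # (modulus `mu`), with weighted iterate averaging. This is only the
    # kind of baseline the AC-SA framework accelerates, not AC-SA itself.
    x = np.asarray(x0, dtype=float)
    x_avg, w_sum = np.zeros_like(x), 0.0
    for k in range(1, n_iters + 1):
        x = x - (2.0 / (mu * (k + 1))) * stoch_grad(x)  # O(1/k) stepsize
        w_sum += k
        x_avg += (k / w_sum) * (x - x_avg)  # weight later iterates more
    return x_avg

# Example: minimize E[0.5 * ||x - z||^2] from noisy gradient observations
stoch_grad = lambda x: (x - 1.0) + 0.1 * rng.standard_normal(x.shape)
print(classical_sa(stoch_grad, np.zeros(3), mu=1.0))
```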


SIAM Journal on Optimization | 2013

Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming

Saeed Ghadimi; Guanghui Lan

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm that applies a post-optimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and show that such a modification significantly improves the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.
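The defining trick of the RSG method, returning an iterate at a randomly drawn index rather than the last one, can be sketched as follows. A constant stepsize and a uniform index distribution are simplifying assumptions; the paper ties the index distribution to the stepsize policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def rsg(stoch_grad, x0, stepsize, n_iters=500):
    # Run plain SGD but return the iterate at a randomly drawn index
    # rather than the last one: the randomized-termination idea behind
    # the RSG method. A uniform index matches the constant-stepsize case.
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_iters):
        xs.append(xs[-1] - stepsize * stoch_grad(xs[-1]))
    return xs[rng.integers(1, n_iters + 1)]  # random termination index R

# Example: noisy gradients of f(x) = 0.5 * ||x||^2
stoch_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(rsg(stoch_grad, np.ones(3), stepsize=0.05))
```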


Mathematical Programming | 2016

Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization

Saeed Ghadimi; Guanghui Lan; Hongchao Zhang

This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the sum of a differentiable (possibly nonconvex) component and a certain non-differentiable (but convex) component. To solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which a proper mini-batch of samples is taken at each iteration, depending on the total budget of stochastic samples allowed. The RSPG algorithm also employs a general distance function to take advantage of the geometry of the feasible region. The complexity of this algorithm is established in a unified setting, which shows the nearly optimal complexity of the algorithm for convex stochastic programming. A post-optimization phase is also proposed to significantly reduce the variance of the solutions returned by the algorithm. In addition, based on the RSPG algorithm, a stochastic gradient-free algorithm, which uses only stochastic zeroth-order information, is also discussed. Some preliminary numerical results are also provided.
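A minimal sketch of the mini-batch projected step follows, assuming a Euclidean projection onto a box as the feasible region (the paper allows a general prox/distance function) and reusing the random-index output rule from the RSG method; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rspg(stoch_grad, project, x0, stepsize, batch_size, n_iters=300):
    # Mini-batch projected stochastic gradient step, returning a random
    # iterate as in the RSG method. Euclidean projection stands in for
    # the paper's general distance-function (prox) machinery.
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iters):
        g = np.mean([stoch_grad(x) for _ in range(batch_size)], axis=0)
        x = project(x - stepsize * g)
        iterates.append(x)
    return iterates[rng.integers(n_iters)]

# Example: feasible region is the box [0, 1]^d
project_box = lambda x: np.clip(x, 0.0, 1.0)
stoch_grad = lambda x: (x - 0.3) + 0.1 * rng.standard_normal(x.shape)
print(rspg(stoch_grad, project_box, 0.5 * np.ones(4), stepsize=0.1, batch_size=8))
```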


SIAM Journal on Optimization | 2013

Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms

Saeed Ghadimi; Guanghui Lan

In this paper we study new stochastic approximation (SA) type algorithms, namely the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by Ghadimi and Lan in [SIAM J. Optim., 22 (2012), pp. 1469–1492]. Moreover, we introduce a multistage AC-SA algorithm, which possesses an optimal rate of convergence for solving strongly convex SCO problems in terms of its dependence not only on the target accuracy but also on a number of problem parameters and the selection of initial points. To the best of our knowledge, this is the first time that such an optimal method has been presented in the literature. Our computational results show that these AC-SA algorithms can substantially outperform the classical SA and some other SA type algorithms for solving certain classes of strongly...
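The multistage idea can be sketched generically: restart a base method from the previous stage's output with a growing budget. This is only the restart skeleton, under the assumption of a user-supplied `run_stage` routine (a hypothetical name); the paper's shrinking procedure also tightens the search domain at each stage.

```python
def multistage(run_stage, x0, n_stages=5):
    # Generic multistage restart skeleton: each stage reruns a base
    # method from the previous stage's output with a growing iteration
    # budget. `run_stage` is a hypothetical user-supplied routine; the
    # paper's procedure also shrinks the search domain per stage.
    x = x0
    for s in range(n_stages):
        x = run_stage(x, budget=2 ** s)
    return x
```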


Journal of the Operational Research Society | 2015

The single facility location problem with time-dependent weights and relocation cost over a continuous time horizon

Reza Zanjirani Farahani; W.Y. Szeto; Saeed Ghadimi

In this study, we investigate the problem of locating a facility in continuous space when the weight of each existing facility is a known linear function of time. The location of the new facility can be changed once over a continuous finite time horizon. Rectilinear distance and time- and location-dependent relocation costs are considered. The objective is to determine the optimal relocation time and the locations of the new facility before and after relocation so as to minimize the total location and relocation costs. We also propose an exact algorithm that, according to our computational results, solves the problem in polynomial time.
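As a hedged illustration of the objective, the sketch below evaluates the total cost of a candidate plan by time discretization. The function `total_cost` and its arguments are hypothetical names, the relocation cost is simplified to a scalar, and the paper's exact polynomial-time algorithm does not rely on grid evaluation.

```python
import numpy as np

def total_cost(points, w0, w1, loc_a, loc_b, t_switch, reloc_cost,
               horizon=1.0, n_grid=1000):
    # Numerically evaluate a candidate plan: serve the existing
    # facilities (rows of `points`) from `loc_a` before `t_switch` and
    # from `loc_b` afterwards, with rectilinear distances and weights
    # w_i(t) = w0_i + w1_i * t, plus a relocation cost (a scalar here;
    # the paper allows it to depend on time and location).
    ts = np.linspace(0.0, horizon, n_grid)
    cost = 0.0
    for t in ts:
        loc = loc_a if t < t_switch else loc_b
        w = w0 + w1 * t
        cost += np.sum(w * np.abs(points - loc).sum(axis=1)) * (horizon / n_grid)
    return cost + reloc_cost

# Illustrative data: three existing facilities in the plane
pts = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 3.0]])
w0, w1 = np.array([1.0, 2.0, 1.5]), np.array([0.5, -0.2, 0.1])
print(total_cost(pts, w0, w1, np.array([1.0, 1.0]), np.array([2.0, 1.0]),
                 t_switch=0.6, reloc_cost=3.0))
```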


Mathematical Programming | 2018

Conditional gradient type methods for composite nonlinear and stochastic optimization

Saeed Ghadimi

In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a (strongly) convex regularization term. While including a strongly convex term in the subproblems of the classical conditional gradient method improves its rate of convergence, it does not incur a per-iteration cost as high as that of general proximal type algorithms. More specifically, we present a unified analysis for the CGT method in the sense that it achieves the best known rate of convergence when the weakly smooth term is nonconvex and possesses (nearly) optimal complexity if it turns out to be convex. While implementing the CGT method requires explicitly estimating problem parameters such as the level of smoothness of the first term in the objective function, we also present a few variants of the method that relax this requirement. Unlike general parameter-free proximal type methods, these variants of the CGT method do not require any additional effort for computing (sub)gradients of the objective function and/or solving extra subproblems at each iteration. We then generalize these methods to the stochastic setting and present a few new complexity results. To the best of our knowledge, this is the first time that such complexity results have been presented for solving stochastic weakly smooth nonconvex and (strongly) convex optimization problems.
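For orientation, here is a minimal sketch of the plain conditional gradient (Frank-Wolfe) loop that the CGT method builds on, with a probability-simplex linear minimization oracle as an illustrative feasible set. The paper's CGT subproblem additionally carries a strongly convex term; this is only the classical baseline.

```python
import numpy as np

def conditional_gradient(grad, lmo, x0, n_iters=200):
    # Plain conditional gradient (Frank-Wolfe): each iteration calls a
    # linear minimization oracle over the feasible set instead of a
    # projection, with the standard open-loop stepsize 2/(k+1).
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iters + 1):
        v = lmo(grad(x))                # argmin of <g, v> over the feasible set
        x += (2.0 / (k + 1)) * (v - x)  # convex combination stays feasible
    return x

# Example LMO for the probability simplex: the vertex with the smallest
# gradient coordinate minimizes the linear subproblem.
def simplex_lmo(g):
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

grad_f = lambda x: x - np.array([0.2, 0.5, 0.3])  # grad of 0.5 * ||x - c||^2
print(conditional_gradient(grad_f, simplex_lmo, np.array([1.0, 0.0, 0.0])))
```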


Archive | 2015

Stochastic Approximation Methods and Their Finite-Time Convergence Properties

Saeed Ghadimi; Guanghui Lan

This chapter surveys some recent advances in the design and analysis of two classes of stochastic approximation methods: stochastic first- and zeroth-order methods for stochastic optimization. We focus on the finite-time convergence properties (i.e., iteration complexity) of these algorithms by providing bounds on the number of iterations required to achieve a certain accuracy. We point out that many of these complexity bounds are theoretically optimal for solving different classes of stochastic optimization problems.
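One standard construction behind stochastic zeroth-order methods of the kind surveyed here is the Gaussian-smoothing gradient estimator, sketched below; the smoothing parameter `mu` and the test function are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, mu=1e-4):
    # Gaussian-smoothing zeroth-order gradient estimator: two function
    # values along a random direction give an unbiased estimate of the
    # gradient of the smoothed function f_mu, which approximates the
    # true gradient for small mu.
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# Example: noisy estimate of grad f(x) = x for f(x) = 0.5 * ||x||^2
f = lambda x: 0.5 * np.sum(x ** 2)
print(zo_grad(f, np.ones(3)))
```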


IMA Journal of Management Mathematics | 2013

Coordination of advertising in supply chain management with cooperating manufacturer and retailers

Saeed Ghadimi; Ferenc Szidarovszky; Reza Zanjirani Farahani; Alireza Yousefzadeh Khiabani


arXiv: Optimization and Control | 2015

Generalized Uniformly Optimal Methods for Nonlinear Programming

Saeed Ghadimi; Guanghui Lan; Hongchao Zhang

Collaboration


Dive into Saeed Ghadimi's collaborations.

Top Co-Authors

Hongchao Zhang
Louisiana State University

W.Y. Szeto
University of Hong Kong