Publications


Featured research published by L. M. Graña Drummond.


SIAM Journal on Optimization | 2009

Newton's Method for Multiobjective Optimization

Jörg Fliege; L. M. Graña Drummond; B. F. Svaiter

We propose an extension of Newton's method for unconstrained multiobjective optimization (multicriteria optimization). This method does not use a priori chosen weighting factors or any other form of a priori ranking or ordering information for the different objective functions. Newton's direction at each iterate is obtained by minimizing the max-ordering scalarization of the variations on the quadratic approximations of the objective functions. The objective functions are assumed to be twice continuously differentiable and locally strongly convex. Under these hypotheses, the method, as in the classical case, converges locally and superlinearly to optimal points. Again as in the scalar case, if the second derivatives are Lipschitz continuous, the rate of convergence is quadratic. Our convergence analysis uses a Kantorovich-like technique. As a byproduct, existence of optima is obtained under semilocal assumptions.
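
The max-ordering Newton subproblem described above can be sketched numerically. The following is a hypothetical illustration, not the authors' code: at a point x, the direction s minimizes max_i grad_i . s + 0.5 s' H_i s over all objectives i, written here in epigraph form and solved with SciPy's SLSQP on a made-up two-objective quadratic example.

```python
import numpy as np
from scipy.optimize import minimize

def newton_direction(grads, hessians):
    """Min-max Newton direction: argmin_s max_i g_i.s + 0.5 s'H_i s,
    solved in epigraph form (variables z = (s, t), minimize t)."""
    n = grads[0].size
    cons = [
        {"type": "ineq",
         "fun": (lambda z, g=g, H=H:
                 z[-1] - g @ z[:n] - 0.5 * z[:n] @ H @ z[:n])}
        for g, H in zip(grads, hessians)
    ]
    res = minimize(lambda z: z[-1], np.zeros(n + 1),
                   constraints=cons, method="SLSQP")
    return res.x[:n]

# Two convex quadratics f1 = x1^2 + x2^2 and f2 = (x1 - 1)^2 + x2^2,
# evaluated at x = (0.5, 1): both Hessians are 2I.
g1, g2 = np.array([1.0, 2.0]), np.array([-1.0, 2.0])
H = 2.0 * np.eye(2)
s = newton_direction([g1, g2], [H, H])
# By symmetry the direction is (0, -1), pointing toward the Pareto set.
```

The epigraph reformulation turns the nonsmooth min-max into a smooth constrained problem, which is a standard way to handle such subproblems.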


Optimization | 1995

Full convergence of the steepest descent method with inexact line searches

Regina S. Burachik; L. M. Graña Drummond; Alfredo N. Iusem; B. F. Svaiter

Several finite procedures for determining the step size of the steepest descent method for unconstrained optimization, without performing exact one-dimensional minimizations, have been considered in the literature. The convergence analysis of these methods requires that the objective function have bounded level sets and that its gradient satisfy a Lipschitz condition, merely to establish stationarity of all cluster points. We consider two such procedures and prove, for a convex objective, convergence of the whole sequence to a minimizer without any level-set boundedness assumption and, for one of them, without any Lipschitz condition.
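
As a minimal sketch of the kind of inexact line search analyzed here, the following hypothetical example runs steepest descent with Armijo backtracking on a convex quadratic; the objective and constants are illustrative, not from the paper.

```python
import numpy as np

def steepest_descent(f, grad, x0, sigma=1e-4, beta=0.5, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - sigma * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

# Convex quadratic with minimizer (1, -2):
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
x_star = steepest_descent(f, grad, [5.0, 5.0])
```

The backtracking loop terminates after finitely many trial steps, which is exactly what makes such procedures "finite" in the sense used above.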


Computational Optimization and Applications | 2004

A Projected Gradient Method for Vector Optimization Problems

L. M. Graña Drummond; Alfredo N. Iusem

Vector optimization problems are a significant extension of multiobjective optimization, which has a large number of real life applications. In vector optimization the preference order is related to an arbitrary closed and convex cone, rather than the nonnegative orthant. We consider extensions of the projected gradient method to vector optimization, which work directly with vector-valued functions, without using scalar-valued objectives. We provide a direction which adequately substitutes for the projected gradient, and establish results which mirror those available for the scalar-valued case, namely stationarity of the cluster points (if any) without convexity assumptions, and convergence of the full sequence generated by the algorithm to a weakly efficient optimum in the convex case, under mild assumptions. We also prove that our results still hold when the search direction is only approximately computed.
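
A projected-gradient-type direction can be illustrated on a box-constrained example. This is a hedged sketch under the assumption that, for the Pareto (nonnegative-orthant) case, the direction solves min over z in C of max_i g_i.(z - x) + 0.5*|z - x|^2, a common formulation; the problem data are made up.

```python
import numpy as np
from scipy.optimize import minimize

def projected_gradient_direction(x, grads, lower, upper):
    """Direction v(x) = z* - x, where z* minimizes
    max_i g_i.(z - x) + 0.5*|z - x|^2 over the box [lower, upper],
    solved in epigraph form with variables w = (z, t)."""
    n = x.size
    cons = [
        {"type": "ineq",
         "fun": (lambda w, g=g: w[-1] - g @ (w[:n] - x)
                 - 0.5 * np.sum((w[:n] - x) ** 2))}
        for g in grads
    ]
    bounds = list(zip(lower, upper)) + [(None, None)]  # t is free
    res = minimize(lambda w: w[-1], np.append(x, 0.0),
                   constraints=cons, bounds=bounds, method="SLSQP")
    return res.x[:n] - x

# f1 = x1^2 + x2^2 and f2 = (x1 - 2)^2 + x2^2 over the box [0, 2]^2,
# at the feasible point x = (0, 1):
x = np.array([0.0, 1.0])
grads = [np.array([0.0, 2.0]), np.array([-4.0, 2.0])]
v = projected_gradient_direction(x, grads, lower=[0.0, 0.0], upper=[2.0, 2.0])
# The direction moves straight down toward the boundary point (0, 0).
```

At a stationary point the subproblem's optimal value is zero and v(x) vanishes, which is how stationarity of cluster points is typically detected.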


Computational Optimization and Applications | 2013

Inexact projected gradient method for vector optimization

Ellen H. Fukuda; L. M. Graña Drummond

In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method we present extends the exact one proposed by Graña Drummond and Iusem, since it admits relative errors on the search directions. At each iteration, a decrease of the objective value is obtained by means of an Armijo-like rule. The convergence results of this new method extend those obtained by Fukuda and Graña Drummond for the exact version. For partial orders induced by both pointed and nonpointed cones, under some reasonable hypotheses, global convergence to weakly efficient points of all sequences generated by the inexact projected gradient method is established for convex (with respect to the ordering cone) objective functions. In the convergence analysis we also establish a connection between the so-called weighting method and the one we propose.


Optimization | 2011

On the convergence of the projected gradient method for vector optimization

Ellen H. Fukuda; L. M. Graña Drummond

In 2004, Graña Drummond and Iusem proposed an extension of the projected gradient method for constrained vector optimization problems. In that method, the step lengths are determined by an Armijo-like rule implemented with a backtracking procedure. The authors showed only stationarity of all cluster points and, for another version of the algorithm (with exogenous step lengths), under some additional assumptions, they proved convergence to weakly efficient solutions. In this work, we first correct a slight mistake in the proof of a certain continuity result in that 2004 article, and then we extend its convergence analysis. Indeed, under some reasonable hypotheses, for convex objective functions with respect to the ordering cone, we establish full convergence to optimal points of any sequence produced by the projected gradient method with an Armijo-like rule, no matter how poor the initial guesses may be.


Optimization | 2014

A quadratically convergent Newton method for vector optimization

L. M. Graña Drummond; Fernanda M. P. Raupp; B. F. Svaiter

We propose a Newton method for solving smooth unconstrained vector optimization problems under partial orders induced by general closed convex pointed cones. The method extends the one proposed by Fliege, Graña Drummond and Svaiter for multicriteria optimization, which in turn is an extension of the classical Newton method for scalar optimization. The steplength is chosen by means of an Armijo-like rule, guaranteeing an objective value decrease at each iteration. Under standard assumptions, we establish superlinear convergence to an efficient point. Additionally, as in the scalar case, assuming Lipschitz continuity of the second derivative of the objective vector-valued function, we prove q-quadratic convergence.


Optimization | 2002

The central path in smooth convex semidefinite programs

L. M. Graña Drummond; Ya'acov Peterzil

In this paper we study the well-definedness of the central path associated with a nonlinear convex semidefinite programming problem with smooth objective and constraint functions. Under standard assumptions, we prove that the existence of the central path is equivalent to the nonemptiness and boundedness of the optimal set. Other equivalent conditions are given, such as the existence of a strictly dual feasible point or the existence of a single central point. The monotonic behavior of the primal and dual logarithmic barriers and of the primal and dual objective functions along the trajectory is also discussed. The existence and optimality of cluster points is established and, finally, under the additional assumption of analyticity of the data functions, the convergence of the primal-dual trajectory is proved.


Journal of Optimization Theory and Applications | 1999

On well definedness of the central path

L. M. Graña Drummond; B. F. Svaiter

We study the well definedness of the central path for a linearly constrained convex programming problem with smooth objective function. We prove that, under standard assumptions, existence of the central path is equivalent to the nonemptiness and boundedness of the optimal set. Other equivalent conditions are given. We show that, under an additional assumption on the objective function, the central path converges to the analytic center of the optimal set.
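
A minimal numeric illustration of a central path (illustrative, not from the paper): for min (x + 1)^2 subject to x >= 0, the optimal set is {0}, and the log-barrier minimizers x(mu) should decrease monotonically to it as mu -> 0.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def central_point(mu):
    # Minimizer of the log-barrier penalized objective f(x) - mu*log(x).
    barrier = lambda x: (x + 1.0) ** 2 - mu * np.log(x)
    return minimize_scalar(barrier, bounds=(1e-12, 10.0), method="bounded").x

path = [central_point(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
# The path decreases monotonically toward the optimum x* = 0; in closed
# form, x(mu) = (sqrt(1 + 2*mu) - 1) / 2.
```

Here the optimal set {0} is nonempty and bounded, so, consistent with the theorem, every barrier subproblem has a unique minimizer and the path converges.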


Mathematical Programming | 2007

On some properties and an application of the logarithmic barrier method

Regina S. Burachik; L. M. Graña Drummond; Susana Scheimberg

We analyze the logarithmic barrier method for nonsmooth convex optimization in the setting of point-to-set theory. This general framework allows us to both extend and include classical results. We also propose an application for finding efficient points of nonsmooth constrained convex vector-valued problems.


Optimization | 2017

On the choice of special Pareto points

L. M. Graña Drummond

We propose two strategies for choosing Pareto solutions of constrained multiobjective optimization problems. The first one, for general problems, furnishes balanced optima, i.e. feasible points that, in some sense, have the closest image to the vector whose coordinates are the objective components' infima. It consists of solving a single scalar-valued problem, whose objective requires the use of a monotonic function which can be chosen within a large class of functions. The second one, for practical problems for which there is a preference among the objective's components to be minimized, gives us points that satisfy this order criterion. The procedure requires the sequential minimization of all these functions. We also study other special Pareto solutions, the sub-balanced points, which are a generalization of the balanced optima.
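
As a hedged sketch of the first strategy, one natural monotonic choice is a Chebyshev-type function: minimize the largest deviation of the objectives from their componentwise infima. The problem data below are illustrative and not from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

f1 = lambda x: x ** 2           # infimum 0, attained at x = 0
f2 = lambda x: (x - 2.0) ** 2   # infimum 0, attained at x = 2
ideal = np.array([0.0, 0.0])    # vector of componentwise infima

# Chebyshev-type scalarization: largest deviation from the ideal vector.
balance = lambda x: max(f1(x) - ideal[0], f2(x) - ideal[1])
res = minimize_scalar(balance, bounds=(0.0, 2.0), method="bounded")
```

For these two parabolas the balanced point sits midway between the individual minimizers, at x = 1, a feasible point whose image is closest (in this max sense) to the ideal vector.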

Collaboration


Dive into L. M. Graña Drummond's collaborations.

Top Co-Authors

B. F. Svaiter, Instituto Nacional de Matemática Pura e Aplicada
Alfredo N. Iusem, Instituto Nacional de Matemática Pura e Aplicada
Regina S. Burachik, University of South Australia
Susana Scheimberg, Federal University of Rio de Janeiro
Jörg Fliege, University of Southampton