
Publication


Featured research published by Ellen H. Fukuda.


Computational Optimization and Applications | 2013

Inexact projected gradient method for vector optimization

Ellen H. Fukuda; L. M. Graña Drummond

In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method extends the exact one proposed by Graña Drummond and Iusem, since it admits relative errors on the search directions. At each iteration, a decrease of the objective value is obtained by means of an Armijo-like rule. The convergence results of this new method extend those obtained by Fukuda and Graña Drummond for the exact version. For partial orders induced by both pointed and nonpointed cones, under some reasonable hypotheses, global convergence to weakly efficient points of all sequences generated by the inexact projected gradient method is established for objective functions that are convex with respect to the ordering cone. In the convergence analysis, we also establish a connection between the so-called weighting method and the one we propose.
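The Armijo-like rule mentioned in the abstract is, in essence, the classical backtracking sufficient-decrease test from scalar optimization. A minimal single-objective sketch (function names and constants are illustrative, not the authors' implementation):

```python
import numpy as np

def armijo(f, grad_f, x, d, beta=1e-4, tau=0.5, t0=1.0, max_backtracks=50):
    """Backtracking line search: shrink t until the sufficient-decrease
    condition f(x + t*d) <= f(x) + beta * t * <grad f(x), d> holds
    (d is assumed to be a descent direction)."""
    fx = f(x)
    slope = grad_f(x) @ d
    t = t0
    for _ in range(max_backtracks):
        if f(x + t * d) <= fx + beta * t * slope:
            break
        t *= tau
    return t

# toy usage: f(x) = ||x||^2, steepest descent direction d = -grad f(x)
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([1.0, -2.0])
t = armijo(f, grad, x, -grad(x))
# here the full step t = 1 overshoots, so one backtrack gives t = 0.5
```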


Optimization | 2011

On the convergence of the projected gradient method for vector optimization

Ellen H. Fukuda; L. M. Graña Drummond

In 2004, Graña Drummond and Iusem proposed an extension of the projected gradient method for constrained vector optimization problems. In this method, an Armijo-like rule, implemented with a backtracking procedure, determines the step lengths. The authors only showed stationarity of all cluster points and, for another version of the algorithm (with exogenous step lengths), under some additional assumptions, they proved convergence to weakly efficient solutions. In this work, we first correct a slight mistake in the proof of a certain continuity result in that 2004 article, and then we extend its convergence analysis. Indeed, under some reasonable hypotheses, for objective functions that are convex with respect to the ordering cone, we establish full convergence to optimal points of any sequence produced by the projected gradient method with an Armijo-like rule, no matter how poor the initial guesses may be.
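For intuition, here is the scalar special case of a projected gradient method with an Armijo-like backtracking rule, on a box-constrained toy problem. This is a generic sketch of the technique, not the vector-valued algorithm analyzed in the paper:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_gradient(f, grad_f, x, lo, hi, beta=1e-4, iters=100):
    for _ in range(iters):
        g = grad_f(x)
        # search direction: project a full gradient step back onto the box
        d = project_box(x - g, lo, hi) - x
        if np.linalg.norm(d) < 1e-12:
            break  # stationary point reached
        t, fx = 1.0, f(x)
        for _ in range(50):  # Armijo-like backtracking
            if f(x + t * d) <= fx + beta * t * (g @ d):
                break
            t *= 0.5
        x = x + t * d
    return x

# Toy problem: minimize ||x - c||^2 over the box [0, 1]^2, with c outside it
c = np.array([2.0, -0.5])
x_star = projected_gradient(lambda x: np.sum((x - c) ** 2),
                            lambda x: 2 * (x - c),
                            np.array([0.5, 0.5]), 0.0, 1.0)
# the minimizer is the projection of c onto the box, i.e. (1, 0)
```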


SIAM Journal on Optimization | 2012

Differentiable Exact Penalty Functions for Nonlinear Second-Order Cone Programs

Ellen H. Fukuda; Paulo J. S. Silva; Masao Fukushima

We propose a method for solving nonlinear second-order cone programs (SOCPs), based on a continuously differentiable exact penalty function. The penalty function is constructed by incorporating a multiplier estimate in the augmented Lagrangian for SOCPs. Under the nondegeneracy assumption and the strong second-order sufficient condition, we show that a generalized Newton method has global and superlinear convergence. We also present some preliminary numerical experiments.


Pesquisa Operacional | 2014

A SURVEY ON MULTIOBJECTIVE DESCENT METHODS

Ellen H. Fukuda; Luis Mauricio Graña Drummond

We present a rigorous and comprehensive survey on extensions to the multicriteria setting of three well-known scalar optimization algorithms. Multiobjective versions of the steepest descent, the projected gradient, and the Newton methods are analyzed in detail. At each iteration, the search directions of these methods are computed by solving real-valued optimization problems and, in order to guarantee an adequate decrease of the objective values, Armijo-like rules are implemented by means of a backtracking procedure. Under standard assumptions, convergence to Pareto (weak Pareto) optima is established. For the Newton method, superlinear convergence is proved and, assuming Lipschitz continuity of the objectives' second derivatives, it is shown that the rate is quadratic.
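In multiobjective steepest descent, the search direction solves a real-valued min-max subproblem; its dual form takes the negative of the minimum-norm element of the convex hull of the gradients. For two objectives that minimum has a closed form, which the sketch below uses (names and the two-objective restriction are illustrative):

```python
import numpy as np

def biobjective_descent_direction(g1, g2):
    """Common descent direction for two objectives: the negative of the
    minimum-norm convex combination of the two gradients (dual form of
    the min-max subproblem in multiobjective steepest descent)."""
    diff = g1 - g2
    denom = diff @ diff
    # minimize ||lam*g1 + (1-lam)*g2||^2 over lam in [0, 1] (closed form)
    lam = 0.5 if denom == 0 else float(np.clip(-(diff @ g2) / denom, 0.0, 1.0))
    return -(lam * g1 + (1 - lam) * g2)

g1 = np.array([1.0, 0.0])   # gradient of the first objective
g2 = np.array([0.0, 1.0])   # gradient of the second objective
d = biobjective_descent_direction(g1, g2)
# d = (-0.5, -0.5): a nonpositive inner product with both gradients,
# so it is a descent direction for both objectives simultaneously
```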


Journal of Optimization Theory and Applications | 2013

A Gauss-Newton Approach for Solving Constrained Optimization Problems Using Differentiable Exact Penalties

Roberto Andreani; Ellen H. Fukuda; Paulo J. S. Silva

We propose a Gauss–Newton-type method for nonlinear constrained optimization using the exact penalty introduced recently by André and Silva for variational inequalities. We extend their penalty function to both equality and inequality constraints using a weak regularity assumption, and as a result, we obtain a continuously differentiable exact penalty function and a new reformulation of the KKT conditions as a system of equations. Such a reformulation allows the use of a semismooth Newton method, so that a local superlinear convergence rate can be proved under an assumption weaker than the usual strong second-order sufficient condition and without requiring strict complementarity. Moreover, the exact penalty function can be used to globalize the method. We conclude with some numerical experiments using the CUTE collection of test problems.


Journal of Optimization Theory and Applications | 2016

The Use of Squared Slack Variables in Nonlinear Second-Order Cone Programming

Ellen H. Fukuda; Masao Fukushima

In traditional nonlinear programming, the technique of converting a problem with inequality constraints into a problem containing only equality constraints, by the addition of squared slack variables, is well known. Unfortunately, the technique is usually avoided in the optimization community, since its advantages rarely compensate for its disadvantages, such as the increase in the dimension of the problem, numerical instabilities, and singularities. However, in the context of nonlinear second-order cone programming, the situation changes, because the reformulated problem with squared slack variables no longer has conic constraints. This fact allows us to solve the problem by using a general-purpose nonlinear programming solver. The objective of this work is to establish the relation between Karush–Kuhn–Tucker points of the original and the reformulated problems by means of second-order sufficient conditions and regularity conditions. We also present some preliminary numerical experiments.
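The classical squared-slack trick replaces an inequality g(x) <= 0 by the equality g(x) + y^2 = 0. A minimal scalar illustration of the correspondence between the two formulations (the toy problem and names are illustrative, and this is the plain NLP case, not the conic one studied in the paper):

```python
import numpy as np

# Toy problem: minimize (x - 2)^2  subject to  g(x) = x - 1 <= 0.
# Squared-slack reformulation: minimize (x - 2)^2  s.t.  x - 1 + y^2 = 0.

g = lambda x: x - 1.0

def to_slack_point(x):
    """Map a feasible point of the inequality problem to a feasible
    point (x, y) of the equality-constrained reformulation."""
    return x, np.sqrt(-g(x))

x_star = 1.0                         # minimizer of the inequality problem
x, y = to_slack_point(x_star)
assert abs(g(x) + y ** 2) < 1e-12    # feasible for the reformulation
# At a KKT point of the reformulation, the condition 2*mu*y = 0 plays the
# role of the complementarity condition mu*g(x) = 0 in the original problem.
```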


Computational Optimization and Applications | 2018

Exact augmented Lagrangian functions for nonlinear semidefinite programming

Ellen H. Fukuda; Bruno F. Lourenço

In this paper, we study augmented Lagrangian functions with exactness properties for nonlinear semidefinite programming (NSDP) problems. The term exact is used in the sense that the penalty parameter can be taken appropriately, so that a single minimization of the augmented Lagrangian recovers a solution of the original problem. This leads to reformulations of NSDP problems as unconstrained nonlinear programming ones. Here, we first establish a unified framework for constructing these exact functions, generalizing Di Pillo and Lucidi's work from 1996, which was aimed at solving nonlinear programming problems. Then, through our framework, we propose a practical augmented Lagrangian function for NSDP, proving that it is continuously differentiable and exact under the so-called nondegeneracy condition. We also present some preliminary numerical experiments.
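As background, the classical (non-exact) augmented Lagrangian alternates an unconstrained minimization with a multiplier update; exactness means a single well-parameterized minimization would suffice. A sketch of the classical loop for one scalar equality constraint (toy problem, step size, and parameters are illustrative, not the NSDP function proposed in the paper):

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_grad, x, lam=0.0, c=10.0,
                         outer=20, inner=200):
    """Classical augmented Lagrangian loop for one equality constraint
    h(x) = 0: minimize L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2 in x,
    then update the multiplier lam <- lam + c*h(x)."""
    for _ in range(outer):
        for _ in range(inner):  # inner minimization by gradient descent
            grad = f_grad(x) + (lam + c * h(x)) * h_grad(x)
            x -= grad / (2.0 + c)  # step sized for this quadratic toy problem
        lam += c * h(x)
    return x, lam

# Toy problem: minimize (x - 2)^2  subject to  h(x) = x - 1 = 0
x, lam = augmented_lagrangian(lambda x: 2 * (x - 2.0),  # gradient of f
                              lambda x: x - 1.0,        # constraint h
                              lambda x: 1.0,            # gradient of h
                              x=0.0)
# x -> 1 (the constrained minimizer), lam -> 2 (the KKT multiplier)
```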


Mathematical Programming | 2018

Optimality conditions for nonlinear semidefinite programming via squared slack variables

Bruno F. Lourenço; Ellen H. Fukuda; Masao Fukushima


TOP | 2016

An external penalty-type method for multicriteria

Ellen H. Fukuda; L. M. Graña Drummond; Fernanda M. P. Raupp


Journal of the Operations Research Society of Japan | 2017

A NOTE ON THE SQUARED SLACK VARIABLES TECHNIQUE FOR NONLINEAR OPTIMIZATION

Ellen H. Fukuda; Masao Fukushima

Collaboration


Ellen H. Fukuda's main collaborators.

Top Co-Authors

L. M. Graña Drummond

Federal University of Rio de Janeiro


Roberto Andreani

State University of Campinas
