Elizabeth W. Karas
Federal University of Paraná
Publications
Featured research published by Elizabeth W. Karas.
SIAM Journal on Optimization | 2003
Clovis C. Gonzaga; Elizabeth W. Karas; Márcia Vanti
In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a feasibility phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally independent, and the only coupling between them is provided by the filter. The method is independent of the internal algorithms used in each iteration, as long as these algorithms satisfy reasonable assumptions on their efficiency. Under standard hypotheses, we show two results: for a filter with minimum size, the algorithm generates a stationary accumulation point; for a slightly larger filter, all accumulation points are stationary.
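As background for the filter mechanism, here is a minimal sketch of one common acceptance rule built on (infeasibility, objective) pairs; the margin parameter alpha and the helper names are illustrative and not taken from the paper.

```python
def acceptable(h_new, f_new, filter_pairs, alpha=1e-4):
    """Return True if a trial point with infeasibility h_new and
    objective f_new is not forbidden by the filter: it must improve
    on at least one of the two measures, by a small margin, with
    respect to every pair already stored."""
    return all(h_new <= (1.0 - alpha) * h or f_new <= f - alpha * h
               for (h, f) in filter_pairs)

def add_to_filter(h, f, filter_pairs):
    """Store a new pair and discard any pairs it dominates."""
    filter_pairs[:] = [(hj, fj) for (hj, fj) in filter_pairs
                       if hj < h or fj < f]
    filter_pairs.append((h, f))
```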
Mathematical Programming | 2008
Elizabeth W. Karas; Ademir A. Ribeiro; Claudia A. Sagastizábal; Mikhail V. Solodov
For solving nonsmooth convex constrained optimization problems, we propose an algorithm which combines the ideas of the proximal bundle methods with the filter strategy for evaluating candidate points. The resulting algorithm inherits some attractive features from both approaches. On the one hand, it allows effective control of the size of quadratic programming subproblems via the compression and aggregation techniques of proximal bundle methods. On the other hand, the filter criterion for accepting a candidate point as the new iterate is sometimes easier to satisfy than the usual descent condition in bundle methods. Some encouraging preliminary computational results are also reported.
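For reference, the quadratic programming subproblem that bundle compression and aggregation keep small has, in one standard textbook form (notation assumed, not taken from the paper):

```latex
y_{k+1} \in \arg\min_{y}\ \check f_k(y) + \frac{1}{2 t_k}\,\lVert y - x_k \rVert^2,
\qquad
\check f_k(y) = \max_{j \in B_k}\bigl\{ f(x_j) + \langle g_j,\, y - x_j \rangle \bigr\},
\quad g_j \in \partial f(x_j),
```

where x_k is the current stability center, t_k > 0 is the proximal parameter, and B_k indexes the bundle of cutting planes.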
SIAM Journal on Optimization | 2008
Ademir A. Ribeiro; Elizabeth W. Karas; Clovis C. Gonzaga
We present a general filter algorithm that allows a great deal of freedom in the step computation. Each iteration of the algorithm consists essentially of computing, from the current point, a point which is not forbidden by the filter. We prove its global convergence, assuming that the step is efficient in the sense that, near a feasible nonstationary point, the reduction of the objective function is “large.” We show that this condition is reasonable by presenting two classical ways of performing the step which satisfy it. In the first, the step is obtained by the inexact restoration method of Martínez and Pilotta; in the second, it is computed by sequential quadratic programming.
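The paper's level of abstraction can be pictured as the loop skeleton below, with the step rule as a plug-in; the filter bookkeeping (in particular, when and which pairs enter the filter) is deliberately simplified here, and the acceptance test is the one sketched earlier.

```python
def general_filter_method(x, f, h, compute_step, acceptable, max_iter=100):
    """Skeleton of a general filter algorithm: the step computation is
    a black box (e.g. inexact restoration or SQP), and the filter is
    the only coupling between iterations.  Filter management is
    simplified for illustration."""
    filter_pairs = []
    for _ in range(max_iter):
        x_trial = compute_step(x, filter_pairs)  # any "efficient" step rule
        if acceptable(h(x_trial), f(x_trial), filter_pairs):
            filter_pairs.append((h(x), f(x)))    # simplified filter update
            x = x_trial
    return x
```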
Mathematical Programming | 2013
Clovis C. Gonzaga; Elizabeth W. Karas
We modify the first-order algorithm for convex programming described by Nesterov in his book (Introductory Lectures on Convex Optimization: A Basic Course, Kluwer, Boston, 2004). In his algorithms, Nesterov makes explicit use of a Lipschitz constant L for the function gradient, which is either assumed known (Nesterov 2004) or estimated by an adaptive procedure (Nesterov 2007). We eliminate the use of L at the cost of an extra imprecise line search, and obtain an algorithm which keeps the optimal complexity properties and also inherits the global convergence properties of the steepest descent method for general continuously differentiable optimization. Besides this, we develop an adaptive procedure for estimating a strong convexity constant for the function. Numerical tests for a limited set of toy problems show an improvement in performance when compared with Nesterov’s original algorithms.
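To illustrate how a line search can replace a known Lipschitz constant, here is a minimal accelerated-gradient sketch with the standard FISTA-style backtracking of Beck and Teboulle; it is not the authors' algorithm, whose line search and strong-convexity estimation differ.

```python
import numpy as np

def accel_gradient_backtracking(f, grad, x0, iters=100, L0=1.0, eta=2.0):
    """Accelerated gradient method where the unknown Lipschitz constant
    L is estimated by backtracking: L is increased until the quadratic
    upper-bound model holds at the trial point."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        g = grad(y)
        while True:                      # backtrack on L
            x_new = y - g / L
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * np.sum(d * d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```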
Mathematical Programming | 2005
J. Charles Gilbert; Clovis C. Gonzaga; Elizabeth W. Karas
This paper presents some examples of ill-behaved central paths in convex optimization. Some contain infinitely many fixed-length central segments; others manifest oscillations with infinite variation. These central paths can be encountered even for infinitely differentiable data.
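For reference, the object under study is the log-barrier central path: for a convex objective f, convex constraints g_i(x) ≤ 0, and barrier parameter μ > 0, the standard definition is

```latex
x(\mu) = \arg\min_{x}\ f(x) - \mu \sum_{i=1}^{m} \ln\bigl(-g_i(x)\bigr),
```

and the central path is the curve μ ↦ x(μ) as μ ↓ 0; the examples show that this curve can behave badly even for smooth data.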
Applied Mathematics and Computation | 2008
Elizabeth W. Karas; Ana P. Oening; Ademir A. Ribeiro
In this paper, we present a general algorithm for nonlinear programming which uses a slanting filter criterion for accepting the new iterates. Independently of how these iterates are computed, we prove that all accumulation points of the sequence generated by the algorithm are feasible. When the new iterates are computed by the inexact restoration method, we also prove stationarity of all accumulation points of the sequence.
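One common form of the slanting filter test, usually attributed to Chin and Fletcher (the constants here are illustrative), accepts a trial point x if, for every pair (h_j, f_j) stored in the filter,

```latex
h(x) \le (1 - \gamma)\, h_j
\qquad \text{or} \qquad
f(x) \le f_j - \gamma\, h(x),
```

with infeasibility measure h and a small margin γ ∈ (0, 1). The “slant” is that the margin on f is proportional to the trial point's own infeasibility h(x), rather than to h_j as in the original filter.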
Computational Optimization and Applications | 2009
Elizabeth W. Karas; Elvio A. Pilotta; Ademir A. Ribeiro
Inexact Restoration methods have been introduced for solving nonlinear programming problems. Each iteration is composed of two phases. The first one reduces a measure of infeasibility, while in the second one the objective function value is reduced in a tangential approximation of the feasible set. The point obtained from the second phase is compared with the current point either by means of a merit function or by using a filter criterion. A comparative numerical study of these criteria, using a family of Hard-Spheres problems, is presented.
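The merit function alternative mentioned above is typically a convex combination of objective and infeasibility; one common form (notation illustrative) is

```latex
\phi(x, \theta) = \theta\, f(x) + (1 - \theta)\, h(x), \qquad \theta \in (0, 1],
```

where h measures infeasibility and the penalty weight θ is updated along the iterations; the filter criterion replaces this single scalar comparison by a pairwise dominance test on the pairs (h, f).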
Optimization Methods & Software | 2015
P. D. Conejo; Elizabeth W. Karas; Lucas G. Pedroso
We propose a trust-region algorithm for constrained optimization problems in which the derivatives of the objective function are not available. In each iteration, the objective function is approximated by a model obtained by quadratic interpolation, which is then minimized within the intersection of the feasible set with the trust region. Since the constraints are handled in the trust-region subproblems, all the iterates are feasible even if some interpolation points are not. The rules for constructing and updating the quadratic model and the interpolation set use ideas from the BOBYQA software, a widely used algorithm for box-constrained problems. The subproblems are solved by ALGENCAN, a competitive implementation of an Augmented Lagrangian approach for problems with general constraints. Some numerical results for the Hock–Schittkowski collection are presented, followed by a performance comparison between our proposal and three derivative-free algorithms found in the literature.
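To make the model-building step concrete, here is a minimal least-squares sketch of quadratic interpolation; it only illustrates the interpolation conditions, whereas BOBYQA-type codes use minimum-Frobenius-norm updates and careful geometry management of the interpolation set.

```python
import numpy as np

def quadratic_model(points, fvals):
    """Fit m(s) = c + g @ s + 0.5 * s @ H @ s, with s = y - y0, to the
    interpolation conditions m(y_i - y0) = f(y_i) by (least-squares)
    solving one linear system in the model coefficients."""
    y0 = points[0]
    n = len(y0)
    rows, rhs = [], []
    for y, fy in zip(points, fvals):
        s = y - y0
        # unknowns: c, the n entries of g, the upper triangle of H
        quad = [s[i] * s[j] * (0.5 if i == j else 1.0)
                for i in range(n) for j in range(i, n)]
        rows.append(np.concatenate(([1.0], s, quad)))
        rhs.append(fy)
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    c, g = coef[0], coef[1:n + 1]
    H = np.zeros((n, n))
    idx = n + 1
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = coef[idx]
            idx += 1
    return c, g, H
```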
SIAM Journal on Optimization | 2013
Clovis C. Gonzaga; Elizabeth W. Karas; Diane R. Rossetto
We describe two algorithms for solving differentiable convex optimization problems constrained to simple sets in $\mathbb{R}^n$.
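The abstract is cut off at this point. As background, a “simple set” in this context usually means one onto which the Euclidean projection is cheap to compute; a box is the canonical example:

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box [lower, upper]: the closest
    point in the box is obtained componentwise by clipping."""
    return np.minimum(np.maximum(x, lower), upper)
```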
Optimization | 2010
Elizabeth W. Karas; Clovis C. Gonzaga; Ademir A. Ribeiro