Publications


Featured research published by Gabriel Haeser.


Optimization | 2011

On sequential optimality conditions for smooth constrained optimization

Roberto Andreani; Gabriel Haeser; José Mario Martínez

Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate Karush–Kuhn–Tucker and approximate gradient projection conditions are analysed in this work. These conditions are not necessarily equivalent; we establish the implications between them, present counter-examples where equivalence fails, and discuss the algorithmic consequences.
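
For reference, a sketch of the AKKT condition in its standard form (the notation below is assumed here, not quoted from the paper): for the problem
\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad h(x) = 0, \; g(x) \le 0,
\]
a feasible point \(x^*\) satisfies AKKT if there exist sequences \(x^k \to x^*\), \(\mu^k \in \mathbb{R}^m\) and \(\lambda^k \in \mathbb{R}^p_+\) such that
\[
\nabla f(x^k) + \nabla h(x^k)\,\mu^k + \nabla g(x^k)\,\lambda^k \to 0
\quad \text{and} \quad
\lambda^k_i = 0 \text{ whenever } g_i(x^*) < 0 .
\]
A solver can thus stop when both residuals fall below a tolerance, without verifying any constraint qualification.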


Mathematical Programming | 2012

A relaxed constant positive linear dependence constraint qualification and applications

Roberto Andreani; Gabriel Haeser; María Laura Schuverdt; Paulo J. S. Silva

In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD) that we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification by Minchenko and Stakhovski that was called RCRCQ. We show that RCPLD is enough to ensure the convergence of an augmented Lagrangian algorithm and that it asserts the validity of an error bound. We also provide proofs and counter-examples that show the relations of RCRCQ and RCPLD with other known constraint qualifications. In particular, RCPLD is strictly weaker than CPLD and RCRCQ, while still stronger than Abadie’s constraint qualification. We also verify that the second order necessary optimality condition holds under RCRCQ.
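
The relations established in the paper can be summarized in the following implication diagram (with the abbreviations used above):
\[
\text{CPLD} \Rightarrow \text{RCPLD}, \qquad
\text{RCRCQ} \Rightarrow \text{RCPLD}, \qquad
\text{RCPLD} \Rightarrow \text{Abadie's CQ},
\]
where the implications into and out of RCPLD are strict.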


SIAM Journal on Optimization | 2012

Two New Weak Constraint Qualifications and Applications

Roberto Andreani; Gabriel Haeser; María Laura Schuverdt; Paulo J. S. Silva

We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, ...
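
The relations claimed in the abstract can be summarized as a chain of strictly weaker conditions:
\[
\text{RCPLD} \;\Longrightarrow\; \text{CRSC} \;\Longrightarrow\; \text{CPG},
\]
with CRSC retaining the error bound and local stability properties of RCPLD, and CPG still sufficing for the global convergence analyses of the algorithm classes listed above.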


Journal of Optimization Theory and Applications | 2011

On Approximate KKT Condition and its Extension to Continuous Variational Inequalities

Gabriel Haeser; María Laura Schuverdt

In this work, we introduce a necessary sequential Approximate-Karush-Kuhn-Tucker (AKKT) condition for a point to be a solution of a continuous variational inequality, and we prove its relation to the Approximate Gradient Projection (AGP) condition of Gárciga-Otero and Svaiter. We also prove that a slight variation of the AKKT condition is sufficient for a convex problem, whether a variational inequality or an optimization problem. Sequential necessary conditions are better suited to iterative methods than the usual pointwise conditions, which rely on constraint qualifications. The AKKT property holds at a solution independently of the fulfillment of a constraint qualification, but when a weak one holds, we can guarantee the validity of the KKT conditions.
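
For orientation, the continuous variational inequality in question can be stated as (notation assumed): find \(x^* \in \Omega\) such that
\[
\langle F(x^*),\, x - x^* \rangle \ge 0 \quad \text{for all } x \in \Omega,
\]
where \(\Omega\) is described by smooth equality and inequality constraints. The AKKT condition for this setting is obtained from the optimization case by replacing the objective gradient \(\nabla f(x^k)\) in the stationarity residual with \(F(x^k)\).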


Computational Optimization and Applications | 2018

Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points

Ernesto G. Birgin; Gabriel Haeser; Alberto Ramos

Augmented Lagrangian methods with convergence to second-order stationary points, in which any constraint can be penalized or carried over to the subproblems, are considered in this work. Each subproblem can be solved by any numerical algorithm able to return approximate second-order stationary points. The global convergence theory developed here is stronger than those known for current algorithms with convergence to second-order points: besides the flexibility introduced by the general lower-level approach, it includes a loose requirement for the resolution of subproblems. The proposed approach relies on a weak constraint qualification that allows Lagrange multipliers to be unbounded at the solution. It is also shown that second-order resolution of subproblems increases the chances of finding a feasible point, in the sense that limit points are second-order stationary for the problem of minimizing the squared infeasibility. The applicability of the proposed method is illustrated in numerical examples with ball-constrained subproblems.
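
A minimal sketch of the outer loop of such a method, assuming NumPy/SciPy, equality constraints only, and illustrative function names and tolerances; a generic SciPy solver stands in for the second-order subproblem solver the paper requires:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, lower_level=(), rho=10.0, gamma=10.0,
                         tol=1e-8, max_outer=50):
    """Sketch of a PHR augmented-Lagrangian outer loop in which the
    equality constraints h(x) = 0 are penalized, while the constraints in
    `lower_level` (e.g. a ball constraint) are carried over to the
    subproblem solver, mirroring the paper's lower-level approach."""
    x = np.asarray(x0, dtype=float)
    mu = np.zeros_like(h(x))                  # multiplier estimates; assumes h returns an ndarray
    for _ in range(max_outer):
        # PHR augmented Lagrangian of the penalized constraints
        L = lambda z: f(z) + mu @ h(z) + 0.5 * rho * np.sum(h(z) ** 2)
        x = minimize(L, x, constraints=list(lower_level)).x
        hx = h(x)
        if np.linalg.norm(hx) <= tol:         # feasible enough: stop
            break
        mu = mu + rho * hx                    # first-order multiplier update
        rho *= gamma                          # tighten the penalty
    return x, mu
```

A production method would use safeguarded multiplier updates, increase the penalty only when infeasibility stalls, and call a subproblem solver that certifies approximate second-order stationarity, as the paper requires.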


Optimization Methods & Software | 2017

On second-order optimality conditions in nonlinear optimization

Roberto Andreani; Roger Behling; Gabriel Haeser; Paulo J. S. Silva

In this work we present new weak conditions that ensure the validity of necessary second-order optimality conditions (SOC) for nonlinear optimization. We are able to prove that weak and strong SOCs hold for all Lagrange multipliers using Abadie-type assumptions. We also prove weak and strong SOCs for at least one Lagrange multiplier imposing the Mangasarian–Fromovitz constraint qualification and a weak constant rank assumption.
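
Concretely, with \(L\) the Lagrangian (notation assumed here), the weak SOC at a KKT point \(x^*\) with multipliers \((\lambda, \mu)\) reads
\[
d^\top \nabla^2_{xx} L(x^*, \lambda, \mu)\, d \ge 0
\quad \text{for all } d \text{ with } \nabla h_i(x^*)^\top d = 0 \text{ and } \nabla g_j(x^*)^\top d = 0,\; j \in A(x^*),
\]
where \(A(x^*)\) indexes the active inequality constraints. The strong SOC requires the same inequality over the larger critical cone, which relaxes \(\nabla g_j(x^*)^\top d = 0\) to \(\le 0\) for active constraints with zero multiplier.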


Computational Optimization and Applications | 2018

A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms

Gabriel Haeser

We develop a new notion of second-order complementarity with respect to the tangent subspace related to second-order necessary optimality conditions, by introducing so-called tangent multipliers. We prove that around a local minimizer, a second-order stationarity residual can be driven to zero while controlling the growth of Lagrange multipliers and tangent multipliers, which gives a new second-order optimality condition without constraint qualifications that is stronger than previous ones associated with the global convergence of algorithms. We prove that second-order variants of augmented Lagrangian methods (under an additional smoothness assumption based on the Łojasiewicz inequality) and interior point methods generate sequences satisfying our optimality condition. We also present a companion minimal constraint qualification, weaker than the ones known for second-order methods, that ensures the usual global convergence results to a classical second-order stationary point. Finally, our optimality condition naturally suggests a definition of second-order stationarity suitable for the computation of iteration complexity bounds and for the definition of stopping criteria.
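
Schematically, and with notation assumed here rather than quoted from the paper, such a condition asks for sequences \(x^k \to x^*\), multipliers \((\lambda^k, \mu^k)\) and tolerances \(\varepsilon_k \downarrow 0\) with
\[
\|\nabla_x L(x^k, \lambda^k, \mu^k)\| \le \varepsilon_k
\quad \text{and} \quad
d^\top \nabla^2_{xx} L(x^k, \lambda^k, \mu^k)\, d \ge -\varepsilon_k \|d\|^2
\;\; \text{for all } d \in S_k,
\]
where \(S_k\) approximates the tangent subspace at \(x^k\); the paper's tangent multipliers govern how the subspaces \(S_k\) and the multiplier growth interact, and its precise definition should be taken from the paper itself.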


Journal of Optimization Theory and Applications | 2015

A Flexible Inexact-Restoration Method for Constrained Optimization

Luis Felipe Bueno; Gabriel Haeser; José Mario Martínez

We introduce a new flexible inexact-restoration algorithm for constrained optimization problems. In inexact-restoration methods, each iteration has two phases: the first aims at improving feasibility, and the second at minimizing a suitable objective function while imposing a bounded deterioration of the feasibility obtained in the first phase. Here, we combine the basic ideas of the Fischer-Friedlander approach for inexact-restoration with the use of approximations of the Lagrange multipliers. We present a new option to obtain a range of search directions in the optimization phase, and we employ the sharp Lagrangian as the merit function. Furthermore, we introduce a flexible way to handle sufficient decrease requirements and an efficient way to deal with the penalty parameter. Global convergence of the new inexact-restoration method to KKT points is proved under weak constraint qualifications.
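
A bare-bones skeleton of the two-phase iteration, assuming user-supplied restoration and tangent-step oracles (all names hypothetical) and using an l1 penalty as a simple stand-in for the sharp Lagrangian merit function of the paper:

```python
import numpy as np

def inexact_restoration(f, h, restore, tangent_step, x0, max_iter=100, tol=1e-8):
    """Two-phase inexact-restoration skeleton.
    restore(x): phase 1, returns y with smaller infeasibility ||h(y)||.
    tangent_step(y): phase 2, returns a direction that reduces the objective
    while allowing only a bounded deterioration of feasibility."""
    merit = lambda z: f(z) + np.linalg.norm(h(z), 1)  # stand-in merit function
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = restore(x)                       # feasibility phase
        d = tangent_step(y)                  # optimality phase
        t = 1.0                              # backtracking for sufficient decrease
        while merit(y + t * d) > merit(y) - 1e-4 * t * np.dot(d, d) and t > 1e-12:
            t *= 0.5
        x_next = y + t * d
        if np.linalg.norm(x_next - x) <= tol and np.linalg.norm(h(x_next)) <= tol:
            return x_next
        x = x_next
    return x
```

The paper's flexibility lies precisely in the parts elided here: the range of admissible search directions in the optimality phase, the relaxed sufficient decrease test, and the handling of the penalty parameter.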


Computational & Applied Mathematics | 2010

On the global convergence of interior-point nonlinear programming algorithms

Gabriel Haeser

Carathéodory's lemma states that if we have a linear combination of vectors in ℝⁿ, we can rewrite this combination using a linearly independent subset. This lemma has been successfully applied in nonlinear optimization in many contexts. In this work we present a new version of this celebrated result, in which we obtain new bounds for the size of the coefficients in the linear combination, and we provide examples where these bounds are useful. We show how these new bounds can be used to prove that the internal penalty method converges to KKT points, and we prove that the hypotheses needed to obtain this result cannot be weakened. The new bounds also provide new convergence results for the quasi-feasible interior point l2-penalty method of Chen and Goldfarb [7].
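
For reference, the conic form of Carathéodory's lemma that underlies this line of work (the statement is standard; the paper's contribution is the bounds on the new coefficients):
\[
v = \sum_{i=1}^{m} \alpha_i v_i,\; \alpha_i > 0,\; v_i \in \mathbb{R}^n
\;\;\Longrightarrow\;\;
\exists\, I \subseteq \{1,\dots,m\}:\;
\{v_i\}_{i \in I} \text{ linearly independent},\;
v = \sum_{i \in I} \bar{\alpha}_i v_i \text{ with } \bar{\alpha}_i > 0 .
\]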


Journal of Optimization Theory and Applications | 2018

On a conjecture in second-order optimality conditions

Roger Behling; Gabriel Haeser; Alberto Ramos; Daiana S. Viana

In this paper, we deal with a conjecture formulated in Andreani et al. (Optimization 56:529–542, 2007), which states that whenever a local minimizer of a nonlinear optimization problem fulfills the Mangasarian–Fromovitz constraint qualification and the rank of the set of gradients of active constraints increases at most by one in a neighborhood of the minimizer, a second-order optimality condition that depends on one single Lagrange multiplier is satisfied. This conjecture generalizes previous results under a constant rank assumption or under a rank deficiency of at most one. We prove the conjecture under the additional assumption that the Jacobian matrix has a smooth singular value decomposition. Our proof also extends to the case of the strong second-order condition, defined in terms of the critical cone instead of the critical subspace.
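
In the notation used above, the conclusion of the conjecture is that there exists a single Lagrange multiplier \((\lambda, \mu)\) at the minimizer \(x^*\) with
\[
d^\top \nabla^2_{xx} L(x^*, \lambda, \mu)\, d \ge 0 \quad \text{for all } d \in S,
\]
where \(S\) is the critical subspace; the strong variant proved in the paper replaces \(S\) by the critical cone.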

Collaboration


Dive into Gabriel Haeser's collaborations.

Top Co-Authors

Roberto Andreani, State University of Campinas
Alberto Ramos, Federal University of Paraná
Luis Felipe Bueno, Federal University of São Paulo
A. Ramos, University of São Paulo
Fabio Antonio Dorini, Federal University of Technology - Paraná