Max L. N. Gonçalves
Universidade Federal de Goiás
Publications
Featured research published by Max L. N. Gonçalves.
Journal of Complexity | 2011
O. P. Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
The Gauss-Newton method for solving nonlinear least squares problems is studied in this paper. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, a local convergence analysis is presented. This analysis allows us to obtain the optimal convergence radius and the largest region of uniqueness of the stationary point, and to unify two previous and unrelated results.
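The classical iteration the paper analyzes can be sketched as follows; the function names, tolerances, and the toy system are illustrative choices, not taken from the paper.

```python
import numpy as np

def gauss_newton(r, jac, x0, tol=1e-10, max_iter=50):
    """Plain Gauss-Newton for min 0.5 * ||r(x)||^2.

    r: residual map R^n -> R^m, jac: its Jacobian. The stopping rule
    (small step norm) is an illustrative choice."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Linearized least-squares subproblem: jac(x) dx ~= -r(x)
        dx, *_ = np.linalg.lstsq(jac(x), -r(x), rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy system r(x) = (x1^2 + x2 - 1, x1 + x2^2 - 1); starting from the
# symmetric point (0.8, 0.8), the iterates converge to the symmetric
# root x1 = x2 = (sqrt(5) - 1) / 2.
r = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] + x[1]**2 - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
sol = gauss_newton(r, jac, np.array([0.8, 0.8]))
```

Near a solution with full-rank Jacobian, the iteration inherits the fast local convergence that the majorant-condition analysis quantifies.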
Siam Journal on Optimization | 2013
Orizon Perreira Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
Under the hypothesis that an initial point is a quasi-regular point, we use a majorant condition to present a new semilocal convergence analysis of an extension of the Gauss-Newton method for solving convex composite optimization problems. In this analysis the conditions and proof of convergence are simplified by using a simple majorant condition to define regions where a Gauss-Newton sequence is well behaved.
Journal of Computational and Applied Mathematics | 2012
O. P. Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
In this paper, we present a local convergence analysis of inexact Gauss-Newton like methods for solving nonlinear least squares problems. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, we obtain that the method is well-defined and converges. Our analysis provides a clear relationship between the majorant function and the function associated with the least squares problem. It also allows us to obtain an estimate of the convergence ball for inexact Gauss-Newton like methods and some important special cases.
Computational Optimization and Applications | 2011
O. P. Ferreira; Max L. N. Gonçalves
We present a local convergence analysis of inexact Newton-like methods for solving nonlinear equations under majorant conditions. This analysis provides an estimate of the convergence radius and a clear relationship between the majorant function, which relaxes the Lipschitz continuity of the derivative, and the nonlinear operator under consideration. It also allows us to obtain some important special cases.
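The defining feature of an inexact Newton method is that each linear system is solved only approximately, up to a relative-residual (forcing) tolerance. A minimal sketch, with a CG-on-normal-equations inner solver and made-up defaults, all names hypothetical:

```python
import numpy as np

def cg_inexact_solve(A, b, eta):
    """Approximately solve A s = b by conjugate gradients on the normal
    equations, stopping once ||A s - b|| <= eta * ||b|| (forcing condition)."""
    M, rhs = A.T @ A, A.T @ b
    s = np.zeros_like(b)
    r = rhs - M @ s
    p = r.copy()
    while np.linalg.norm(A @ s - b) > eta * np.linalg.norm(b):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        s = s + alpha * p
        r_new = r - alpha * Mp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return s

def inexact_newton(F, jac, x0, eta=0.1, tol=1e-10, max_iter=100):
    """Inexact Newton: each system jac(x) s = -F(x) is solved only up to
    relative residual eta. A sketch, not the paper's exact scheme."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + cg_inexact_solve(jac(x), -Fx, eta)
    return x

# Same toy system as above: F(x) = (x1^2 + x2 - 1, x1 + x2^2 - 1)
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] + x[1]**2 - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
sol = inexact_newton(F, jac, np.array([0.8, 0.8]))
```

Majorant-condition analyses of this kind bound how large eta may be while preserving local convergence.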
Siam Journal on Optimization | 2017
Max L. N. Gonçalves; Jefferson G. Melo; Renato D. C. Monteiro
This paper describes a regularized variant of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex programs. It is shown that the pointwise iteration-complexity of the new method is better than the corresponding one for the standard ADMM method and that, up to a logarithmic term, it is identical to the ergodic iteration-complexity of the latter method. Our analysis is based on first presenting and establishing the pointwise iteration-complexity of a regularized non-Euclidean hybrid proximal extragradient framework whose error condition at each iteration includes both a relative error and a summable error. It is then shown that the new method is a special instance of the latter framework in which the sequence of summable errors is identically zero when the ADMM stepsize is less than one, or a nontrivial sequence when the stepsize is in the interval [1, (1+√5)/2).
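For readers unfamiliar with ADMM, the basic (unregularized) scheme alternates two subproblem solves with a dual update. A toy consensus instance where both subproblems are closed-form; this is a plain-vanilla sketch, not the regularized variant the paper analyzes:

```python
import numpy as np

def admm_consensus(a, b, rho=1.0, iters=100):
    """ADMM in scaled-dual form for the toy consensus problem
        min 0.5*||x - a||^2 + 0.5*||z - b||^2   s.t.  x - z = 0.
    rho is the penalty parameter; u is the scaled dual variable."""
    x = z = u = np.zeros_like(np.asarray(a, dtype=float))
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # x-minimization step
        z = (b + rho * (x + u)) / (1.0 + rho)  # z-minimization step
        u = u + x - z                          # scaled dual update
    return x, z

# The iterates agree on the consensus point (a + b) / 2
x, z = admm_consensus(np.array([0.0, 0.0]), np.array([2.0, 4.0]))
```

The paper's complexity results concern how fast the primal residual x - z and the subgradient residuals vanish, pointwise versus in the ergodic (averaged) sense.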
Optimization | 2013
Max L. N. Gonçalves; Paulo Roberto Oliveira
In this paper, we study the Gauss-Newton method for a special class of systems of non-linear equations. Under the hypothesis that the derivative of the function under consideration satisfies a majorant condition, a semi-local convergence analysis is presented. In this analysis, the conditions and proof of convergence are simplified by using a simple majorant condition to define regions where the Gauss-Newton sequence is 'well behaved'. Moreover, special cases of the general theory are presented as applications.
Computers & Mathematics With Applications | 2013
Max L. N. Gonçalves
A local convergence analysis of the Gauss-Newton method for solving injective-overdetermined systems of nonlinear equations under a majorant condition is provided. The convergence, as well as results on its rate, is established without a convexity hypothesis on the derivative of the majorant function. The optimal convergence radius and the largest region of uniqueness of the solution, along with some other special cases, are also obtained.
Journal of Computational and Applied Mathematics | 2017
Max L. N. Gonçalves; Jefferson G. Melo
In this paper, we consider the problem of solving a constrained system of nonlinear equations. We propose an algorithm based on a combination of the Newton and conditional gradient methods, and establish its local convergence analysis. Our analysis is set up by using a majorant condition technique, allowing us to prove in a unified way convergence results for two large families of nonlinear functions. The first one includes functions whose derivative satisfies a Hölder-like condition, and the second one consists of a substantial subclass of analytic functions. Numerical experiments illustrating the applicability of the proposed method are presented, and comparisons with some other methods are discussed.
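One way such a hybrid can be organized: each outer step linearizes the system and minimizes the resulting quadratic model over the constraint set with the conditional gradient (Frank-Wolfe) method, whose linear oracle is cheap on simple sets like a box. A sketch under these assumptions; all names, step sizes, and iteration counts are made up, and this is not claimed to be the paper's exact algorithm.

```python
import numpy as np

def frank_wolfe_box(grad, y0, lo, hi, iters=2000):
    """Conditional gradient (Frank-Wolfe) over the box [lo, hi]^n.
    On a box, the linear oracle picks lo or hi coordinate-wise."""
    y = np.asarray(y0, dtype=float)
    for k in range(iters):
        g = grad(y)
        s = np.where(g > 0.0, lo, hi)        # argmin_{s in box} <g, s>
        y = y + (2.0 / (k + 2.0)) * (s - y)  # classical FW step size
    return y

def newton_cond_grad(F, jac, x0, lo, hi, outer=3):
    """Each outer step minimizes the linearized model
    0.5*||F(x) + jac(x)(y - x)||^2 over the box by Frank-Wolfe."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        Fx, Jx = F(x), jac(x)
        model_grad = lambda y, Fx=Fx, Jx=Jx, x=x: Jx.T @ (Fx + Jx @ (y - x))
        x = frank_wolfe_box(model_grad, x, lo, hi)
    return x

# Toy constrained system: F(x) = x - c with root c inside the box [0, 1]^2
c = np.array([0.3, 0.7])
F = lambda x: x - c
jac = lambda x: np.eye(2)
sol = newton_cond_grad(F, jac, np.array([0.9, 0.1]), 0.0, 1.0)
```

Frank-Wolfe converges at an O(1/k) rate on the model, so the inner loop is run long enough for the outer iterate to land near the root.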
Journal of Global Optimization | 2015
Max L. N. Gonçalves; Jefferson G. Melo; L. F. Prudente
In this paper, we consider a nonlinear programming problem for which the constraint set may be infeasible. We propose an algorithm based on a large family of augmented Lagrangian functions and analyze its global convergence properties taking into account the possible infeasibility of the problem. We show that, in a finite number of iterations, the algorithm either stops, detecting the infeasibility of the problem, or finds an approximate feasible/optimal solution with any required precision. We illustrate, by means of numerical experiments, that our algorithm is reliable for different Lagrangian/penalty functions proposed in the literature.
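The classical member of this family is the quadratic augmented Lagrangian, which the sketch below implements for a single equality constraint; the inner gradient-descent solver and every parameter value are illustrative assumptions, not the paper's method.

```python
import numpy as np

def aug_lagrangian(f_grad, h, h_grad, x0, rho=10.0, outer=20,
                   inner=500, lr=0.01):
    """Quadratic augmented Lagrangian sketch for
        min f(x)  s.t.  h(x) = 0   (one equality constraint).
    Inner loop: gradient descent on
        L(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2;
    outer loop: multiplier update lam <- lam + rho*h(x)."""
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer):
        for _ in range(inner):
            g = f_grad(x) + (lam + rho * h(x)) * h_grad(x)
            x = x - lr * g
        lam = lam + rho * h(x)
    return x, lam

# min x1^2 + x2^2  s.t.  x1 + x2 = 1  ->  x* = (0.5, 0.5), lam* = -1
f_grad = lambda x: 2.0 * x
h = lambda x: x[0] + x[1] - 1.0
h_grad = lambda x: np.array([1.0, 1.0])
x, lam = aug_lagrangian(f_grad, h, h_grad, np.zeros(2))
```

When the problem is infeasible, the multiplier sequence of such schemes diverges, which is the behavior the paper's stopping tests are designed to detect.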
Journal of Optimization Theory and Applications | 2018
Max L. N. Gonçalves; M. Marques Alves; Jefferson G. Melo
In this paper, we obtain global pointwise and ergodic convergence rates for a variable metric proximal alternating direction method of multipliers for solving linearly constrained convex optimization problems. We first propose and study nonasymptotic convergence rates of a variable metric hybrid proximal extragradient framework for solving monotone inclusions. Then, the convergence rates for the former method are obtained essentially by showing that it falls within the latter framework. To the best of our knowledge, this is the first time that global pointwise (resp. pointwise and ergodic) convergence rates are obtained for the variable metric proximal alternating direction method of multipliers (resp. variable metric hybrid proximal extragradient framework).