O. P. Ferreira
Universidade Federal de Goiás
Publications
Featured research published by O. P. Ferreira.
Optimization | 2002
O. P. Ferreira; Paulo Roberto Oliveira
Abstract In this paper we consider minimization problems with constraints. We show that if the constraint set is a Riemannian manifold of nonpositive sectional curvature and the objective function is convex on this manifold, then the proximal point method in Euclidean space extends naturally to this class of problems. We prove that the sequence generated by our method is well defined and converges to a minimizer. In particular, we show how tools of Riemannian geometry, more specifically convex analysis on Riemannian manifolds, can be used to solve nonconvex constrained problems in Euclidean space.
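The Euclidean proximal point method that the paper extends to manifolds can be sketched as follows; the quadratic objective, regularization parameter, and data below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Euclidean proximal point method for a convex quadratic
# f(x) = 0.5 x^T A x - b^T x (illustrative example).
# Each step solves x_{k+1} = argmin_x f(x) + (1/(2*lam)) ||x - x_k||^2,
# which for this f reduces to the linear system (A + I/lam) x = b + x_k/lam.
def proximal_point(A, b, x0, lam=1.0, iters=100):
    x = np.asarray(x0, float)
    M = A + np.eye(len(b)) / lam
    for _ in range(iters):
        x = np.linalg.solve(M, b + x / lam)
    return x

A = np.array([[2.0, 0.0], [0.0, 4.0]])   # symmetric positive definite
b = np.array([2.0, 4.0])
x = proximal_point(A, b, x0=np.zeros(2))
# The iterates approach the minimizer A^{-1} b = (1, 1).
```

Each iteration is a strongly convex subproblem, which is what makes the sequence well defined even before any convergence argument.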
Journal of Complexity | 2002
O. P. Ferreira; B. F. Svaiter
Newton's method for finding a zero of a vector-valued function is a powerful theoretical and practical tool. One drawback of the classical convergence proof is that closeness to a non-singular zero must be assumed a priori. Kantorovich's theorem on Newton's method has the advantage of proving existence of a solution and convergence to it under very mild conditions. This theorem holds in Banach spaces. Newton's method has been extended to the problem of finding a singularity of a vector field on a Riemannian manifold. We extend Kantorovich's theorem on Newton's method to Riemannian manifolds.
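For reference, the classical Euclidean Newton iteration that underlies both results is a linear solve per step; the concrete system below is an illustrative example, not from the paper.

```python
import numpy as np

# Newton's method for a zero of F: R^n -> R^n (the Euclidean case that
# the paper extends to manifolds).
def newton(F, J, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)   # Newton step: solve J(x) s = -F(x)
    return x

# Illustrative F(x, y) = (x^2 + y^2 - 1, x - y):
# intersect the unit circle with the line x = y.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton(F, J, x0=np.array([1.0, 0.5]))
# root is close to (1/sqrt(2), 1/sqrt(2)).
```

Kantorovich-type results replace the a-priori "closeness to a zero" assumption by checkable conditions at the starting point `x0`.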
Computational Optimization and Applications | 2009
O. P. Ferreira; B. F. Svaiter
Abstract We prove Kantorovich's theorem on Newton's method using a convergence analysis that makes clear the relationship between the majorant function and the non-linear operator under consideration. This approach enables us to drop the assumption that the majorant function has a second root, while still guaranteeing a Q-quadratic convergence rate, and to obtain a new estimate of this rate based on a directional derivative of the derivative of the majorant function. Moreover, the majorant function does not have to be defined beyond its first root to obtain convergence-rate results.
Journal of Optimization Theory and Applications | 1998
O. P. Ferreira; Paulo Roberto Oliveira
The subgradient method is generalized to the context of Riemannian manifolds. The motivation comes from the non-Euclidean metrics that occur in interior-point methods. In that setting, the natural curves for local steps are the geodesics of the specific Riemannian manifold. In this paper, the influence of the sectional curvature of the manifold on the convergence of the method is discussed, and convergence is proved when the sectional curvature is nonnegative.
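The Euclidean template that is being generalized is the subgradient iteration with diminishing step sizes; the nonsmooth objective and step-size rule below are illustrative, not from the paper.

```python
import numpy as np

# Euclidean subgradient method with diminishing steps t_k = 1/(k+1)
# (divergent sum, t_k -> 0), applied to the nonsmooth convex objective
# f(x) = ||x - c||_1. Illustrative example; on a manifold the straight-line
# step would be replaced by a geodesic step.
def subgradient_method(f, subgrad, x0, iters=3000):
    x = np.asarray(x0, float)
    x_best, f_best = x.copy(), f(x)
    for k in range(iters):
        x = x - subgrad(x) / (k + 1)
        if f(x) < f_best:                 # keep the best iterate seen so far
            x_best, f_best = x.copy(), f(x)
    return x_best, f_best

c = np.array([1.0, -2.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)        # a subgradient of the l1 distance
x_best, f_best = subgradient_method(f, subgrad, x0=np.zeros(2))
```

Tracking the best iterate matters because the subgradient method is not a descent method: individual steps can increase the objective.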
Journal of Global Optimization | 2006
J. X. Cruz Neto; O. P. Ferreira; L.R. Lucambio Pérez; Sándor Németh
The problem of finding the singularities of monotone vector fields on Hadamard manifolds is considered and solved by extending the well-known proximal point algorithm. For monotone vector fields the algorithm generates a well-defined sequence, and for monotone vector fields with singularities it converges to a singularity. It is also shown how tools of convex analysis on Riemannian manifolds can solve non-convex constrained problems in Euclidean spaces. Examples are given to illustrate this remarkable fact.
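In the Euclidean setting that this extends, the proximal point algorithm for a monotone map is a resolvent iteration; the affine monotone map below is an illustrative example, not from the paper.

```python
import numpy as np

# Proximal point (resolvent) iteration for a singularity of a monotone map
# F(x) = A x + b: each step solves x_k = x_{k+1} + lam * F(x_{k+1}),
# i.e. the linear system (I + lam A) x_{k+1} = x_k - lam b.
def proximal_point_monotone(A, b, x0, lam=1.0, iters=60):
    x = np.asarray(x0, float)
    M = np.eye(len(b)) + lam * A
    for _ in range(iters):
        x = np.linalg.solve(M, x - lam * b)
    return x

A = np.array([[1.0, 2.0], [-2.0, 1.0]])   # symmetric part is I: monotone
b = np.array([-3.0, 1.0])
x = proximal_point_monotone(A, b, x0=np.zeros(2))
# x approaches the singularity x*, i.e. A x* + b = 0.
```

Note that `A` here is not symmetric, so `F` is not a gradient; monotonicity, not convexity of a potential, is what the resolvent step exploits.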
Journal of Global Optimization | 2005
O. P. Ferreira; L.R. Lucambio Pérez; Sándor Németh
Abstract. Bearing in mind the notion of monotone vector field on Riemannian manifolds, see [12–16], we study the set of their singularities and, for a particular class of manifolds, develop an extragradient-type algorithm convergent to singularities of such vector fields. In particular, our method can be used for solving nonlinear constrained optimization problems in Euclidean space, with a convex objective function and the constraint set a constant-curvature Hadamard manifold. Our paper shows how tools of convex analysis on Riemannian manifolds can be used to solve some nonconvex constrained problems in a Euclidean space.
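The Euclidean extragradient scheme that the algorithm generalizes takes a prediction step with the field at the current point, then a correction step with the field at the predictor. The skew-symmetric example below is illustrative, not from the paper; it is a case where plain forward steps x - t F(x) would spiral outward while the extragradient iteration converges.

```python
import numpy as np

# Extragradient iteration for a singularity of a monotone map F.
def extragradient(F, x0, t=0.5, iters=300):
    x = np.asarray(x0, float)
    for _ in range(iters):
        y = x - t * F(x)        # prediction step
        x = x - t * F(y)        # correction step
    return x

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: monotone, not strongly
b = np.array([1.0, 1.0])
F = lambda x: A @ x + b
x = extragradient(F, x0=np.zeros(2))
# x approaches the singularity x* = (1, -1), where F(x*) = 0.
```

The step size must satisfy t < 1/L for the Lipschitz constant L of F (here L = 1), which is why t = 0.5 is used.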
Journal of Optimization Theory and Applications | 2012
G. C. Bento; O. P. Ferreira; Paulo Roberto Oliveira
In this paper, we present a steepest descent method with Armijo’s rule for multicriteria optimization in the Riemannian context. The sequence generated by the method is guaranteed to be well defined. Under mild assumptions on the multicriteria function, we prove that each accumulation point (if any) satisfies first-order necessary conditions for Pareto optimality. Moreover, assuming quasiconvexity of the multicriteria function and nonnegative curvature of the Riemannian manifold, we prove full convergence of the sequence to a critical Pareto point.
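The single-objective Euclidean analogue of the method is steepest descent with Armijo backtracking; the paper's version replaces straight-line steps by geodesics and scalar descent by Pareto descent. The quadratic objective and parameters below are illustrative, not from the paper.

```python
import numpy as np

# Steepest descent with Armijo's rule (single-objective, Euclidean sketch).
def steepest_descent_armijo(f, grad, x0, sigma=1e-4, beta=0.5, iters=500):
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-10:
            break
        t = 1.0
        # Armijo rule: shrink t until the sufficient-decrease test holds.
        while f(x - t * g) > f(x) - sigma * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

A = np.diag([1.0, 10.0])                  # ill-conditioned convex quadratic
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = steepest_descent_armijo(f, grad, x0=np.array([1.0, 1.0]))
# x approaches the minimizer at the origin.
```

The backtracking loop always terminates for a gradient-Lipschitz objective, which is what makes the generated sequence well defined.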
Journal of Complexity | 2011
O. P. Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
The Gauss-Newton method for solving nonlinear least squares problems is studied in this paper. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, a local convergence analysis is presented. This analysis allows us to obtain the optimal convergence radius and the largest radius of uniqueness of the stationary point, and to unify two previous and unrelated results.
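The Gauss-Newton iteration analyzed here solves a linearized least squares problem at each step. The one-parameter exponential fit below is an illustrative example, not from the paper.

```python
import numpy as np

# Gauss-Newton for min ||r(x)||^2: each step solves the linearized problem
# min_d ||J(x) d + r(x)|| and updates x <- x + d.
def gauss_newton(res, jac, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        J, r = jac(x), res(x)
        d = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Recover a in the model y = exp(a * t) from noiseless data with a = 0.5.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.exp(0.5 * t)
res = lambda x: np.exp(x[0] * t) - y
jac = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
a = gauss_newton(res, jac, x0=np.array([0.2]))
```

Because the data are noiseless (a zero-residual problem), Gauss-Newton behaves like Newton's method near the solution and converges fast; the local analysis in the paper quantifies the radius of such behavior.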
Optimization | 2015
G. C. Bento; O. P. Ferreira; Paulo Roberto Oliveira
In this article, we present the proximal point method for finding minima of a special class of nonconvex functions on a Hadamard manifold. The well-definedness of the sequence generated by the proximal point method is established. Moreover, it is proved that each accumulation point of this sequence satisfies the necessary optimality conditions and, under additional assumptions, its convergence to a minimizer is obtained.
Journal of Computational and Applied Mathematics | 2012
O. P. Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
In this paper, we present a local convergence analysis of inexact Gauss-Newton-like methods for solving nonlinear least squares problems. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, we obtain that the method is well defined and converges. Our analysis provides a clear relationship between the majorant function and the function associated with the least squares problem. It also allows us to obtain an estimate of the convergence ball for inexact Gauss-Newton-like methods and some important special cases.
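"Inexact" here means the linearized subproblem at each step is solved only approximately. A minimal sketch, with the inner normal-equations solve truncated to a fixed number of conjugate-gradient iterations; the residual function below is an illustrative example, not from the paper.

```python
import numpy as np

# A few conjugate-gradient steps on H d = g (H symmetric positive semidefinite).
def cg_steps(H, g, iters):
    d, r = np.zeros_like(g), g.copy()
    p = r.copy()
    for _ in range(iters):
        Hp = H @ p
        denom = p @ Hp
        if denom <= 1e-15:
            break
        alpha = (r @ r) / denom
        d = d + alpha * p
        r_new = r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

# Inexact Gauss-Newton: the normal equations (J^T J) d = -J^T r are solved
# only approximately by `inner` CG iterations per outer step.
def inexact_gauss_newton(res, jac, x0, inner=1, tol=1e-10, max_iter=100):
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        J, r = jac(x), res(x)
        g = -J.T @ r
        if np.linalg.norm(g) < tol:
            break
        x = x + cg_steps(J.T @ J, g, inner)   # truncated inner solve
    return x

# Overdetermined system with zero residual at x* = (2, 1).
res = lambda x: np.array([x[0] + x[1] - 3.0,
                          x[0] - x[1] - 1.0,
                          0.1 * x[0] * x[1] - 0.2])
jac = lambda x: np.array([[1.0, 1.0],
                          [1.0, -1.0],
                          [0.1 * x[1], 0.1 * x[0]]])
x = inexact_gauss_newton(res, jac, x0=np.zeros(2))
```

Truncating the inner solve trades per-step accuracy for cheaper iterations; the paper's majorant-based analysis is what certifies that such inexact steps still converge.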