Paulo Roberto Oliveira
Federal University of Rio de Janeiro
Publications
Featured research published by Paulo Roberto Oliveira.
Optimization | 2002
O. P. Ferreira; Paulo Roberto Oliveira
In this paper we consider minimization problems with constraints. We show that if the constraint set is a Riemannian manifold of nonpositive sectional curvature and the objective function is convex on this manifold, then the proximal point method in Euclidean space extends naturally to this class of problems. We prove that the sequence generated by our method is well defined and converges to a minimizer. In particular, we show how tools of Riemannian geometry, more specifically convex analysis on Riemannian manifolds, can be used to solve nonconvex constrained problems in Euclidean space.
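The Euclidean method that the paper extends can be sketched as follows. This is a minimal illustration, not the paper's Riemannian algorithm: the quadratic objective, its closed-form proximal operator, and the parameter choices are all assumptions made for the example.

```python
import numpy as np

def proximal_point(prox, x0, lam=1.0, iters=50):
    """Proximal point method: x_{k+1} = argmin_y f(y) + ||y - x_k||^2 / (2*lam)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox(x, lam)
    return x

# For f(x) = ||x||^2 the proximal step has the closed form x / (1 + 2*lam),
# obtained by setting the gradient 2*y + (y - x)/lam to zero.
prox_sq = lambda x, lam: x / (1.0 + 2.0 * lam)

x_star = proximal_point(prox_sq, [5.0, -3.0])  # iterates contract toward the minimizer 0
```

On a Riemannian manifold the quadratic penalty `||y - x_k||^2` is replaced by the squared geodesic distance, which is where the curvature assumption enters.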
Journal of Optimization Theory and Applications | 1998
O. P. Ferreira; Paulo Roberto Oliveira
The subgradient method is generalized to the context of Riemannian manifolds. The motivation can be seen in non-Euclidean metrics that occur in interior-point methods. In that frame, the natural curves for local steps are the geodesics of the specific Riemannian manifold. In this paper, the influence of the sectional curvature of the manifold on the convergence of the method is discussed, and convergence is proved when the sectional curvature is nonnegative.
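In the Euclidean special case the method reduces to the classical subgradient iteration with diminishing steps, sketched below; the test function, step rule, and best-iterate bookkeeping are illustrative assumptions.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=200):
    """Subgradient method with diminishing steps t_k = 1/(k+1); returns the best iterate,
    since f need not decrease monotonically along subgradient steps."""
    x = np.asarray(x0, dtype=float)
    best = x.copy()
    for k in range(iters):
        x = x - subgrad(x) / (k + 1)
        if f(x) < f(best):
            best = x.copy()
    return best

# f(x) = ||x||_1 is convex but nondifferentiable; sign(x) is a valid subgradient.
f = lambda x: np.abs(x).sum()
best = subgradient_method(f, np.sign, [4.0])
```

On a manifold the straight-line update `x - t*g` is replaced by a step along the geodesic issuing from `x` in the direction `-g`.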
Journal of Optimization Theory and Applications | 2012
G. C. Bento; O. P. Ferreira; Paulo Roberto Oliveira
In this paper, we present a steepest descent method with Armijo’s rule for multicriteria optimization in the Riemannian context. The sequence generated by the method is guaranteed to be well defined. Under mild assumptions on the multicriteria function, we prove that each accumulation point (if any) satisfies first-order necessary conditions for Pareto optimality. Moreover, assuming quasiconvexity of the multicriteria function and nonnegative curvature of the Riemannian manifold, we prove full convergence of the sequence to a critical Pareto point.
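A minimal sketch of Armijo backtracking in the single-objective Euclidean setting (the paper's Riemannian multicriteria version replaces the straight-line step with a geodesic and tests sufficient decrease for all objectives simultaneously); the quadratic test function and parameter values are illustrative assumptions.

```python
import numpy as np

def armijo_descent(f, grad, x0, beta=0.5, sigma=1e-4, iters=100):
    """Steepest descent: halve the step until the Armijo sufficient-decrease test holds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-12:
            break
        t = 1.0
        # Armijo rule: f(x - t*g) <= f(x) - sigma * t * ||g||^2
        while f(x - t * g) > f(x) - sigma * t * (g @ g):
            t *= beta
        x = x - t * g
    return x

f = lambda x: float(x @ x)      # f(x) = ||x||^2, minimized at 0
grad = lambda x: 2.0 * x
x_star = armijo_descent(f, grad, [3.0, -4.0])
```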
Journal of Complexity | 2011
O. P. Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
The Gauss-Newton method for solving nonlinear least squares problems is studied in this paper. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, a local convergence analysis is presented. This analysis allows us to obtain the optimal convergence radius and the largest range in which the stationary point is unique, and to unify two previous and unrelated results.
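The plain Gauss-Newton iteration that the analysis concerns can be sketched on a small zero-residual problem; the exponential model, data, and starting point are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=30):
    """Gauss-Newton: solve the normal equations J^T J dx = -J^T r at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x

# Fit the model y = exp(b * t) to data generated with b = 0.5 (one unknown, five points).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.exp(0.5 * t)
residual = lambda b: np.exp(b[0] * t) - y
jacobian = lambda b: (t * np.exp(b[0] * t)).reshape(-1, 1)
b = gauss_newton(residual, jacobian, [0.0])
```

Because the residual vanishes at the solution, the iteration converges fast locally; the majorant condition in the paper is what delimits the region where such behavior is guaranteed.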
Optimization | 2015
G. C. Bento; O. P. Ferreira; Paulo Roberto Oliveira
In this article, we present the proximal point method for finding minima of a special class of nonconvex functions on a Hadamard manifold. The well-definedness of the sequence generated by the proximal point method is established. Moreover, it is proved that each accumulation point of this sequence satisfies the necessary optimality conditions and, under additional assumptions, convergence to a minimizer is obtained.
Siam Journal on Optimization | 2013
Orizon Perreira Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
Under the hypothesis that an initial point is a quasi-regular point, we use a majorant condition to present a new semilocal convergence analysis of an extension of the Gauss-Newton method for solving convex composite optimization problems. In this analysis the conditions and proof of convergence are simplified by using a simple majorant condition to define regions where a Gauss-Newton sequence is well behaved.
Journal of Computational and Applied Mathematics | 2012
O. P. Ferreira; Max L. N. Gonçalves; Paulo Roberto Oliveira
In this paper, we present a local convergence analysis of inexact Gauss-Newton like methods for solving nonlinear least squares problems. Under the hypothesis that the derivative of the function associated with the least squares problem satisfies a majorant condition, we obtain that the method is well defined and converges. Our analysis provides a clear relationship between the majorant function and the function associated with the least squares problem. It also allows us to obtain an estimate of the convergence ball for inexact Gauss-Newton like methods and some important special cases.
Optimization | 2012
Felipe García Moreno; Paulo Roberto Oliveira; Antoine Soubeyran
We consider a proximal algorithm with quasi distance applied to nonconvex and nonsmooth functions with analytic properties for a minimization problem. We show the behavioural relevance of this proximal point model for the formation of habits in decision making. The convergence of the sequence generated by our algorithm to a critical limit point is guaranteed under standard coercivity conditions and by using the Kurdyka-Łojasiewicz inequality. We present a definition of habit that captures that kind of convergence.
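A brute-force one-dimensional sketch of a proximal step with an asymmetric quasi distance, where moving "up" costs more than moving "down", echoing the habit-formation interpretation of costly change. The specific costs, objective, and grid search are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def quasi_dist(x, y, up=2.0, down=1.0):
    """Asymmetric cost of moving from x to y: q(x, y) != q(y, x) in general."""
    return up * np.maximum(y - x, 0.0) + down * np.maximum(x - y, 0.0)

def prox_step(f, x, lam=1.0, grid=np.linspace(-5.0, 5.0, 20001)):
    # brute-force argmin_y f(y) + q(x, y)^2 / (2*lam) over a fine grid
    vals = f(grid) + quasi_dist(x, grid) ** 2 / (2.0 * lam)
    return float(grid[np.argmin(vals)])

f = lambda y: (y - 1.0) ** 2      # minimized at y = 1
x = -3.0
for _ in range(30):
    x = prox_step(f, x)           # iterates creep toward 1, paying the "up" cost each move
```

The asymmetric penalty makes each step a trade-off between improving the objective and the cost of changing position, which is the mechanism behind the habit interpretation.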
European Journal of Operational Research | 2010
Sissy da S. Souza; Paulo Roberto Oliveira; J.X. da Cruz Neto; Antoine Soubeyran
We present an interior proximal method with Bregman distance for solving the minimization problem with quasiconvex objective function under nonnegativity constraints. The Bregman function is taken separable and zone coercive, the zone being the interior of the positive orthant. Under the assumption that the solution set is nonempty and the objective function is continuously differentiable, we establish the well-definedness of the sequence generated by our algorithm and obtain two convergence results; the main one shows that the sequence converges to a solution point of the problem when the regularization parameters go to zero.
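For the separable Bregman function built from h(x) = Σ x_i log x_i on the positive orthant, the interior proximal step for a linear objective has a closed form: a multiplicative update that keeps every iterate strictly positive. The linear objective and parameter choices below are illustrative assumptions.

```python
import numpy as np

# Bregman prox of f(x) = c . x with D(y, x) = sum_i y_i*log(y_i/x_i) - y_i + x_i:
# the optimality condition c + (log y - log x)/lam = 0 gives y = x * exp(-lam * c).
def bregman_step(x, c, lam=0.5):
    return x * np.exp(-lam * c)

c = np.array([1.0, 2.0])          # minimize c . x over the nonnegative orthant
x = np.array([3.0, 3.0])
for _ in range(40):
    x = bregman_step(x, c)
    assert (x > 0).all()          # iterates never leave the interior of the orthant
```

The iterates approach the boundary solution x = 0 only in the limit, which is exactly the "interior" behavior the zone-coercive Bregman distance enforces.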
Journal of Global Optimization | 2015
João Carlos O. Souza; Paulo Roberto Oliveira
An extension of a proximal point algorithm for the difference of two convex functions is presented in the context of Riemannian manifolds of nonpositive sectional curvature. If the sequence generated by our algorithm is bounded, it is proved that every cluster point is a critical point of the function (not necessarily convex) under consideration, even if the minimizations are performed inexactly at each iteration. An application to constrained maximization problems in the framework of Hadamard manifolds is presented.
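A minimal Euclidean sketch of the underlying difference-of-convex linearization: replace the concave part -h by its affine majorant at the current point and minimize the resulting convex surrogate. The decomposition f(x) = x² - 2|x| is an illustrative assumption, not an example from the paper.

```python
import numpy as np

def dca(argmin_surrogate, subgrad_h, x0, iters=20):
    """DC iteration for f = g - h: pick s in the subdifferential of h at x_k,
    then set x_{k+1} = argmin_y g(y) - s*y."""
    x = x0
    for _ in range(iters):
        x = argmin_surrogate(subgrad_h(x))
    return x

# f(x) = x^2 - 2|x| is nonconvex with critical points at +/- 1.
subgrad_h = lambda x: 2.0 * np.sign(x)   # a subgradient of h(x) = 2|x|
argmin_g = lambda s: s / 2.0             # closed form for argmin_y y^2 - s*y
x = dca(argmin_g, subgrad_h, 3.0)
```

Which critical point the iterates reach depends on the starting point, matching the cluster-point (rather than global-minimizer) guarantee stated in the abstract.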