M. Marques Alves
Instituto Nacional de Matemática Pura e Aplicada
Publications
Featured research published by M. Marques Alves.
Inverse Problems | 2007
A Leitão; M. Marques Alves
Two methods of level set type are proposed for solving the Cauchy problem for an elliptic equation. Convergence and stability results for both methods are proven, characterizing the iterative methods as regularization methods for this ill-posed problem. Some numerical experiments are presented, showing the efficiency of our approaches and verifying the convergence results.
Inverse Problems | 2012
A Leitão; M. Marques Alves
In this paper, iterative regularization methods of Landweber–Kaczmarz type are considered for solving systems of ill-posed equations modeled by (finitely many) operators acting between Banach spaces. Using assumptions of uniform convexity and smoothness on the parameter space, we are able to prove a monotonicity result for the proposed method, as well as to establish convergence (for exact data) and stability results (in the noisy data case).
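For orientation, below is a minimal sketch of the classical Hilbert-space (linear, ℓ2) Landweber–Kaczmarz iteration for a system A_i x = y_i, cycling over the equations and taking one Landweber step per block; the Banach-space variant studied in the paper replaces the adjoint step by duality mappings and uses Bregman distances. The operators, data, and step size below are purely illustrative.

```python
import numpy as np

def landweber_kaczmarz(A_list, y_list, x0, omega=1.0, sweeps=100):
    """Cycle over the equations A_i x = y_i, taking one Landweber step
    x <- x - omega * A_i^T (A_i x - y_i) per equation (Hilbert-space case)."""
    x = x0.copy()
    for _ in range(sweeps):
        for A, y in zip(A_list, y_list):
            residual = A @ x - y
            x = x - omega * (A.T @ residual)
    return x

# Illustrative data: two blocks of one linear system with exact (noise-free) data.
rng = np.random.default_rng(0)
x_true = rng.standard_normal(5)
A_list = [rng.standard_normal((3, 5)) for _ in range(2)]
y_list = [A @ x_true for A in A_list]
omega = 1.0 / max(np.linalg.norm(A, 2) ** 2 for A in A_list)  # safe step size
x_hat = landweber_kaczmarz(A_list, y_list, np.zeros(5), omega=omega, sweeps=500)
print(np.linalg.norm(np.concatenate([A @ x_hat - y for A, y in zip(A_list, y_list)])))
```

In the noisy-data case the sweep would be stopped early by a discrepancy principle, which is what makes the iteration a regularization method.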
Siam Journal on Optimization | 2016
M. Marques Alves; Renato D. C. Monteiro; B. F. Svaiter
This paper studies the iteration-complexity of new regularized hybrid proximal extragradient (HPE)-type methods for solving monotone inclusion problems (MIPs). The new (regularized HPE-type) methods essentially consist of instances of the standard HPE method applied to regularizations of the original MIP. It is shown that their pointwise iteration-complexity considerably improves on that of the HPE method, while approaching (up to a logarithmic factor) the ergodic iteration-complexity of the latter.
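A minimal sketch of the underlying idea, under simplifying assumptions: apply the exact proximal point method (the simplest instance of HPE) to a Tikhonov regularization T_mu(x) = T(x) + mu(x - x0) of a maximal monotone operator T. Here T is taken to be an affine monotone map T(x) = Mx + q with positive semidefinite symmetric part, so each resolvent is a linear solve; the methods in the paper allow inexact resolvents satisfying a relative error criterion, which this sketch omits.

```python
import numpy as np

def regularized_proximal_point(M, q, x0, mu=1e-2, lam=1.0, iters=200):
    """Exact proximal point iterations applied to the regularized operator
    T_mu(x) = Mx + q + mu*(x - x0), where T(x) = Mx + q is monotone.
    Each step solves (I + lam*T_mu)(x_next) = x_k, i.e. a linear system."""
    n = len(q)
    I = np.eye(n)
    x = x0.copy()
    K = I + lam * (M + mu * I)                   # resolvent matrix of T_mu
    for _ in range(iters):
        rhs = x - lam * (q - mu * x0)            # constant terms moved to the right
        x = np.linalg.solve(K, rhs)
    return x

# Illustrative monotone (nonsymmetric) operator: rotation plus a small identity.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
q = np.array([1.0, -2.0])
x = regularized_proximal_point(M, q, x0=np.zeros(2))
print("residual of T(x) = 0:", np.linalg.norm(M @ x + q))
```

The regularization makes T_mu strongly monotone, which is what allows the improved pointwise bounds at the price of a bias controlled by mu.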
Mathematical Programming | 2016
M. Marques Alves; B. F. Svaiter
In a recent Math. Program. paper, Eckstein and Silva proposed a new error criterion for the approximate solutions of augmented Lagrangian subproblems. Based on a saddle-point formulation of the primal and dual problems, they proved that dual sequences generated by augmented Lagrangians under this error criterion are bounded and that their limit points are dual solutions. In this note, we prove a new result about the convergence of Fejér-monotone sequences in product spaces (which appears to be of independent interest) and, as a consequence, we obtain the full convergence of the dual sequence generated by augmented Lagrangians under Eckstein and Silva’s criterion.
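For context, the dual sequence in question is the multiplier sequence of the classical method of multipliers. A minimal sketch with exact subproblems, for a strictly convex quadratic program min 0.5 x'Px + c'x subject to Ax = b (so the subproblem is a linear solve); Eckstein and Silva's criterion would replace this exact solve by an approximate one with a relative error test. All data below are illustrative.

```python
import numpy as np

def augmented_lagrangian_qp(P, c, A, b, rho=1.0, iters=100):
    """Method of multipliers for min 0.5 x'Px + c'x  s.t.  Ax = b.
    Each primal subproblem minimizes the augmented Lagrangian
    L_rho(x, lam) = 0.5 x'Px + c'x + lam'(Ax - b) + 0.5*rho*||Ax - b||^2,
    here an exact linear solve; the dual update is lam += rho*(Ax - b)."""
    m, n = A.shape
    lam = np.zeros(m)
    H = P + rho * (A.T @ A)                      # Hessian of the subproblem
    for _ in range(iters):
        g = c + A.T @ lam - rho * (A.T @ b)      # linear term of the subproblem
        x = np.linalg.solve(H, -g)
        lam = lam + rho * (A @ x - b)            # dual (multiplier) update
    return x, lam

# Illustrative strictly convex QP with one equality constraint.
P = np.diag([1.0, 2.0, 3.0])
c = np.array([1.0, 0.0, -1.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x, lam = augmented_lagrangian_qp(P, c, A, b)
print("constraint violation:", np.linalg.norm(A @ x - b))
```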
Journal of Optimization Theory and Applications | 2018
Max L. N. Gonçalves; M. Marques Alves; Jefferson G. Melo
In this paper, we obtain global pointwise and ergodic convergence rates for a variable metric proximal alternating direction method of multipliers for solving linearly constrained convex optimization problems. We first propose and study nonasymptotic convergence rates of a variable metric hybrid proximal extragradient framework for solving monotone inclusions. Then, the convergence rates for the former method are obtained essentially by showing that it falls within the latter framework. To the best of our knowledge, this is the first time that global pointwise (resp. pointwise and ergodic) convergence rates are obtained for the variable metric proximal alternating direction method of multipliers (resp. variable metric hybrid proximal extragradient framework).
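A minimal sketch of the fixed-metric, exact-subproblem ADMM that the paper generalizes, written for the lasso instance min 0.5||Ax − b||² + mu||z||₁ subject to x − z = 0, so both subproblems have closed form. The variable metric proximal ADMM of the paper additionally allows iteration-dependent metrics and proximal terms, which are omitted here; all data are illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, mu, rho=1.0, iters=300):
    """ADMM for  min 0.5||Ax - b||^2 + mu||z||_1  s.t.  x - z = 0.
    x-update: linear solve; z-update: soft-thresholding; y: multiplier update."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); y = np.zeros(n)
    H = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(H, Atb + rho * z - y)     # argmin of the x-subproblem
        z = soft_threshold(x + y / rho, mu / rho)     # argmin of the z-subproblem
        y = y + rho * (x - z)                         # dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = A @ np.array([3.0, -2.0] + [0.0] * 8) + 0.01 * rng.standard_normal(20)
print(admm_lasso(A, b, mu=1.0).round(2))
```

The pointwise and ergodic rates of the paper are stated in terms of the residuals of such iterations, obtained by embedding the method into a hybrid proximal extragradient framework.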
Journal of Optimization Theory and Applications | 2013
M. Marques Alves; Jefferson G. Melo
We analyze a primal-dual pair of problems generated via a duality theory introduced by Svaiter. We propose a general algorithm and study its convergence properties. The focus is a general primal-dual principle for strong convergence of some classes of algorithms. In particular, we give a different viewpoint for the weak-to-strong principle of Bauschke and Combettes and unify many results concerning weak and strong convergence of subgradient type methods.
Journal of Optimization Theory and Applications | 2017
M. Marques Alves; Samara Costa Lima
We propose and study the iteration-complexity of an inexact version of Spingarn’s partial inverse method. Its complexity analysis is performed by viewing it in the framework of the hybrid proximal extragradient method, for which pointwise and ergodic iteration-complexity bounds have been established recently by Monteiro and Svaiter. As applications, we propose and analyze the iteration-complexity of an inexact operator splitting algorithm—which generalizes Spingarn’s original splitting method—and of a parallel forward–backward algorithm for multi-term composite convex optimization.
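The parallel forward–backward algorithm of the paper is built from single-term forward–backward (proximal gradient) steps. Below is a minimal sketch of that building block for min f(x) + g(x) with f smooth and g = mu||.||₁; the step size and problem data are illustrative, and the parallel, multi-term structure of the paper is not reproduced here.

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, iters=500):
    """Forward-backward splitting: a gradient (forward) step on f
    followed by a proximal (backward) step on g."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative composite problem: min 0.5||Ax - b||^2 + mu*||x||_1.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
mu = 0.5
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * mu, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = ||A||_2^2
print(forward_backward(grad_f, prox_g, np.zeros(12), step).round(3))
```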
Optimization | 2018
M. Marques Alves; B. F. Svaiter
Abstract We propose and study the iteration-complexity of a proximal-Newton method for finding approximate solutions of the problem of minimizing a twice continuously differentiable convex function on a (possibly infinite dimensional) Hilbert space. We prove global convergence rates for obtaining approximate solutions in terms of function/gradient values. Our main results follow from an iteration-complexity study of a (large-step) inexact proximal point method for solving convex minimization problems.
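A minimal sketch of a regularized Newton iteration of proximal type, x_{k+1} = x_k − (∇²f(x_k) + λI)⁻¹∇f(x_k) with a fixed regularization parameter λ; the method in the paper instead couples λ_k to the step length through a large-step condition, which is what yields the stated complexity bounds. The logistic-regression objective and all constants below are illustrative.

```python
import numpy as np

def prox_newton(grad, hess, x0, lam=1.0, iters=50):
    """Regularized Newton iterations of proximal type:
    x_{k+1} = x_k - (hess(x_k) + lam*I)^{-1} grad(x_k)."""
    x = x0.copy()
    n = len(x0)
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x) + lam * np.eye(n), grad(x))
    return x

# Illustrative smooth convex objective: ridge-regularized logistic loss.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 6))
y = np.sign(rng.standard_normal(40))

def grad(x):
    p = 1.0 / (1.0 + np.exp(y * (A @ x)))        # sigmoid of -y_i * (a_i' x)
    return -(A.T @ (y * p)) + 0.1 * x

def hess(x):
    p = 1.0 / (1.0 + np.exp(y * (A @ x)))
    w = p * (1.0 - p)
    return A.T @ (w[:, None] * A) + 0.1 * np.eye(6)

x = prox_newton(grad, hess, np.zeros(6), lam=1e-3)
print("gradient norm:", np.linalg.norm(grad(x)))
```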
Optimization | 2018
M. Marques Alves; Samara Costa Lima
Abstract Relying on fixed point techniques, Mahey, Oualibouch and Tao introduced in a 1995 paper the scaled proximal decomposition on the graph of a maximal monotone operator (SPDG) algorithm and analysed its performance on inclusions for strongly monotone and Lipschitz continuous operators. The SPDG algorithm generalizes Spingarn’s partial inverse method by allowing scaling factors, a key strategy to speed up the convergence of numerical algorithms. In this note, we show that the SPDG algorithm can alternatively be analysed by means of the original partial inverse framework, tracing back to Spingarn’s 1983 paper. We simply show that, under the assumptions considered by Mahey, Oualibouch and Tao, the Spingarn partial inverse of the underlying maximal monotone operator is strongly monotone, which allows one to employ recent results on the convergence and iteration-complexity of proximal point-type methods for strongly monotone operators. By doing this, we additionally obtain a potentially faster convergence for the SPDG algorithm and a more accurate upper bound on the number of iterations needed to achieve prescribed tolerances, especially on ill-conditioned problems.
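For reference, a minimal sketch of the unscaled Spingarn partial inverse method that the SPDG algorithm generalizes (the SPDG scaling factor is omitted here). It seeks x ∈ V and y ∈ V⊥ with y ∈ T(x); taking T = ∇f for a convex quadratic f, this amounts to minimizing f over the subspace V. Each step is a resolvent evaluation followed by projections onto V and V⊥. The operator, subspace, and data below are illustrative.

```python
import numpy as np

def spingarn_partial_inverse(Q, c, B, iters=200):
    """Spingarn's partial inverse method for T(x) = Qx + c (T = grad f,
    f convex quadratic) and the subspace V = range(B): find x in V and
    y in V-perp with y = T(x), i.e. minimize f over V."""
    n = Q.shape[0]
    P_V = B @ np.linalg.solve(B.T @ B, B.T)       # orthogonal projector onto V
    R = np.linalg.inv(np.eye(n) + Q)              # resolvent matrix (I + T)^{-1}
    x = np.zeros(n); y = np.zeros(n)              # x in V, y in V-perp
    for _ in range(iters):
        z = x + y
        x_half = R @ (z - c)                      # x' = (I + T)^{-1}(z)
        y_half = z - x_half                       # y' in T(x') by construction
        x = P_V @ x_half                          # keep the V component
        y = y_half - P_V @ y_half                 # keep the V-perp component
    return x, y

# Illustrative data: minimize 0.5 x'Qx + c'x over a 2-dimensional subspace of R^4.
Q = np.diag([1.0, 2.0, 3.0, 4.0])
c = np.array([1.0, -1.0, 0.5, 2.0])
B = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]])
x, y = spingarn_partial_inverse(Q, c, B)
print("gradient component along V:", np.linalg.norm(B.T @ (Q @ x + c)))
```

When the partial inverse is strongly monotone, as shown in the note, the proximal point iteration above converges linearly rather than merely sublinearly.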
Numerical Algorithms | 2018
M. Marques Alves; Marina Geremia
In this paper, we propose and study the iteration complexity of an inexact Douglas-Rachford splitting (DRS) method and a Douglas-Rachford-Tseng’s forward-backward (F-B) splitting method for solving two-operator and four-operator monotone inclusions, respectively. The former method (although based on a slightly different mechanism of iteration) is motivated by the recent work of J. Eckstein and W. Yao, in which an inexact DRS method is derived from a special instance of the hybrid proximal extragradient (HPE) method of Solodov and Svaiter, while the latter one combines the proposed inexact DRS method (used as an outer iteration) with a Tseng’s F-B splitting-type method (used as an inner iteration) for solving the corresponding subproblems. We prove iteration complexity bounds for both algorithms in the pointwise (non-ergodic) as well as in the ergodic sense by showing that they admit two different iterations: one that can be embedded into the HPE method, for which the iteration complexity is known since the work of Monteiro and Svaiter, and another one which demands a separate analysis. Finally, we perform simple numerical experiments to show the performance of the proposed methods when compared with other existing algorithms.
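A minimal sketch of exact Douglas–Rachford splitting for the two-operator inclusion 0 ∈ A(x) + B(x), written for A = ∂f and B = ∂g with f quadratic and g = mu||.||₁ so both resolvents are explicit. The inexact DRS method of the paper replaces the first resolvent by an approximate evaluation satisfying a relative error tolerance; the four-operator method further nests a Tseng forward–backward loop inside each step. Problem data below are illustrative.

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=300):
    """Exact Douglas-Rachford splitting for 0 in df(x) + dg(x):
    x = prox_{gamma f}(z);  x_hat = prox_{gamma g}(2x - z);  z += x_hat - x."""
    z = z0.copy()
    for _ in range(iters):
        x = prox_f(z, gamma)
        x_hat = prox_g(2 * x - z, gamma)
        z = z + (x_hat - x)   # inexactness/relaxation would enter here in the paper
    return prox_f(z, gamma)

# Illustrative problem: min 0.5||Ax - b||^2 + mu*||x||_1.
rng = np.random.default_rng(4)
A = rng.standard_normal((25, 8)); b = rng.standard_normal(25); mu = 0.8
AtA, Atb = A.T @ A, A.T @ b
prox_f = lambda v, g: np.linalg.solve(np.eye(8) + g * AtA, v + g * Atb)
prox_g = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g * mu, 0.0)
print(douglas_rachford(prox_f, prox_g, np.zeros(8)).round(3))
```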