Publication


Featured research published by B. F. Svaiter.


Mathematical Programming | 2013

Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods

Hedy Attouch; Jérôme Bolte; B. F. Svaiter

In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption allows us to cover a wide range of problems, including nonsmooth semi-algebraic (or more generally tame) minimization. The specialization of our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection method, and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems and iterative thresholding procedures for compressive sensing.
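
As a concrete instance of the framework, the following minimal sketch shows a forward–backward splitting loop of the kind covered by the abstract result, applied to the standard l1-regularized least-squares model from compressive sensing. Function names are hypothetical, and the fixed step 1/L is one simple way to satisfy the sufficient-decrease condition; the paper also covers inexact versions with a relative error tolerance.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t * ||.||_1
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def forward_backward_lasso(A, b, lam, n_iter=500):
        # Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1.
        # The smooth part has Lipschitz gradient with constant ||A||_2^2,
        # so the fixed step 1/L yields sufficient decrease at every iteration.
        L = np.linalg.norm(A, 2) ** 2
        step = 1.0 / L
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                          # forward (gradient) step
            x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
        return x

This is exactly the iterative soft-thresholding procedure mentioned at the end of the abstract; since the objective is semi-algebraic, the KL inequality holds and bounded sequences converge.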


Mathematical Programming | 2000

Forcing strong convergence of proximal point iterations in a Hilbert space

Mikhail V. Solodov; B. F. Svaiter

This paper concerns convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible since it amounts, at most, to solving a linear system of two equations in two unknowns.
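
A minimal sketch of that projection step, assuming the two halfspaces are described by normals a1, a2 and offsets b1, b2 (function names are hypothetical). The only nontrivial case reduces to the 2x2 linear system mentioned in the abstract:

    import numpy as np

    def proj_halfspace(x, a, b):
        # Projection onto the halfspace {z : <a, z> <= b}.
        v = a @ x - b
        return x if v <= 0 else x - (v / (a @ a)) * a

    def project_two_halfspaces(x, a1, b1, a2, b2):
        # Projection of x onto {z : <a1, z> <= b1} ∩ {z : <a2, z> <= b2}.
        # Try the cheap candidates first; each is optimal if feasible.
        p1 = proj_halfspace(x, a1, b1)
        if a2 @ p1 <= b2:
            return p1
        p2 = proj_halfspace(x, a2, b2)
        if a1 @ p2 <= b1:
            return p2
        # Otherwise both constraints are active at the solution: solve the
        # 2x2 system for the multipliers (assumes a1, a2 linearly independent).
        G = np.array([[a1 @ a1, a1 @ a2],
                      [a2 @ a1, a2 @ a2]])
        r = np.array([a1 @ x - b1, a2 @ x - b2])
        lam = np.linalg.solve(G, r)
        return x - lam[0] * a1 - lam[1] * a2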


SIAM Journal on Control and Optimization | 1999

A New Projection Method for Variational Inequality Problems

Mikhail V. Solodov; B. F. Svaiter

We propose a new projection algorithm for solving the variational inequality problem, where the underlying function is continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudomonotone). The method is simple and admits a nice geometric interpretation. It consists of two steps. First, we construct an appropriate hyperplane which strictly separates the current iterate from the solutions of the problem. This procedure requires a single projection onto the feasible set and employs an Armijo-type linesearch along a feasible direction. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the halfspace containing the solution set. Thus, in contrast with most other projection-type methods, only two projection operations per iteration are needed. The method is shown to be globally convergent to a solution of the variational inequality problem under minimal assumptions. Preliminary computational experience is also reported.
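
A hedged sketch of the two-projection scheme, where `F` and `proj_C` are assumed callables for the operator and the projection onto the feasible set. One simplification versus the paper: the last step here projects onto the separating halfspace and then onto C, rather than onto their intersection.

    import numpy as np

    def hyperplane_projection_vi(F, proj_C, x0, sigma=0.5, beta=0.5,
                                 tol=1e-8, max_iter=1000):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            y = proj_C(x - F(x))              # first projection
            d = x - y                         # projected residual
            if np.linalg.norm(d) < tol:
                return x
            # Armijo-type search along the segment [x, y]; it terminates
            # because <F(x), d> >= ||d||^2 > sigma * ||d||^2 as t -> 0,
            # and it needs no projections of its own.
            t = 1.0
            while F(x - t * d) @ d < sigma * (d @ d):
                t *= beta
            z = x - t * d
            Fz = F(z)
            # The hyperplane {w : <F(z), w - z> = 0} strictly separates x
            # from the solution set: project x onto the halfspace containing
            # the solutions, then back onto C (second projection).
            x = proj_C(x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz)
        return x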


Mathematical Methods of Operations Research | 2000

Steepest descent methods for multicriteria optimization

Jörg Fliege; B. F. Svaiter

We propose a steepest descent method for unconstrained multicriteria optimization and a “feasible descent direction” method for the constrained case. In the unconstrained case, the objective functions are assumed to be continuously differentiable. In the constrained case, objective and constraint functions are assumed to be Lipschitz-continuously differentiable and a constraint qualification is assumed. Under these conditions, it is shown that these methods converge to a point satisfying certain first-order necessary conditions for Pareto optimality. Neither method scalarizes the original vector optimization problem, and neither ordering information nor weighting factors for the different objective functions are assumed to be known. In the single-objective case, we retrieve the steepest descent method and Zoutendijk's method of feasible directions, respectively.
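
For two objectives, the multiobjective steepest descent direction admits a closed form: it is the negative of the minimum-norm convex combination of the two gradients. A minimal sketch, assuming numpy vectors g1 and g2 hold the gradients at the current point:

    import numpy as np

    def steepest_descent_direction(g1, g2):
        # Negative of the minimum-norm element of the convex hull of the
        # two gradients (closed form available when there are m = 2 objectives).
        diff = g2 - g1
        denom = diff @ diff
        lam = 0.5 if denom == 0.0 else np.clip((g2 @ diff) / denom, 0.0, 1.0)
        return -(lam * g1 + (1 - lam) * g2)

The returned vector is a common descent direction for both objectives; it vanishes exactly at Pareto-critical points, where zero belongs to the convex hull of the gradients.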


Set-Valued Analysis | 1999

A Hybrid Approximate Extragradient–Proximal Point Algorithm Using the Enlargement of a Maximal Monotone Operator

Mikhail V. Solodov; B. F. Svaiter

We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework.
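
The abstract notes that Tseng's modified forward-backward splitting is a special case of this framework. A minimal sketch of that instance for the inclusion 0 ∈ F(x) + N_C(x), with F monotone and L-Lipschitz and lam < 1/L; the callable names `F` and `proj_C` are assumptions:

    import numpy as np

    def tseng_splitting(F, proj_C, x0, lam, n_iter=1000, tol=1e-8):
        # Tseng's modified forward-backward splitting: a forward-backward
        # step followed by an extragradient-type correction. Only one
        # evaluation of proj_C is needed per iteration.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            Fx = F(x)
            y = proj_C(x - lam * Fx)          # forward-backward step
            if np.linalg.norm(x - y) < tol:
                return y
            x = y - lam * (F(y) - Fx)         # extragradient-type correction
        return x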


Mathematics of Operations Research | 2000

An Inexact Hybrid Generalized Proximal Point Algorithm and Some New Results on the Theory of Bregman Functions

Mikhail V. Solodov; B. F. Svaiter

We present a new Bregman-function-based algorithm which is a modification of the generalized proximal point method for solving the variational inequality problem with a maximal monotone operator. The principal advantage of the presented algorithm is that it allows a more constructive error tolerance criterion in solving the proximal point subproblems. Furthermore, we eliminate the assumption of pseudomonotonicity which was, until now, standard in proving convergence for paramonotone operators. Thus we obtain a convergence result which is new even for exact generalized proximal point methods. Finally, we present some new results on the theory of Bregman functions. For example, we show that the standard assumption of convergence consistency is a consequence of the other properties of Bregman functions, and is therefore superfluous.
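
For intuition about the Bregman-function machinery, a minimal sketch using the standard example, the Kullback–Leibler distance induced by the negative entropy on the positive orthant (names hypothetical). The generalized proximal step for a linear term has a closed form; the paper's constructive error criterion matters precisely in the usual case where no such closed form exists:

    import numpy as np

    def kl_bregman_distance(x, y):
        # Bregman distance induced by h(x) = sum_i x_i * log(x_i),
        # the classical Bregman function on the positive orthant.
        return np.sum(x * np.log(x / y) - x + y)

    def entropic_prox_linear(a, xbar, c):
        # Exact generalized proximal step
        #   argmin_x <a, x> + (1/c) * D_h(x, xbar)
        # for the KL distance above; the optimality condition
        # a + (1/c) * log(x / xbar) = 0 gives the closed form below.
        return xbar * np.exp(-c * a)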


Optimization | 1997

A variant of Korpelevich's method for variational inequalities with a new search strategy

Alfredo N. Iusem; B. F. Svaiter

We present a variant of Korpelevich's method for variational inequality problems with monotone operators. Instead of a fixed and exogenously given stepsize, possible only when a Lipschitz constant for the operator exists and is known beforehand, we find an appropriate stepsize in each iteration through an Armijo-type search. Differently from other similar schemes, we perform only two projections onto the feasible set in each iteration, rather than one projection for each tentative step during the search, which represents a considerable saving when the projection is computationally expensive. A full convergence analysis is given, without any Lipschitz continuity assumption.
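
For reference, the fixed-step baseline that the paper's Armijo-type search replaces: a minimal sketch of Korpelevich's extragradient method, valid when F is L-Lipschitz and step < 1/L (the callables `F` and `proj_C` are assumptions):

    import numpy as np

    def korpelevich(F, proj_C, x0, step, n_iter=1000):
        # Classical extragradient iteration: a predictor step and a corrector
        # step, i.e. exactly two projections onto the feasible set per iteration.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            y = proj_C(x - step * F(x))   # predictor
            x = proj_C(x - step * F(y))   # corrector
        return x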


Set-Valued Analysis | 1997

Enlargement of Monotone Operators with Applications to Variational Inequalities

Regina S. Burachik; Alfredo N. Iusem; B. F. Svaiter

Given a point-to-set operator T, we introduce the operator T^ε defined as T^ε(x) = {u : ⟨u − v, x − y⟩ ≥ −ε for all y ∈ ℝⁿ, v ∈ T(y)}. When T is maximal monotone, T^ε inherits most properties of the ε-subdifferential, e.g. it is bounded on bounded sets, T^ε(x) contains the image through T of a sufficiently small ball around x, etc. We prove these and other relevant properties of T^ε, and apply it to generate an inexact proximal point method with generalized distances for variational inequalities, whose subproblems consist of solving problems of the form 0 ∈ H^ε(x), while the subproblems of the exact method are of the form 0 ∈ H(x). If ε_k is the coefficient used in the kth iteration and the ε_k are summable, then the sequence generated by the inexact algorithm is still convergent to a solution of the original problem. If the original operator is well behaved enough, then the solution set of each subproblem contains a ball around the exact solution, and so each subproblem can be finitely solved.
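
In display form, the definition and the sense in which T^ε generalizes the ε-subdifferential (the inclusion below can be strict; for T maximal monotone, taking ε = 0 recovers T itself):

    \[
      T^{\varepsilon}(x) = \bigl\{\, u : \langle u - v,\, x - y \rangle \ge -\varepsilon
        \ \text{ for all } y \in \mathbb{R}^n,\ v \in T(y) \,\bigr\}
    \]
    \[
      \partial_{\varepsilon} f(x) \subseteq (\partial f)^{\varepsilon}(x),
      \qquad T^{0}(x) = T(x) \ \text{ for } T \ \text{maximal monotone.}
    \]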


SIAM Journal on Optimization | 2009

Newton's Method for Multiobjective Optimization

Jörg Fliege; L. M. Graña Drummond; B. F. Svaiter

We propose an extension of Newton's method for unconstrained multiobjective optimization (multicriteria optimization). This method does not use a priori chosen weighting factors or any other form of a priori ranking or ordering information for the different objective functions. Newton's direction at each iterate is obtained by minimizing the max-ordering scalarization of the variations on the quadratic approximations of the objective functions. The objective functions are assumed to be twice continuously differentiable and locally strongly convex. Under these hypotheses, the method, as in the classical case, is locally superlinearly convergent to optimal points. Again as in the scalar case, if the second derivatives are Lipschitz continuous, the rate of convergence is quadratic. Our convergence analysis uses a Kantorovich-like technique. As a byproduct, existence of optima is obtained under semilocal assumptions.
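
For two objectives, the direction subproblem min_d max_i { ∇f_i(x)ᵀd + ½ dᵀ∇²f_i(x) d } can be solved through its one-dimensional dual, which is legitimate because both Hessians are positive definite under the local strong convexity assumption. A minimal sketch, assuming gradients g1, g2 and Hessians H1, H2 at the current point are given:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def newton_direction(g1, H1, g2, H2):
        # Dual of the min-max Newton subproblem over lam in [0, 1]:
        # maximize -0.5 * g(lam)' H(lam)^{-1} g(lam), where
        # g(lam) = lam*g1 + (1-lam)*g2 and H(lam) = lam*H1 + (1-lam)*H2.
        def neg_dual(lam):
            g = lam * g1 + (1 - lam) * g2
            H = lam * H1 + (1 - lam) * H2
            return 0.5 * g @ np.linalg.solve(H, g)
        lam = minimize_scalar(neg_dual, bounds=(0.0, 1.0), method='bounded').x
        g = lam * g1 + (1 - lam) * g2
        H = lam * H1 + (1 - lam) * H2
        return -np.linalg.solve(H, g)   # Newton direction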


Mathematics of Operations Research | 1994

Entropy-like proximal methods in convex programming

Alfredo N. Iusem; B. F. Svaiter; Marc Teboulle

We study an extension of the proximal method for convex programming, where the quadratic regularization kernel is substituted by a class of convex statistical distances, called φ-divergences, which are typically entropy-like in form. After establishing several basic properties of these quasi-distances, we present a convergence analysis of the resulting entropy-like proximal algorithm. Applying this algorithm to the dual of a convex program, we recover a wide class of nonquadratic multiplier methods and prove their convergence.
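
The classical instance recovered on the dual is the exponential multiplier method. A minimal sketch of one outer iteration; the helper `argmin_augmented_lagrangian` is a hypothetical placeholder for the primal minimization of a convex program with constraints g(x) ≤ 0:

    import numpy as np

    def exponential_multiplier_step(argmin_augmented_lagrangian, g, lam, c):
        # One outer iteration: minimize the exponential augmented Lagrangian
        #   f(x) + (1/c) * sum_i lam_i * exp(c * g_i(x))
        # in x, then update the multipliers multiplicatively. The dual update
        # corresponds to an entropic (KL) proximal step on the dual problem.
        x = argmin_augmented_lagrangian(lam, c)
        lam_new = lam * np.exp(c * np.asarray(g(x)))
        return x, lam_new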

Collaboration


B. F. Svaiter's most frequent collaborators and their affiliations.

Top Co-Authors

Alfredo N. Iusem (Instituto Nacional de Matemática Pura e Aplicada)
M. Marques Alves (Instituto Nacional de Matemática Pura e Aplicada)
Mikhail V. Solodov (Instituto Nacional de Matemática Pura e Aplicada)
Renato D. C. Monteiro (Georgia Institute of Technology)
L. M. Graña Drummond (Federal University of Rio de Janeiro)
Regina Sandra Burachik (Federal University of Rio de Janeiro)
O. P. Ferreira (Universidade Federal de Goiás)