

Publication


Featured research published by Alfredo N. Iusem.


Archive | 2000

Totally convex functions for fixed points computation and infinite dimensional optimization

Dan Butnariu; Alfredo N. Iusem

Introduction. 1. Totally Convex Functions. 2. Computation of Fixed Points. 3. Infinite Dimensional Optimization. Bibliography. Index.


Optimization | 1997

A variant of Korpelevich’s method for variational inequalities with a new search strategy

Alfredo N. Iusem; B. F. Svaiter

We present a variant of Korpelevich’s method for variational inequality problems with monotone operators. Instead of a fixed and exogenously given stepsize, possible only when a Lipschitz constant for the operator exists and is known beforehand, we find an appropriate stepsize in each iteration through an Armijo-type search. Unlike other similar schemes, we perform only two projections onto the feasible set in each iteration, rather than one projection for each tentative step during the search, which represents a considerable saving when the projection is computationally expensive. A full convergence analysis is given, without any Lipschitz continuity assumption.
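The two-projection scheme described above can be illustrated with a simplified backtracking extragradient sketch in Python. All names here are illustrative, and the backtracking test used below is the standard Lipschitz-surrogate condition, not the paper's exact search strategy (which avoids projecting at every trial step):

```python
import numpy as np

def extragradient(F, proj, x0, beta0=1.0, nu=0.9, theta=0.5,
                  tol=1e-10, max_iter=5000):
    """Extragradient method for VI(F, C): find x* in C with
    <F(x*), y - x*> >= 0 for all y in C.  `proj` projects onto C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = F(x)
        beta = beta0
        # Backtrack until beta * ||F(z) - F(x)|| <= nu * ||z - x||.
        while True:
            z = proj(x - beta * g)
            if beta * np.linalg.norm(F(z) - g) <= nu * np.linalg.norm(z - x):
                break
            beta *= theta
        if np.linalg.norm(z - x) < tol:   # z ~ x: approximate solution found
            return z
        x = proj(x - beta * F(z))         # second projection of the iteration
    return x

# Monotone affine operator whose VI solution on the box [0,1]^2
# is the interior point x* = (0.3, 0.6).
A = np.array([[1.0, 2.0], [-2.0, 1.0]])   # symmetric part = I, so F is monotone
x_star = np.array([0.3, 0.6])
F = lambda x: A @ (x - x_star)
proj = lambda x: np.clip(x, 0.0, 1.0)

x = extragradient(F, proj, np.array([1.0, 0.0]))
```

Since the solution lies in the interior of the box, the iterates converge to x* and the returned point agrees with it to high accuracy.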


Mathematical Programming | 1998

An interior point method with Bregman functions for the variational inequality problem with paramonotone operators

Yair Censor; Alfredo N. Iusem; Stavros A. Zenios

We present an algorithm for the variational inequality problem on convex sets with nonempty interior. The use of Bregman functions whose zone is the convex set allows for the generation of a sequence contained in the interior, without taking explicitly into account the constraints which define the convex set. We establish full convergence to a solution with minimal conditions upon the monotone operator F, weaker than strong monotonicity or Lipschitz continuity, for instance, and including cases where the solution need not be unique. We apply our algorithm to several relevant classes of convex sets, including orthants, boxes, polyhedra and balls, for which Bregman functions are presented which give rise to explicit iteration formulae, up to the determination of two scalar stepsizes, which can be found through finite search procedures.
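As an illustration of the Bregman machinery used above (a sketch with hypothetical names, not the paper's formulas): the negative-entropy kernel h(x) = Σ xᵢ log xᵢ − xᵢ has the positive orthant as its zone, and its induced Bregman distance is the Kullback–Leibler divergence:

```python
import numpy as np

def bregman_distance(h, grad_h, x, y):
    """D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>."""
    return h(x) - h(y) - grad_h(y) @ (x - y)

# Negative-entropy kernel; its zone is the open positive orthant.
h      = lambda x: np.sum(x * np.log(x) - x)
grad_h = lambda x: np.log(x)

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.4, 0.4, 0.2])

d  = bregman_distance(h, grad_h, x, y)
kl = np.sum(x * np.log(x / y) - x + y)   # Kullback-Leibler divergence
```

Here d equals kl, vanishes at x = y, and is positive otherwise, which is what lets such a kernel act as both regularizer and interior barrier.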


Set-valued Analysis | 1997

Enlargement of Monotone Operators with Applications to Variational Inequalities

Regina S. Burachik; Alfredo N. Iusem; B. F. Svaiter

Given a point-to-set operator T, we introduce the operator Tε defined as Tε(x) = {u : 〈u − v, x − y〉 ≥ −ε for all y ∈ Rⁿ, v ∈ T(y)}. When T is maximal monotone, Tε inherits most properties of the ε-subdifferential, e.g. it is bounded on bounded sets, Tε(x) contains the image through T of a sufficiently small ball around x, etc. We prove these and other relevant properties of Tε, and apply it to generate an inexact proximal point method with generalized distances for variational inequalities, whose subproblems consist of solving problems of the form 0 ∈ Hε(x), while the subproblems of the exact method are of the form 0 ∈ H(x). If εk is the coefficient used in the kth iteration and the εk's are summable, then the sequence generated by the inexact algorithm is still convergent to a solution of the original problem. If the original operator is well behaved enough, then the solution set of each subproblem contains a ball around the exact solution, and so each subproblem can be finitely solved.


Mathematics of Operations Research | 1994

Entropy-like proximal methods in convex programming

Alfredo N. Iusem; B. F. Svaiter; Marc Teboulle

We study an extension of the proximal method for convex programming, where the quadratic regularization kernel is substituted by a class of convex statistical distances, called φ-divergences, which are typically entropy-like in form. After establishing several basic properties of these quasi-distances, we present a convergence analysis of the resulting entropy-like proximal algorithm. Applying this algorithm to the dual of a convex program, we recover a wide class of nonquadratic multiplier methods and prove their convergence.
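A minimal sketch of one such scheme, assuming the Kullback–Leibler kernel and a separable smooth objective (names are illustrative): each iteration solves, coordinatewise, the optimality condition f′(t) + (1/λ) log(t/xₖ) = 0 by bisection, so the iterates remain in the open positive orthant:

```python
import numpy as np

def kl_prox_step(a_i, xk_i, lam=1.0):
    """Solve  t - a_i + (1/lam) * log(t / xk_i) = 0  for t > 0 by bisection.
    This is the optimality condition of the coordinatewise subproblem
        min_{t>0}  0.5*(t - a_i)**2 + (1/lam)*(t*log(t/xk_i) - t + xk_i)."""
    g = lambda t: t - a_i + (np.log(t) - np.log(xk_i)) / lam
    lo, hi = 1e-300, abs(a_i) + xk_i + 10.0   # g(lo) < 0 < g(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Minimize f(x) = 0.5*||x - a||^2 over the positive orthant.
# The entropic proximal iterates approach x* = (2, 0) while staying
# strictly positive: the kernel acts as a barrier on the constraint.
a = np.array([2.0, -1.0])
x = np.array([1.0, 1.0])
for _ in range(60):
    x = np.array([kl_prox_step(a_i, x_i) for a_i, x_i in zip(a, x)])
```

The second coordinate, whose unconstrained minimizer lies outside the orthant, decays geometrically toward the boundary point 0 without ever reaching it.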


Mathematical Programming | 1998

On the projected subgradient method for nonsmooth convex optimization in a Hilbert space

Ya. I. Alber; Alfredo N. Iusem; Mikhail V. Solodov

We consider the method for constrained convex optimization in a Hilbert space consisting of a step in the direction opposite to an εk-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes αk are exogenously given, satisfying Σ_{k=0}^∞ αk = ∞ and Σ_{k=0}^∞ αk² < ∞, and εk is chosen so that εk ≤ μαk for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results.
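In finite dimensions the scheme reads as follows (a sketch with illustrative names; the stepsizes αk = 1/(k+1) satisfy the divergent-sum and square-summable conditions, and for simplicity we take εk = 0, i.e. exact subgradients):

```python
import numpy as np

def projected_subgradient(subgrad, proj, x0, n_iter=2000):
    """x_{k+1} = P_C(x_k - alpha_k * g_k),  g_k a subgradient at x_k,
    with alpha_k = 1/(k+1): sum alpha_k = inf, sum alpha_k^2 < inf."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        alpha = 1.0 / (k + 1)
        x = proj(x - alpha * subgrad(x))
    return x

# Nonsmooth objective f(x) = ||x - a||_1 over the box C = [0,1]^2;
# with a = (0.5, 2) the constrained minimizer is (0.5, 1).
a = np.array([0.5, 2.0])
subgrad = lambda x: np.sign(x - a)     # a subgradient of the l1 objective
proj = lambda x: np.clip(x, 0.0, 1.0)

x = projected_subgradient(subgrad, proj, np.array([0.0, 0.0]))
```

Note that f is not differentiable at the solution, yet the diminishing-stepsize rule drives the iterates to the minimizer, matching the nonsmooth setting of the analysis above.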


Siam Journal on Optimization | 1998

A Generalized Proximal Point Algorithm for the Variational Inequality Problem in a Hilbert Space

Regina S. Burachik; Alfredo N. Iusem

We consider a generalized proximal point method for solving variational inequality problems with monotone operators in a Hilbert space. It differs from the classical proximal point method (as discussed by Rockafellar for the problem of finding zeroes of monotone operators) in the use of generalized distances, called Bregman distances, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set so that the method becomes an interior point one. Under appropriate assumptions on the Bregman distance and the monotone operator we prove that the sequence converges (weakly) if and only if the problem has solutions, in which case the weak limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend similar previous results for the proximal point method with Bregman distances which dealt only with the finite dimensional case and which applied only to convex optimization problems or to finding zeroes of monotone operators, which are particular cases of variational inequality problems.


Nonlinear Analysis-theory Methods & Applications | 2003

New existence results for equilibrium problems

Alfredo N. Iusem; Wilfredo Sosa

We consider equilibrium problems in the framework of the formulation proposed by Blum and Oettli, which includes variational inequalities, Nash equilibria in noncooperative games, and vector optimization problems, for instance, as particular cases. We establish new sufficient and/or necessary conditions for existence of solutions of such problems. Our results are based upon the relation between equilibrium problems and certain auxiliary convex feasibility problems, together with extensions to equilibrium problems of gap functions for variational inequalities. Then we apply our results to some particular instances of equilibrium problems, obtaining results which include, among others, a new lemma of the alternative for convex optimization problems.


Optimization | 2003

Iterative Algorithms for Equilibrium Problems

Alfredo N. Iusem; Wilfredo Sosa

We consider equilibrium problems in the framework of the formulation proposed by Blum and Oettli, which includes variational inequalities, Nash equilibria in noncooperative games, and vector optimization problems, for instance, as particular cases. We show that such problems are particular instances of convex feasibility problems with infinitely many convex sets, but with additional structure, so that projection algorithms for convex feasibility can be modified in order to improve their convergence properties, mainly achieving global convergence without either compactness or coercivity assumptions. We present a sequential projections algorithm with an approximately most violated constraint control strategy, and two variants where exact orthogonal projections are replaced by approximate ones, using separating hyperplanes generated by subgradients. We include full convergence analysis of these algorithms.


Optimization | 1995

Full convergence of the steepest descent method with inexact line searches

Regina S. Burachik; L. M. Graña Drummond; Alfredo N. Iusem; B. F. Svaiter

Several finite procedures for determining the step size of the steepest descent method for unconstrained optimization, without performing exact one-dimensional minimizations, have been considered in the literature. The convergence analysis of these methods requires that the objective function have bounded level sets and that its gradient satisfy a Lipschitz condition, in order to establish just stationarity of all cluster points. We consider two such procedures and prove, for a convex objective, convergence of the whole sequence to a minimizer without any level set boundedness assumption and, for one of them, without any Lipschitz condition.
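One standard inexact (Armijo-type) stepsize rule of this kind looks like the following sketch — an illustration of the general idea, not the specific procedures analyzed in the paper:

```python
import numpy as np

def steepest_descent_armijo(f, grad, x0, s=1.0, sigma=1e-4, theta=0.5,
                            tol=1e-8, max_iter=10000):
    """Steepest descent with Armijo backtracking: accept the first
    t in {s, s*theta, s*theta^2, ...} satisfying
        f(x - t*g) <= f(x) - sigma * t * ||g||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break
        t = s
        while f(x - t * g) > f(x) - sigma * t * gnorm2:
            t *= theta   # finite backtracking: no exact 1-D minimization
        x = x - t * g
    return x

# Convex quadratic f(x) = 0.5 x^T Q x - b^T x, minimizer x* = Q^{-1} b = (1, 1).
Q = np.diag([1.0, 10.0])
b = np.array([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

x = steepest_descent_armijo(f, grad, np.array([0.0, 0.0]))
```

The backtracking loop terminates in finitely many trials whenever the gradient is nonzero, which is what makes the procedure implementable without a line-search oracle.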

Collaboration


Dive into Alfredo N. Iusem's collaborations.

Top Co-Authors

B. F. Svaiter
Instituto Nacional de Matemática Pura e Aplicada

Regina S. Burachik
University of South Australia

Alvaro R. De Pierro
State University of Campinas

Elena Resmerita
Austrian Academy of Sciences

Jefferson G. Melo
Universidade Federal de Goiás

L. M. Graña Drummond
Federal University of Rio de Janeiro