Publication


Featured research published by Regina S. Burachik.


Set-Valued Analysis | 1997

Enlargement of Monotone Operators with Applications to Variational Inequalities

Regina S. Burachik; Alfredo N. Iusem; B. F. Svaiter

Given a point-to-set operator T, we introduce the operator Tε defined as Tε(x) = {u : 〈u − v, x − y〉 ≥ −ε for all y ∈ Rⁿ, v ∈ T(y)}. When T is maximal monotone, Tε inherits most properties of the ε-subdifferential; e.g., it is bounded on bounded sets, Tε(x) contains the image through T of a sufficiently small ball around x, etc. We prove these and other relevant properties of Tε, and apply it to generate an inexact proximal point method with generalized distances for variational inequalities, whose subproblems consist of solving problems of the form 0 ∈ Hε(x), while the subproblems of the exact method are of the form 0 ∈ H(x). If εk is the coefficient used in the kth iteration and the εk's are summable, then the sequence generated by the inexact algorithm still converges to a solution of the original problem. If the original operator is well behaved enough, then the solution set of each subproblem contains a ball around the exact solution, and so each subproblem can be solved in finitely many steps.
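As a concrete illustration (ours, not the paper's): for the maximal monotone operator T(x) = x on the real line, the defining inequality can be minimized over y in closed form, giving Tε(x) = [x − 2√ε, x + 2√ε]. A brute-force grid check of the definition confirms this:

```python
import math

def in_enlargement(u, x, eps, ys=None):
    """Brute-force test of u ∈ Tε(x) for T(y) = y on the real line:
    check <u - v, x - y> >= -eps with v = T(y) = y over a grid of y's."""
    if ys is None:
        ys = [i * 0.01 for i in range(-1000, 1001)]  # y in [-10, 10]
    return all((u - y) * (x - y) >= -eps for y in ys)

x, eps = 1.0, 0.25
radius = 2 * math.sqrt(eps)                       # Tε(x) = [x - 1, x + 1] here
print(in_enlargement(x + 0.9 * radius, x, eps))   # inside  -> True
print(in_enlargement(x + 1.1 * radius, x, eps))   # outside -> False
```

Since (u − y)(x − y) is minimized at y = (u + x)/2 with value −(u − x)²/4, the condition is exactly |u − x| ≤ 2√ε, which the grid check reproduces.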


SIAM Journal on Optimization | 1998

A Generalized Proximal Point Algorithm for the Variational Inequality Problem in a Hilbert Space

Regina S. Burachik; Alfredo N. Iusem

We consider a generalized proximal point method for solving variational inequality problems with monotone operators in a Hilbert space. It differs from the classical proximal point method (as discussed by Rockafellar for the problem of finding zeroes of monotone operators) in the use of generalized distances, called Bregman distances, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set so that the method becomes an interior point one. Under appropriate assumptions on the Bregman distance and the monotone operator we prove that the sequence converges (weakly) if and only if the problem has solutions, in which case the weak limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend similar previous results for the proximal point method with Bregman distances which dealt only with the finite dimensional case and which applied only to convex optimization problems or to finding zeroes of monotone operators, which are particular cases of variational inequality problems.
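A minimal one-dimensional sketch of the interior-point effect (our illustration; the objective, regularization parameter, and inner solver are assumptions, not from the paper): with the entropy h(x) = x log x, the Bregman distance D(x, y) = x log(x/y) − x + y blows up at the boundary x = 0, so each proximal subproblem's solution stays strictly positive.

```python
import math

def bregman_prox_step(x_k, f_prime, lam=1.0, lo=1e-12, hi=1e6):
    """Solve the subproblem optimality condition
    f'(x) + lam * (log x - log x_k) = 0 by bisection on (0, inf).
    The log term forces the solution to remain in the interior x > 0."""
    g = lambda x: f_prime(x) + lam * (math.log(x) - math.log(x_k))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

f_prime = lambda x: 2.0 * (x - 2.0)   # f(x) = (x - 2)^2, minimizer x* = 2
x = 0.5
for _ in range(60):
    x = bregman_prox_step(x, f_prime)
print(round(x, 4))   # -> 2.0
```

Every iterate is positive by construction, even though the constraint x > 0 is never handled explicitly; that is the penalization role the abstract describes.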


Optimization | 1995

Full convergence of the steepest descent method with inexact line searches

Regina S. Burachik; L. M. Graña Drummond; Alfredo N. Iusem; B. F. Svaiter

Several finite procedures for determining the step size of the steepest descent method for unconstrained optimization, without performing exact one-dimensional minimizations, have been considered in the literature. The convergence analysis of these methods requires that the objective function have bounded level sets and that its gradient satisfy a Lipschitz condition, merely to establish stationarity of all cluster points. We consider two such procedures and prove, for a convex objective, convergence of the whole sequence to a minimizer without any level-set boundedness assumption and, for one of them, without any Lipschitz condition.
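One standard finite step-size procedure of this kind is Armijo backtracking (a generic sketch; the two procedures analyzed in the paper may differ in detail):

```python
def armijo_steepest_descent(f, grad, x0, beta=0.5, sigma=1e-4, iters=500):
    """Steepest descent with Armijo backtracking: accept the first step
    t in {1, beta, beta^2, ...} achieving sufficient decrease."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < 1e-18:          # gradient (numerically) zero: stop
            break
        fx, t = f(x), 1.0
        while f([xi - t * gi for xi, gi in zip(x, g)]) > fx - sigma * t * gnorm2:
            t *= beta               # backtrack: finitely many trials suffice
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Convex quadratic f(x, y) = x^2 + 3 y^2, minimizer (0, 0).
f = lambda v: v[0] ** 2 + 3 * v[1] ** 2
grad = lambda v: [2 * v[0], 6 * v[1]]
sol = armijo_steepest_descent(f, grad, [4.0, -2.0])
print(sol)   # close to the minimizer (0, 0)
```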


SIAM Journal on Control and Optimization | 2000

A Proximal Point Method for the Variational Inequality Problem in Banach Spaces

Regina S. Burachik; Susana Scheimberg

In this paper we prove well-definedness and weak convergence of the generalized proximal point method when applied to the variational inequality problem in reflexive Banach spaces. The proximal version we consider makes use of Bregman functions, whose original definition for finite dimensional spaces has here been properly extended to our more general framework.


Computational Optimization and Applications | 2000

Iterative Methods of Solving Stochastic Convex Feasibility Problems and Applications

Dan Butnariu; Alfredo N. Iusem; Regina S. Burachik

The stochastic convex feasibility problem (SCFP) is the problem of finding almost common points of measurable families of closed convex subsets in reflexive and separable Banach spaces. In this paper we prove convergence criteria for two iterative algorithms devised to solve SCFPs. To do that, we first analyze the concepts of Bregman projection and Bregman function with emphasis on the properties of their local moduli of convexity. The areas of applicability of the algorithms we present include optimization problems, linear operator equations, inverse problems, etc., which can be represented as SCFPs and solved as such. Examples showing how these algorithms can be implemented are also given.
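With h = ½‖·‖², Bregman projections reduce to ordinary orthogonal projections, so a two-set cyclic-projection scheme (our simplified illustration, not the paper's algorithms) looks like this:

```python
def project_halfspace(x, a, b):
    """Orthogonal projection onto the halfspace {z : <a, z> <= b}."""
    s = sum(ai * xi for ai, xi in zip(a, x)) - b
    if s <= 0:
        return list(x)              # already feasible
    n2 = sum(ai * ai for ai in a)
    return [xi - s * ai / n2 for xi, ai in zip(x, a)]

def project_ball(x, c, r):
    """Orthogonal projection onto the closed ball of radius r centered at c."""
    d = [xi - ci for xi, ci in zip(x, c)]
    norm = sum(di * di for di in d) ** 0.5
    if norm <= r:
        return list(x)
    return [ci + r * di / norm for ci, di in zip(c, d)]

# Cyclic projections: find a point in {x + y <= 1} ∩ B((0, 0), 1).
x = [3.0, 2.0]
for _ in range(200):
    x = project_halfspace(x, [1.0, 1.0], 1.0)
    x = project_ball(x, [0.0, 0.0], 1.0)
print(x)   # -> [1.0, 0.0], a point in the intersection
```

The SCFP setting generalizes this picture to measurable families of sets and to projections taken with respect to a Bregman function.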


Archive | 1999

Bundle Methods for Maximal Monotone Operators

Regina S. Burachik; Claudia A. Sagastizábal; B. F. Svaiter

To find a zero of a maximal monotone operator T we use an enlargement Tε playing the role of the ε-subdifferential in nonsmooth optimization. We define a convergent and implementable algorithm which combines projection ideas with bundle-like techniques and a transportation formula. More precisely, we first separate the current iterate xk from the zeros of T by computing the direction of minimum norm in a polyhedral approximation of Tεk(xk). Then suitable elements defining such polyhedral approximations are selected following a bundle strategy. Finally, the next iterate is computed by projecting xk onto the corresponding separating hyperplane.
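A stripped-down cousin of this separate-and-project idea (a sketch under simplifying assumptions: a single forward step instead of a bundle, and an explicit affine monotone operator) can be written as:

```python
def T(z):
    """A maximal monotone operator on R^2 with unique zero at the origin."""
    x, y = z
    return (x + y, y - x)

def hyperplane_projection_step(x, alpha=0.5):
    """Forward step to y = x - alpha*T(x), then project x onto the hyperplane
    {z : <T(y), z - y> = 0}, which separates x from the zeros of T."""
    tx = T(x)
    y = (x[0] - alpha * tx[0], x[1] - alpha * tx[1])
    v = T(y)
    num = v[0] * (x[0] - y[0]) + v[1] * (x[1] - y[1])
    den = v[0] ** 2 + v[1] ** 2
    if den == 0 or num <= 0:
        return y                    # y is (close to) a zero of T
    lam = num / den
    return (x[0] - lam * v[0], x[1] - lam * v[1])

x = (2.0, 1.0)
for _ in range(60):
    x = hyperplane_projection_step(x)
print(x)   # approaches the zero (0, 0)
```

Monotonicity guarantees the zeros of T lie on the far side of the hyperplane, so each projection brings the iterate closer to the solution set (Fejér monotonicity); the bundle machinery in the paper replaces the single forward step with a polyhedral model of Tεk.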


SIAM Journal on Optimization | 2007

Abstract Convexity and Augmented Lagrangians

Regina S. Burachik; Alexander M. Rubinov

The ultimate goal of this paper is to demonstrate that abstract convexity provides a natural language and a suitable framework for the examination of zero duality gap properties and exact multipliers of augmented Lagrangians. We study augmented Lagrangians in a very general setting and formulate the main definitions and facts describing the augmented Lagrangian theory in terms of abstract convexity tools. We illustrate our duality scheme with an application to stochastic semi-infinite optimization.
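The paper's setting is far more general, but the classical quadratic augmented Lagrangian with multiplier updates (a familiar special case, used here purely as illustration) already shows the mechanics of exact multipliers:

```python
def augmented_lagrangian():
    """Method of multipliers for: min x^2 + y^2  s.t.  x + y = 1.
    L(x, y, lam, r) = x^2 + y^2 + lam*(x + y - 1) + (r/2)*(x + y - 1)^2.
    By symmetry the inner minimizer has x = y = (r - lam) / (2 + 2r)."""
    lam, r = 0.0, 1.0
    for _ in range(50):
        x = (r - lam) / (2.0 + 2.0 * r)   # exact inner minimization
        c = 2.0 * x - 1.0                 # constraint violation x + y - 1
        lam += r * c                      # multiplier update
    return x, lam

x, lam = augmented_lagrangian()
print(x, lam)   # -> approx 0.5 and -1.0 (the exact multiplier)
```

Here the duality gap is zero and the multiplier converges to its exact value λ* = −1; the abstract-convexity framework of the paper characterizes when such behavior persists for much more general augmenting terms.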


Abstract and Applied Analysis | 1997

A Proximal Point Method for Nonsmooth Convex Optimization Problems in Banach Spaces

Ya. I. Alber; Regina S. Burachik; Alfredo N. Iusem

In this paper we show the weak convergence and stability of the proximal point method when applied to the constrained convex optimization problem in uniformly convex and uniformly smooth Banach spaces. In addition, we establish a nonasymptotic estimate of the convergence rate of the sequence of functional values for the unconstrained case. This estimate depends on a geometric characteristic of the dual Banach space, namely its modulus of convexity. We apply a new technique which includes Banach space geometry, estimates of duality mappings, nonstandard Lyapunov functionals and generalized projection operators in Banach spaces.
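In the Hilbert-space special case the proximal step for simple nonsmooth objectives has a closed form; e.g., for f(x) = |x| (our illustration, not an example from the paper) it is soft-thresholding:

```python
def prox_abs(v, t):
    """prox of t*|.| at v: argmin_x |x| + (1/(2t)) * (x - v)^2,
    i.e. soft-thresholding with threshold t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

# Proximal point iterations on f(x) = |x| reach the minimizer x* = 0
# exactly once the iterate enters [-t, t].
x, t = 5.0, 1.0
for _ in range(10):
    x = prox_abs(x, t)
print(x)   # -> 0.0 (reached after 5 steps)
```

The Banach-space method in the paper replaces the squared distance by a Lyapunov functional built from the duality mapping, but the iteration has the same proximal structure.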


Computational & Applied Mathematics | 2009

An inexact interior point proximal method for the variational inequality problem

Regina S. Burachik; Jurandir O. Lopes; Geci J.P. Da Silva

We propose an infeasible interior proximal method for solving variational inequality problems with maximal monotone operators and linear constraints. The interior proximal method proposed by Auslender, Teboulle and Ben-Tiba [3] is a proximal method using a distance-like barrier function and it has a global convergence property under mild assumptions. However, this method is applicable only to problems whose feasible region has nonempty interior. The algorithm we propose is applicable to problems whose feasible region may have empty interior. Moreover, a new kind of inexact scheme is used. We present a full convergence analysis for our algorithm.


Journal of Optimization Theory and Applications | 2014

A New Scalarization Technique to Approximate Pareto Fronts of Problems with Disconnected Feasible Sets

Regina S. Burachik; C. Y. Kaya; M. M. Rizvi

We introduce and analyze a novel scalarization technique and an associated algorithm for generating an approximation of the Pareto front (i.e., the efficient set) of nonlinear multiobjective optimization problems. Our approach is applicable to nonconvex problems, in particular to those with disconnected Pareto fronts and disconnected domains (i.e., disconnected feasible sets). We establish the theoretical properties of our new scalarization technique and present an algorithm for its implementation. By means of test problems, we illustrate the strengths and advantages of our approach over existing scalarization techniques such as those derived from the Pascoletti–Serafini method, as well as the popular weighted-sum method.
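For contrast, the weighted-sum baseline mentioned above can be sketched as follows (our illustration on an assumed convex biobjective problem with a closed-form scalarized minimizer; on nonconvex or disconnected fronts this sweep misses Pareto points, which is what motivates techniques like the paper's):

```python
def weighted_sum_pareto(n=5):
    """Weighted-sum scalarization for the convex biobjective problem
    min (f1, f2) with f1(x) = x^2 and f2(x) = (x - 2)^2.
    For weight w, minimizing w*f1 + (1-w)*f2 gives x(w) = 2 * (1 - w)."""
    points = []
    for i in range(n + 1):
        w = i / n
        x = 2.0 * (1.0 - w)          # closed-form argmin of the scalarization
        points.append((x * x, (x - 2.0) ** 2))
    return points

for f1, f2 in weighted_sum_pareto():
    print(round(f1, 3), round(f2, 3))   # sampled points on the Pareto front
```

Sweeping w from 0 to 1 traces the whole front here because the problem is convex; a scalarization suited to disconnected fronts must instead probe the objective space more directly.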

Collaboration


Dive into Regina S. Burachik's collaboration.

Top Co-Authors

Alfredo N. Iusem | Instituto Nacional de Matemática Pura e Aplicada
C. Yalçın Kaya | University of South Australia
M. M. Rizvi | University of South Australia
Jefferson G. Melo | Universidade Federal de Goiás
Juan Enrique Martínez-Legaz | Autonomous University of Barcelona
B. F. Svaiter | Instituto Nacional de Matemática Pura e Aplicada
Heinz H. Bauschke | University of British Columbia
Liangjin Yao | University of Newcastle