
Publication


Featured research published by Mikhail V. Solodov.


Mathematical Programming | 2000

Forcing strong convergence of proximal point iterations in a Hilbert space

Mikhail V. Solodov; B. F. Svaiter

This paper concerns convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible since it amounts, at most, to solving a linear system of two equations in two unknowns.
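
The halfspace projection the abstract mentions is easy to make concrete. Below is a minimal NumPy sketch, assuming a finite-dimensional affine monotone operator T(y) = Ay + b with A positive semidefinite, so the proximal subproblem is a single linear solve; the function names and defaults are ours, and the paper itself allows inexact subproblem solutions in a Hilbert space.

```python
import numpy as np

def project_two_halfspaces(x, a1, b1, a2, b2):
    # Project x onto {z : <a1, z> <= b1} intersected with {z : <a2, z> <= b2}.
    # As the abstract notes, the worst case is a 2x2 linear system.
    if a1 @ x <= b1 and a2 @ x <= b2:
        return x
    if a1 @ a1 > 0:
        p = x - max(0.0, a1 @ x - b1) / (a1 @ a1) * a1  # first halfspace only
        if a2 @ p <= b2 + 1e-12:
            return p
    if a2 @ a2 > 0:
        p = x - max(0.0, a2 @ x - b2) / (a2 @ a2) * a2  # second halfspace only
        if a1 @ p <= b1 + 1e-12:
            return p
    # Both constraints active: z = x - l1*a1 - l2*a2 with <ai, z> = bi,
    # so solve the 2x2 Gram system for the multipliers l1, l2.
    G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
    r = np.array([a1 @ x - b1, a2 @ x - b2])
    l1, l2 = np.linalg.solve(G, r)
    return x - l1 * a1 - l2 * a2

def hybrid_prox(A, b, x0, c=1.0, iters=100, tol=1e-10):
    # Proximal step on T(y) = A y + b, then project x0 (not x_k) onto the
    # intersection of the halfspace separating x_k from the solution set
    # and the halfspace {z : <x0 - x_k, z - x_k> <= 0}.
    x0 = np.asarray(x0, float)
    x = x0.copy()
    I = np.eye(len(x))
    for _ in range(iters):
        y = np.linalg.solve(I + c * A, x - c * b)  # exact prox subproblem
        v = A @ y + b                              # v = T(y)
        if np.linalg.norm(v) < tol:
            return y
        w = x0 - x
        x = project_two_halfspaces(x0, v, v @ y, w, w @ x)
    return x
```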


SIAM Journal on Control and Optimization | 1999

A New Projection Method for Variational Inequality Problems

Mikhail V. Solodov; B. F. Svaiter

We propose a new projection algorithm for solving the variational inequality problem, where the underlying function is continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudomonotone). The method is simple and admits a nice geometric interpretation. It consists of two steps. First, we construct an appropriate hyperplane which strictly separates the current iterate from the solutions of the problem. This procedure requires a single projection onto the feasible set and employs an Armijo-type linesearch along a feasible direction. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the halfspace containing the solution set. Thus, in contrast with most other projection-type methods, only two projection operations per iteration are needed. The method is shown to be globally convergent to a solution of the variational inequality problem under minimal assumptions. Preliminary computational experience is also reported.
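
A rough Python sketch of this two-projection scheme, under simplifying assumptions of ours: the feasible set is a box (so P_C is a componentwise clamp), and the last step projects onto the separating halfspace and then onto C, a common variant, rather than onto their intersection exactly as in the paper.

```python
import numpy as np

def solve_vi_sketch(F, lo, hi, x0, eta=1.0, sigma=0.1, gamma=0.5,
                    iters=200, tol=1e-8):
    # VI(F, C): find x in C with <F(x), y - x> >= 0 for all y in C,
    # where C = [lo, hi] is a box so that P_C is a clamp.
    proj = lambda z: np.clip(z, lo, hi)
    x = proj(np.asarray(x0, float))
    for _ in range(iters):
        y = proj(x - eta * F(x))            # projection 1
        r = x - y                           # natural residual
        if np.linalg.norm(r) < tol:
            return x
        # Armijo-type linesearch along the feasible direction -r: find t
        # with <F(x - t*r), r> >= sigma * ||r||^2. It terminates because
        # <F(x), r> >= ||r||^2 / eta and sigma < 1/eta.
        t = 1.0
        while F(x - t * r) @ r < sigma * (r @ r):
            t *= gamma
        z = x - t * r                       # defines the separating hyperplane
        g = F(z)
        # step into the halfspace {u : <g, u - z> <= 0}, then back onto C
        x = proj(x - (g @ (x - z)) / (g @ g) * g)   # projection 2
    return x
```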


Mathematical Programming | 1993

Nonlinear complementarity as unconstrained and constrained minimization

Olvi L. Mangasarian; Mikhail V. Solodov

The nonlinear complementarity problem is cast as an unconstrained minimization problem that is obtained from an augmented Lagrangian formulation. The dimensionality of the unconstrained problem is the same as that of the original problem, and the penalty parameter need only be greater than one. Another feature of the unconstrained problem is that it has global minima of zero at precisely all the solution points of the complementarity problem without any monotonicity assumption. If the mapping of the complementarity problem is differentiable, then so is the objective of the unconstrained problem, and its gradient vanishes at all solution points of the complementarity problem. Under assumptions of nondegeneracy and linear independence of gradients of active constraints at a complementarity problem solution, the corresponding global unconstrained minimum point is locally unique. A Wolfe dual to a standard constrained optimization problem associated with the nonlinear complementarity problem is also formulated under a monotonicity and differentiability assumption. Most of the standard duality results are established even though the underlying constrained optimization problem may be nonconvex. Preliminary numerical tests on two small nonmonotone problems from the published literature converged to degenerate or nondegenerate solutions from all attempted starting points in 7 to 28 steps of a BFGS quasi-Newton method for unconstrained optimization.
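
For illustration, the sketch below codes the unconstrained objective as the implicit Lagrangian in the form commonly quoted in the follow-up literature; treat the exact formula as our assumption and consult the paper for the authoritative statement. Consistent with the abstract, the penalty parameter need only exceed one, and the function vanishes precisely at solutions of the complementarity problem.

```python
import numpy as np

def implicit_lagrangian(x, F, alpha=3.0):
    # Unconstrained merit function for NCP(F): find x >= 0 with F(x) >= 0
    # and x . F(x) = 0. Requires alpha > 1; nonnegative everywhere and
    # zero exactly at NCP solutions, without monotonicity assumptions.
    Fx = F(x)
    a = np.maximum(x - alpha * Fx, 0.0)   # (x - alpha*F(x))_+
    b = np.maximum(Fx - alpha * x, 0.0)   # (F(x) - alpha*x)_+
    return x @ Fx + (a @ a - x @ x + b @ b - Fx @ Fx) / (2.0 * alpha)
```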


Set-valued Analysis | 1999

A Hybrid Approximate Extragradient–Proximal Point Algorithm Using the Enlargement of a Maximal Monotone Operator

Mikhail V. Solodov; B. F. Svaiter

We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework.
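
A schematic NumPy version under strong simplifications of ours: T is single-valued and evaluated exactly (so ε_k = 0 and the enlargement T^ε reduces to T itself), and the proximal subproblem is attacked by naive fixed-point iterations that stop as soon as a relative error test of the kind described holds. This sketches the idea, not the paper's algorithm in full generality.

```python
import numpy as np

def hybrid_extragradient_prox(T, x0, c=0.1, sigma=0.5, iters=500, tol=1e-8):
    # Solve the prox subproblem 0 = c*T(y) + y - x only until a *relative*
    # error test holds, then take the extragradient-type step x - c*v.
    x = np.asarray(x0, float)
    for _ in range(iters):
        y = x.copy()
        for _ in range(50):  # naive inner solver (assumes c * Lip(T) <= sigma)
            v = T(y)
            if np.linalg.norm(c * v + y - x) <= sigma * np.linalg.norm(y - x):
                break        # subproblem solved to the relaxed tolerance
            y = x - c * v    # fixed-point step on the prox equation
        v = T(y)
        if np.linalg.norm(v) < tol:
            return y
        x = x - c * v        # extragradient-type update
    return x
```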


Mathematics of Operations Research | 2000

An Inexact Hybrid Generalized Proximal Point Algorithm and Some New Results on the Theory of Bregman Functions

Mikhail V. Solodov; B. F. Svaiter

We present a new Bregman-function-based algorithm which is a modification of the generalized proximal point method for solving the variational inequality problem with a maximal monotone operator. The principal advantage of the presented algorithm is that it allows a more constructive error tolerance criterion in solving the proximal point subproblems. Furthermore, we eliminate the assumption of pseudomonotonicity which was, until now, standard in proving convergence for paramonotone operators. Thus we obtain a convergence result which is new even for exact generalized proximal point methods. Finally, we present some new results on the theory of Bregman functions. For example, we show that the standard assumption of convergence consistency is a consequence of the other properties of Bregman functions, and is therefore superfluous.
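
As one concrete instance of a Bregman (generalized) proximal step, take the negative-entropy Bregman function on the positive orthant. In the sketch below the objective is linearized in the subproblem, a common practical shortcut of ours rather than the paper's error criterion, which makes the step available in closed form.

```python
import numpy as np

# Negative entropy h(x) = sum_i x_i log x_i; its Bregman distance is a
# Kullback-Leibler-type divergence and keeps iterates strictly positive.
h = lambda x: np.sum(x * np.log(x))
grad_h = lambda x: np.log(x) + 1.0

def bregman_distance(x, y):
    # D_h(x, y) = h(x) - h(y) - <grad h(y), x - y>
    return h(x) - h(y) - grad_h(y) @ (x - y)

def entropic_prox_step(grad_f, x, c=0.1):
    # Generalized proximal step min_y { f(y) + (1/c) * D_h(y, x) } with f
    # linearized at x; the optimality condition
    # grad_f(x) + (1/c)(log y - log x) = 0 gives a multiplicative update.
    return x * np.exp(-c * grad_f(x))
```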


Mathematical Programming | 1998

On the projected subgradient method for nonsmooth convex optimization in a Hilbert space

Ya. I. Alber; Alfredo N. Iusem; Mikhail V. Solodov

We consider the method for constrained convex optimization in a Hilbert space consisting of a step in the direction opposite to an ε_k-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes α_k are exogenously given, satisfying Σ_{k=0}^∞ α_k = ∞ and Σ_{k=0}^∞ α_k² < ∞, and ε_k is chosen so that ε_k ⩽ μα_k for some μ > 0. We prove that the sequence generated in this way is weakly convergent to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, and much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results.
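
A minimal NumPy sketch of the iteration, using exact subgradients (so ε_k = 0 ⩽ μα_k holds trivially) and α_k = 1/(k+1), which satisfies both summability conditions; dividing the subgradient by max(1, ‖g‖) is our choice of normalization and may differ from the paper's.

```python
import numpy as np

def projected_subgradient(subgrad, proj, x0, iters=5000):
    # subgrad(x): a subgradient of the objective at x (exact here);
    # proj(z): orthogonal projection onto the feasible set.
    x = np.asarray(x0, float)
    for k in range(iters):
        g = subgrad(x)
        alpha = 1.0 / (k + 1)   # sum alpha_k = inf, sum alpha_k^2 < inf
        x = proj(x - alpha * g / max(1.0, np.linalg.norm(g)))
    return x
```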


Numerical Functional Analysis and Optimization | 2001

A Unified Framework for Some Inexact Proximal Point Algorithms

Mikhail V. Solodov; B. F. Svaiter

We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projection–proximal and extragradient–proximal methods. Specifically, we introduce an even more flexible error tolerance criterion, as well as provide a unified view of these two algorithms. Our general method possesses global convergence and local (super)linear rate of convergence under standard assumptions, while using a constructive approximation criterion suitable for a number of specific implementations. For example, we show that close to a regular solution of a monotone system of semismooth equations, two Newton iterations are sufficient to solve the proximal subproblem within the required error tolerance. Such systems of equations arise naturally when reformulating the nonlinear complementarity problem.
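
The Newton remark is easy to illustrate. Assuming a smooth (rather than merely semismooth) monotone system F(y) = 0 with Jacobian J, a fixed number of Newton steps on the proximal subproblem looks as follows; names and defaults are ours.

```python
import numpy as np

def prox_subproblem_newton(F, J, x, c=1.0, steps=2):
    # Approximate the proximal subproblem 0 = c*F(y) + y - x by Newton's
    # method; per the abstract, near a regular solution two such steps
    # already meet the error tolerance.
    x = np.asarray(x, float)
    y = x.copy()
    for _ in range(steps):
        r = c * F(y) + y - x                              # subproblem residual
        y = y - np.linalg.solve(c * J(y) + np.eye(len(y)), r)
    return y
```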


Optimization Methods & Software | 1994

Serial and parallel backpropagation convergence via nonmonotone perturbed minimization

Olvi L. Mangasarian; Mikhail V. Solodov

A general convergence theorem is proposed for a family of serial and parallel nonmonotone unconstrained minimization methods with perturbations. A principal application of the theorem is to establish convergence of backpropagation (BP), the classical algorithm for training artificial neural networks. Under certain natural assumptions, such as divergence of the sum of the learning rates and convergence of the sum of their squares, it is shown that every accumulation point of the BP iterates is a stationary point of the error function associated with the given set of training examples. The results presented cover serial and parallel BP, as well as modified BP with a momentum term.
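
A minimal sketch of the serial loop the theorem covers, with learning rates μ/k (divergent sum, convergent sum of squares) and an optional momentum term; grad_i and all defaults are illustrative assumptions of ours, not the paper's notation.

```python
import numpy as np

def serial_bp(grad_i, n_terms, x0, mu=0.5, beta=0.5, epochs=200):
    # One training example at a time; grad_i(i, x) returns the gradient of
    # the i-th example's error at x, and beta weights the momentum term.
    x = np.asarray(x0, float)
    d = np.zeros_like(x)
    k = 0
    for _ in range(epochs):
        for i in range(n_terms):
            k += 1
            d = beta * d - (mu / k) * grad_i(i, x)  # momentum update
            x = x + d
    return x
```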


Computational Optimization and Applications | 1998

Incremental Gradient Algorithms with Stepsizes Bounded Away from Zero

Mikhail V. Solodov

We consider the class of incremental gradient methods for minimizing a sum of continuously differentiable functions. An important novel feature of our analysis is that the stepsizes are kept bounded away from zero. We derive the first convergence results of any kind for this computationally important case. In particular, we show that a certain ε-approximate solution can be obtained and establish the linear dependence of ε on the stepsize limit. Incremental gradient methods are particularly well-suited for large neural network training problems where obtaining an approximate solution is typically sufficient and is often preferable to computing an exact solution. Thus, in the context of neural networks, the approach presented here is related to the principle of tolerant training. Our results justify numerous stepsize rules that were derived on the basis of extensive numerical experimentation but for which no theoretical analysis was previously available. In addition, convergence to (exact) stationary points is established when the gradient satisfies a certain growth property.
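
A sketch of the method analyzed, with a constant stepsize bounded away from zero; names and defaults are ours.

```python
import numpy as np

def incremental_gradient(grad_i, n_terms, x0, step=1e-2, epochs=200):
    # Cycle through the components f_i of f = sum_i f_i with a constant
    # stepsize; per the abstract this yields an eps-approximate solution
    # with eps depending linearly on `step`.
    x = np.asarray(x0, float)
    for _ in range(epochs):
        for i in range(n_terms):
            x = x - step * grad_i(i, x)
    return x
```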


Mathematical Programming | 2000

Error bounds for proximal point subproblems and associated inexact proximal point algorithms

Mikhail V. Solodov; B. F. Svaiter

We study various error measures for approximate solution of proximal point regularizations of the variational inequality problem, and of the closely related problem of finding a zero of a maximal monotone operator. A new merit function is proposed for proximal point subproblems associated with the latter. This merit function is based on Burachik-Iusem-Svaiter's concept of ε-enlargement of a maximal monotone operator. For variational inequalities, we establish a precise relationship between the regularized gap function, which is a natural error measure in this context, and our new merit function. Some error bounds are derived using both merit functions for the corresponding formulations of the proximal subproblem. We further use the regularized gap function to devise a new inexact proximal point algorithm for solving monotone variational inequalities. This inexact proximal point method preserves all the desirable global and local convergence properties of the classical exact/inexact method, while providing a constructive error tolerance criterion, suitable for further practical applications. The use of other tolerance rules is also discussed.
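
For reference, the regularized gap function admits a closed form once the inner maximization is solved by a projection; the Fukushima-style parametrization below is our assumption of the exact form used.

```python
import numpy as np

def regularized_gap(F, proj_C, x, c=1.0):
    # Regularized gap function for VI(F, C): nonnegative on C and zero
    # exactly at solutions. The inner maximum of
    # <F(x), x - y> - (1/(2c))||x - y||^2 over y in C is attained at
    # y = P_C(x - c*F(x)).
    y = proj_C(x - c * F(x))
    d = x - y
    return F(x) @ d - (d @ d) / (2.0 * c)
```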

Collaboration


Dive into Mikhail V. Solodov's collaborations.

Top Co-Authors

B. F. Svaiter
Instituto Nacional de Matemática Pura e Aplicada

Claudia A. Sagastizábal
Instituto Nacional de Matemática Pura e Aplicada

Damián R. Fernández
National University of Córdoba

Andreas Fischer
Dresden University of Technology

E. I. Uskov
Moscow State University

Juan Pablo Luna
Instituto Nacional de Matemática Pura e Aplicada

Pablo A. Lotito
National Scientific and Technical Research Council