Vyacheslav Kungurtsev
Czech Technical University in Prague
Publications
Featured research published by Vyacheslav Kungurtsev.
IEEE Transactions on Signal Processing | 2015
Amir Daneshmand; Francisco Facchinei; Vyacheslav Kungurtsev; Gesualdo Scutari
We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable), convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each iteration, a subset of (block) variables is updated at the same time by minimizing a convex surrogate of the original nonconvex function. To tackle huge-scale problems, the (block) variables to be updated are chosen according to a mixed random and deterministic procedure, which captures the advantages of both pure deterministic and random update-based schemes. Almost sure convergence of the proposed scheme is established. Numerical results show that on huge-scale problems the proposed hybrid random/deterministic algorithm compares favorably to random and deterministic schemes on both convex and nonconvex problems.
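A minimal sketch of the hybrid selection rule, assuming a LASSO-type instance F(x) = ½‖Ax − b‖² + λ‖x‖₁ with scalar blocks and a prox-linear surrogate; the greedy score and step size below are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hybrid_block_update(x, grad_f, lam, step, n_random, n_greedy, rng):
    """One iteration: update a mixed random/greedy subset of coordinates.

    grad_f : callable returning the gradient of the smooth part at x.
    The greedy set picks the coordinates farthest from their prox-linear
    block optima; the random set is sampled uniformly.
    """
    g = grad_f(x)
    # Prox-linear block optimum per coordinate (l1 is separable).
    x_hat = prox_l1(x - step * g, step * lam)
    scores = np.abs(x_hat - x)                  # greedy progress measure
    greedy = np.argsort(scores)[-n_greedy:]     # most promising blocks
    random = rng.choice(x.size, size=n_random, replace=False)
    S = np.union1d(greedy, random)              # hybrid working set
    x_new = x.copy()
    x_new[S] = x_hat[S]                         # update only selected blocks
    return x_new

# Toy usage on 0.5*||Ax - b||^2 + lam*||x||_1 (made-up data).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 200)), rng.standard_normal(40)
lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(200)
for _ in range(100):
    x = hybrid_block_update(x, lambda z: A.T @ (A @ z - b), lam, step, 10, 10, rng)
```

The greedy set targets the coordinates with the largest expected progress, while the random set keeps per-iteration selection cost low on huge-scale problems.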
Mathematical Programming | 2017
Philip E. Gill; Vyacheslav Kungurtsev; Daniel P. Robinson
Stabilized sequential quadratic programming (sSQP) methods for nonlinear optimization generate a sequence of iterates with fast local convergence regardless of whether or not the active-constraint gradients are linearly dependent. This paper concerns the local convergence analysis of an sSQP method that uses a line search with a primal-dual augmented Lagrangian merit function to enforce global convergence. The method is provably well-defined and is based on solving a strictly convex quadratic programming subproblem at each iteration. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The fast local convergence is obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. In the limit, the line search selects the unit step length, which implies that the method does not suffer from the Maratos effect. The analysis indicates that the method has the same strong first- and second-order global convergence properties that have been established for augmented Lagrangian methods, yet is able to transition seamlessly to sSQP with fast local convergence in the neighborhood of a solution. Numerical results on some degenerate problems are reported.
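For orientation, one standard form of the stabilized SQP subproblem (Wright's min-max form for an equality-constrained problem min f(x) subject to c(x) = 0, with Lagrangian L(x, y) = f(x) + c(x)ᵀy; the paper's primal-dual augmented Lagrangian variant differs in detail) is

```latex
\min_{d}\,\max_{y}\;\; \nabla f(x_k)^{\top} d
  + \tfrac{1}{2}\, d^{\top} H_k\, d
  + y^{\top}\!\bigl(c(x_k) + J(x_k)\, d\bigr)
  - \tfrac{\mu_k}{2}\,\lVert y - y_k \rVert^{2}.
```

The quadratic penalty on y − y_k regularizes the multiplier estimates, which is what yields fast local convergence without assuming linearly independent active-constraint gradients.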
Computational Optimization and Applications | 2014
Vyacheslav Kungurtsev; Moritz Diehl
Sequential quadratic programming (SQP) methods are known to be efficient for solving a series of related nonlinear optimization problems because of desirable hot and warm start properties—a solution for one problem is a good estimate of the solution of the next. However, standard SQP solvers contain elements to enforce global convergence that can interfere with the potential to take advantage of these theoretical local properties in full. We present two new predictor–corrector procedures for solving a nonlinear program given a sufficiently accurate estimate of the solution of a similar problem. The procedures attempt to trace a homotopy path between solutions of the two problems, staying within the local domain of convergence for the series of problems generated. We provide theoretical convergence and tracking results, as well as some numerical results demonstrating the robustness and performance of the methods.
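As a rough illustration of the warm-starting idea only (full re-solves with scipy stand in for the paper's predictor-corrector steps on the parametric KKT system; the problem data are made up), a homotopy between two instances can be traced as follows:

```python
import numpy as np
from scipy.optimize import minimize

# Two related problem instances: minimize ||A(t) x - b(t)||^2 plus a
# smooth nonconvex regularizer; A and b are interpolated along t.
rng = np.random.default_rng(1)
A0, A1 = rng.standard_normal((30, 10)), rng.standard_normal((30, 10))
b0, b1 = rng.standard_normal(30), rng.standard_normal(30)

def objective(x, t):
    A = (1 - t) * A0 + t * A1          # homotopy in the problem data
    b = (1 - t) * b0 + t * b1
    return 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.sum(np.log1p(x ** 2))

x = np.zeros(10)                        # solve the t = 0 problem cold
for t in np.linspace(0.0, 1.0, 11):     # then track the path to t = 1
    res = minimize(objective, x, args=(t,), method="BFGS")
    x = res.x                           # warm start the next subproblem
```

Staying inside the local domain of convergence at each homotopy step is what lets the actual method replace full solves with cheap corrector steps.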
Asilomar Conference on Signals, Systems and Computers | 2014
Amir Daneshmand; Francisco Facchinei; Vyacheslav Kungurtsev; Gesualdo Scutari
We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (separable), convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each iteration, a subset of (block) variables is updated at the same time by minimizing local convex approximations of the original nonconvex function. To tackle huge-scale problems, the (block) variables to be updated are chosen according to a mixed random and deterministic procedure, which captures the advantages of both pure deterministic and random update-based schemes. Almost sure convergence of the proposed scheme is established. Numerical results on huge-scale problems show that the proposed algorithm outperforms current schemes.
International Symposium on Information Theory | 2017
Malcolm Egan; Samir Medina Perlaza; Vyacheslav Kungurtsev
In this paper, a new framework based on the notion of capacity sensitivity is introduced to study the capacity of continuous memoryless point-to-point channels. The capacity sensitivity reflects how the capacity changes with small perturbations in any of the parameters describing the channel, even when the capacity is not available in closed-form. This includes perturbations of the cost constraints on the input distribution as well as on the channel distribution. The framework is based on continuity of the capacity, which is shown for a class of perturbations in the cost constraint and the channel distribution. The continuity then forms the foundation for obtaining bounds on the capacity sensitivity. As an illustration, the capacity sensitivity bound is applied to obtain scaling laws when the support of additive α-stable noise is truncated.
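Schematically, writing C(θ) for the capacity as a function of a cost or channel parameter θ, a sensitivity measure of the kind described above quantifies the effect of small perturbations; one plausible formalization (an assumption for illustration, not the paper's definition) is

```latex
S(\theta) \;=\; \limsup_{\lVert \delta \rVert \to 0}\;
  \frac{\bigl| C(\theta + \delta) - C(\theta) \bigr|}{\lVert \delta \rVert},
```

which is meaningful only where C is suitably continuous in θ; hence the role of the continuity results established in the paper.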
International Conference on Acoustics, Speech, and Signal Processing | 2017
Loris Cannelli; Francisco Facchinei; Vyacheslav Kungurtsev; Gesualdo Scutari
We propose a novel parallel asynchronous algorithmic framework for the minimization of the sum of a smooth (nonconvex) function and a convex (nonsmooth) regularizer. The framework hinges on Successive Convex Approximation (SCA) techniques and on a novel probabilistic model which describes, in a unified way, a variety of asynchronous settings more faithfully and comprehensively than state-of-the-art models. Key features of our framework are: i) it accommodates inconsistent read, meaning that components of the variables may be written by some cores while being simultaneously read by others; ii) it covers in a unified way several existing methods; and iii) it accommodates a variety of parallel computing architectures. Almost sure convergence to stationary solutions is proved for the general case, and iteration complexity analysis is given for a specific version of our model. Numerical results show that our scheme outperforms existing asynchronous ones.
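Schematically, writing x̂ᵏ for the (possibly inconsistently read) vector a core sees at iteration k, each selected block i is updated by minimizing a strongly convex surrogate; a prox-linear instance (one common SCA choice, not the only one the framework covers) is

```latex
x_i^{k+1} \;=\; \arg\min_{x_i}\;
  \nabla_i f(\hat{x}^{k})^{\top} \bigl(x_i - \hat{x}_i^{k}\bigr)
  + \frac{\tau}{2}\, \bigl\lVert x_i - \hat{x}_i^{k} \bigr\rVert^{2}
  + g_i(x_i),
```

where the gap between x̂ᵏ and the true shared iterate is exactly what the probabilistic model of asynchrony captures.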
Archive | 2017
Vyacheslav Kungurtsev; Wim Michiels; Moritz Diehl
We consider a problem of eigenvalue optimization, in particular finding a local minimizer of the spectral abscissa—the value of a parameter that results in the smallest value of the largest real part of the spectrum of a system. This is an important problem for the stabilization of control systems, but it is difficult to solve because the underlying objective function is typically nonconvex, nonsmooth, and non-Lipschitz. We present an expanded sequential linear and quadratic programming algorithm that solves a series of linear or quadratic subproblems formed by linearizing, with respect to the parameters, a set of right-most eigenvalues at each point as well as historical information at nearby points. We present a comparison of the performance of this algorithm with the state of the art in the field.
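Concretely, for a system matrix A(p) depending on parameters p, the spectral abscissa is α(A(p)) = maxᵢ Re λᵢ(A(p)), and stabilization amounts to minimizing it over p. A toy evaluation (a derivative-free scipy minimizer stands in for the paper's sequential linear/quadratic programming method, and A(p) = A₀ + p·B is a made-up one-parameter family):

```python
import numpy as np
from scipy.optimize import minimize

def spectral_abscissa(p, A0, B):
    """Largest real part of the spectrum of A(p) = A0 + p[0]*B."""
    A = A0 + p[0] * B
    return np.max(np.linalg.eigvals(A).real)

rng = np.random.default_rng(2)
A0 = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Nelder-Mead copes (only heuristically) with the nonsmoothness; the
# objective is nonconvex and non-Lipschitz in general, which is why
# specialized methods like the one above are needed.
res = minimize(spectral_abscissa, x0=np.zeros(1), args=(A0, B),
               method="Nelder-Mead")
print("parameter:", res.x, "abscissa:", res.fun)
```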
Asilomar Conference on Signals, Systems and Computers | 2016
Loris Cannelli; Gesualdo Scutari; Francisco Facchinei; Vyacheslav Kungurtsev
We propose a novel parallel asynchronous lock-free algorithmic framework for the minimization of the sum of a smooth nonconvex function and a convex nonsmooth regularizer. This class of problems arises in many big-data applications, including deep learning, matrix completion, and tensor factorization. Key features of the proposed algorithm are: i) it deals with nonconvex objective functions; ii) it is parallel and asynchronous; and iii) it is lock-free, meaning that components of the vector variables may be written by some cores while being simultaneously read by others. Almost sure convergence to stationary solutions is proved. The method enjoys properties that improve substantially on those of current schemes, and numerical results show that it outperforms existing asynchronous algorithms on both convex and nonconvex problems.
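A toy single-process analogue of the lock-free pattern (illustrative only; the actual target is multicore shared-memory architectures, and the problem below is a made-up LASSO instance): threads read the shared iterate without locking, compute a coordinate proximal-gradient step, and write their coordinate back.

```python
import threading
import numpy as np

# Shared iterate for min 0.5*||Ax - b||^2 + lam*||x||_1 (toy data).
rng = np.random.default_rng(3)
A, b = rng.standard_normal((50, 100)), rng.standard_normal(50)
lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(100)                      # read and written concurrently, no lock

def worker(seed, iters=200):
    r = np.random.default_rng(seed)
    for _ in range(iters):
        snapshot = x.copy()            # read may be stale/inconsistent
        i = r.integers(0, x.size)      # pick a random coordinate
        g = A[:, i] @ (A @ snapshot - b)
        v = snapshot[i] - step * g
        x[i] = np.sign(v) * max(abs(v) - step * lam, 0.0)  # lock-free write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```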
European Control Conference | 2014
Vyacheslav Kungurtsev; Attila Kozma; Moritz Diehl
Distributed multiple shooting is a modification of the multiple shooting approach for discretizing optimal control problems wherein the separate components of a large-scale system are discretized in addition to the shooting time intervals. In an SQP algorithm that solves the resulting discretized nonlinear program, the adjoint-based version of the algorithm additionally discards certain derivatives appearing in the resulting quadratic programs in order to achieve computational savings in sensitivity generation and in solving the QP. It was conjectured that adjoint-based distributed multiple shooting behaves like an inexact SQP method and converges linearly to the optimal solution, provided that the discarded derivatives are sufficiently uninfluential in the total dynamics of the system. This paper confirms this conjecture theoretically, by providing the appropriate convergence theory, and numerically, by analyzing the convergence properties of the algorithm as applied to a problem involving detection of the source of smoke within a set of rooms.
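Schematically (our notation, not the paper's), multiple shooting splits the horizon into intervals with node states matched by continuity constraints; distributed multiple shooting additionally splits the dynamics into subsystems i, so the discretized NLP reads

```latex
\min_{s,\,q}\; \sum_{k} \ell(s_k, q_k)
\quad\text{s.t.}\quad
s_{k+1}^{i} \;=\; \Phi_k^{i}\bigl(s_k^{i},\, q_k^{i},\, z_k^{-i}\bigr)
\;\;\text{for all } i,\, k,
```

where Φᵢₖ is the simulated flow of subsystem i over interval k and z_k^{-i} approximates the influence of the other subsystems. The adjoint-based variant discards some cross-subsystem derivatives ∂Φⁱ/∂z^{-i} from the QP data, which is precisely why it behaves as an inexact SQP method.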
IMA Journal of Numerical Analysis | 2017
Philip E. Gill; Vyacheslav Kungurtsev; Daniel P. Robinson