
Publication


Featured research published by Andreas Themelis.


Computational Optimization and Applications | 2017

Forward–backward quasi-Newton methods for nonsmooth optimization problems

Lorenzo Stella; Andreas Themelis; Panagiotis Patrinos

The forward–backward splitting method (FBS) for minimizing a nonsmooth composite function can be interpreted as a (variable-metric) gradient method applied to a continuously differentiable function which we call the forward–backward envelope (FBE). This makes it possible to extend algorithms for smooth unconstrained optimization and apply them to nonsmooth (possibly constrained) problems. Since the FBE can be computed simply by evaluating forward–backward steps, the resulting methods rely on a black-box oracle similar to that of FBS. We propose an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points. Moreover, when quasi-Newton directions are used, the proposed method achieves superlinear convergence provided that the usual second-order sufficiency conditions on the FBE hold at the limit point of the generated sequence. Such conditions translate into milder requirements on the original function involving generalized second-order differentiability. We show that BFGS fits our framework and that the limited-memory variant L-BFGS is well suited for large-scale problems, greatly outperforming FBS and its accelerated version in practice, as well as ADMM and other problem-specific solvers. The analysis of superlinear convergence is based on an extension of the Dennis–Moré theorem to the proposed algorithmic scheme.
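
The key computational point in the abstract is that the FBE is available at the cost of a single forward–backward step. The Python sketch below illustrates this under stated assumptions: the user supplies oracles `f`, `grad_f`, `g`, and `prox_g` (names are illustrative, not from the paper), f has an L-Lipschitz gradient, and the stepsize satisfies gamma < 1/L.

```python
import numpy as np

def fb_step(x, grad_f, prox_g, gamma):
    """One forward-backward step: T(x) = prox_{gamma*g}(x - gamma*grad_f(x))."""
    return prox_g(x - gamma * grad_f(x), gamma)

def fbe(x, f, grad_f, g, prox_g, gamma):
    """Forward-backward envelope, evaluated via a single FB step:
    FBE(x) = f(x) + <grad_f(x), T(x)-x> + ||T(x)-x||^2/(2*gamma) + g(T(x))."""
    z = fb_step(x, grad_f, prox_g, gamma)
    d = z - x
    return f(x) + grad_f(x) @ d + d @ d / (2 * gamma) + g(z)

# Illustrative use on a small lasso problem: f = 0.5*||Ax-b||^2, g = lam*||x||_1.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
lam = 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
g = lambda x: lam * np.sum(np.abs(x))
prox_g = lambda v, gamma: np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)
gamma = 0.95 / np.linalg.norm(A, 2) ** 2   # gamma < 1/L with L = ||A||_2^2
print(fbe(np.zeros(2), f, grad_f, g, prox_g, gamma))
```

A quasi-Newton method such as L-BFGS can then be driven by these same oracles, minimizing the smooth FBE in place of the original nonsmooth objective.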


European Control Conference | 2016

Stochastic gradient methods for stochastic model predictive control

Andreas Themelis; Silvia Villa; Panagiotis Patrinos; Alberto Bemporad

We introduce a new stochastic gradient algorithm, SAAGA, and investigate its use for solving stochastic MPC problems and multi-stage stochastic optimization programs in general. The method is particularly attractive for scenario-based formulations involving a large number of scenarios, for which "batch" formulations may become inefficient due to high computational costs. Benefits of the method include cheap per-iteration computations and fast convergence due to the sparsity of the proposed problem decomposition.
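
The abstract does not spell out SAAGA's update rule, so the sketch below is not the paper's method; it is a generic SAGA-style incremental gradient loop (a standard variance-reduced scheme) over per-scenario gradient oracles, shown only to illustrate why the per-iteration cost is that of a single scenario rather than the full "batch". All names are illustrative.

```python
import numpy as np

def saga_over_scenarios(x0, scenario_grads, step, iters, seed=0):
    """Generic SAGA-style loop: each iteration touches one scenario gradient.

    scenario_grads: list of callables, one gradient oracle per scenario.
    (Illustrative sketch; not the SAAGA algorithm from the paper.)
    """
    rng = np.random.default_rng(seed)
    n = len(scenario_grads)
    x = x0.copy()
    table = [gi(x) for gi in scenario_grads]  # stored past gradients, one per scenario
    avg = sum(table) / n                      # running average of the table
    for _ in range(iters):
        j = rng.integers(n)                   # sample one scenario uniformly
        gj = scenario_grads[j](x)             # the only new gradient this iteration
        x = x - step * (gj - table[j] + avg)  # variance-reduced direction
        avg += (gj - table[j]) / n            # keep the average consistent
        table[j] = gj
    return x
```

The per-iteration cost is one scenario-gradient evaluation plus vector arithmetic in the decision dimension, independent of the number of scenarios, which matches the "cheap computations per iteration" claim above.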


European Signal Processing Conference | 2017

A primal-dual line search method and applications in image processing

Pantelis Sopasakis; Andreas Themelis; Johan A. K. Suykens; Panagiotis Patrinos

Operator splitting algorithms enjoy wide acceptance in signal processing for their ability to solve generic convex optimization problems by exploiting problem structure, leading to efficient implementations. These algorithms are instances of the Krasnoselskii–Mann scheme for finding fixed points of averaged operators. Despite their popularity, however, operator splitting algorithms are sensitive to ill conditioning and often converge slowly. In this paper we propose a line-search primal-dual method to accelerate and robustify the Chambolle–Pock algorithm based on SuperMann, a recent extension of the Krasnoselskii–Mann algorithmic scheme. We discuss the convergence properties of this new algorithm and showcase its strengths on the problem of image denoising with anisotropic total variation regularization.
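
For context, here is a minimal Python sketch of the baseline being accelerated: the plain fixed-step Chambolle–Pock iteration applied to anisotropic TV denoising, min_x 0.5*||x - b||^2 + lam*||Dx||_1, with D the discrete image gradient. This is not the proposed SuperMann-based line-search variant; the function names and stepsize choice are illustrative.

```python
import numpy as np

def grad2d(u):
    """Forward-difference image gradient D (Neumann boundary conditions)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div2d(px, py):
    """Discrete divergence, the negative adjoint of grad2d (-D^T)."""
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]; d[:, 1:-1] += px[:, 1:-1] - px[:, :-2]; d[:, -1] -= px[:, -2]
    d[0, :] += py[0, :]; d[1:-1, :] += py[1:-1, :] - py[:-2, :]; d[-1, :] -= py[-2, :]
    return d

def chambolle_pock_tv(b, lam, iters=200):
    """Plain (fixed-step) Chambolle-Pock for min_x 0.5*||x-b||^2 + lam*||Dx||_1."""
    tau = sigma = 1.0 / np.sqrt(8.0)   # tau*sigma*||D||^2 <= 1, since ||D||^2 < 8
    x = b.copy(); x_bar = b.copy()
    px = np.zeros_like(b); py = np.zeros_like(b)
    for _ in range(iters):
        gx, gy = grad2d(x_bar)
        # Dual step: prox of (lam*||.||_1)^* is projection onto the [-lam, lam] box.
        px = np.clip(px + sigma * gx, -lam, lam)
        py = np.clip(py + sigma * gy, -lam, lam)
        # Primal step: prox of tau*0.5*||.-b||^2 applied at x - tau*D^T y.
        x_old = x
        x = (x + tau * div2d(px, py) + tau * b) / (1 + tau)
        x_bar = 2 * x - x_old              # extrapolation (theta = 1)
    return x
```

The paper's contribution is to replace this fixed iteration with a SuperMann-style line search on the fixed-point residual, with the aim of accelerating convergence and reducing sensitivity to ill conditioning.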


SIAM Journal on Optimization | 2018

Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone line-search algorithms

Andreas Themelis; Lorenzo Stella; Panagiotis Patrinos


arXiv: Optimization and Control | 2017

Douglas-Rachford splitting and ADMM for nonconvex optimization: tight convergence results

Andreas Themelis; Panagiotis Patrinos


arXiv: Optimization and Control | 2016

SuperMann: a superlinearly convergent algorithm for finding fixed points of nonexpansive operators

Andreas Themelis; Panagiotis Patrinos


Conference on Decision and Control | 2017

A simple and efficient algorithm for nonlinear model predictive control

Lorenzo Stella; Andreas Themelis; Pantelis Sopasakis; Panagiotis Patrinos


IEEE Transactions on Automatic Control | 2018

Newton-type alternating minimization algorithm for convex optimization

Lorenzo Stella; Andreas Themelis; Panagiotis Patrinos


Archive | 2017

Douglas-Rachford splitting and ADMM for nonconvex optimization: new convergence results and accelerated versions

Andreas Themelis; Lorenzo Stella; Panagiotis Patrinos


Archive | 2016

A forward-backward quasi-Newton algorithm for minimizing the sum of two nonconvex functions

Panos Patrinos; Andreas Themelis; Lorenzo Stella

Collaboration


Dive into Andreas Themelis's collaborations.

Top Co-Authors

Panagiotis Patrinos
IMT Institute for Advanced Studies Lucca

Lorenzo Stella
Katholieke Universiteit Leuven

Alberto Bemporad
IMT Institute for Advanced Studies Lucca

Pantelis Sopasakis
IMT Institute for Advanced Studies Lucca

Johan A. K. Suykens
Katholieke Universiteit Leuven