Network


Latest external collaborations at the country level. Click on a dot to explore the details.

Hotspot


Dive into the research topics where Simone Sagratella is active.

Publication


Featured research published by Simone Sagratella.


IEEE Transactions on Signal Processing | 2015

Parallel Selective Algorithms for Nonconvex Big Data Optimization

Francisco Facchinei; Gesualdo Scutari; Simone Sagratella

We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities “in between” with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.
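The block-selective scheme described in the abstract can be illustrated with a minimal NumPy sketch (an illustrative toy on a LASSO instance, not the authors' implementation; all function and parameter names here are invented): a proximal-gradient candidate is computed for all coordinates in parallel, Jacobi style, and only the subset with the largest prospective change is actually updated, mimicking the "only a subset of variables updated at each iteration" idea.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def jacobi_selective_lasso(A, b, lam, frac=0.5, iters=200):
    """Jacobi-style proximal-gradient candidate computed for all coordinates
    in parallel; only the fraction `frac` with the largest prospective change
    is updated each sweep (illustrative sketch with hypothetical defaults)."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    k = max(1, int(frac * n))              # size of the selected block
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # full gradient (embarrassingly parallel)
        cand = soft_threshold(x - g / L, lam / L)
        idx = np.argsort(-np.abs(cand - x))[:k]   # greedy selection rule
        x[idx] = cand[idx]                 # update only the selected coordinates
    return x
```

With step size 1/L the separable upper-bound model guarantees that each partial update does not increase the objective, which is the intuition behind the convergence of such selective schemes.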


arXiv: Distributed, Parallel, and Cluster Computing | 2014

Parallel Selective Algorithms for Big Data Optimization

Francisco Facchinei; Gesualdo Scutari; Simone Sagratella

We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (i.e., sequential) ones, as well as virtually all possibilities “in between” with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms.


Siam Journal on Optimization | 2011

On the solution of the KKT conditions of generalized Nash equilibrium problems

Axel Dreves; Francisco Facchinei; Christian Kanzow; Simone Sagratella

We consider the solution of generalized Nash equilibrium problems by concatenating the KKT optimality conditions of each player’s optimization problem into a single KKT-like system. We then propose two approaches for solving this KKT system. The first approach is rather simple and uses a merit-function/equation-based technique for the solution of the KKT system. The second approach, partially motivated by the shortcomings of the first one, is an interior-point-based method. We show that this second approach has strong theoretical properties and, in particular, that it is possible to establish global convergence under sensible conditions, this probably being the first result of its kind in the literature. We discuss the results of an extensive numerical testing on four KKT-based solution algorithms, showing that the new interior-point method is efficient and very robust.
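To fix ideas, here is a sketch of the concatenation recipe on a hypothetical toy GNEP (this illustrates the KKT-concatenation idea with an off-the-shelf solver, not the merit-function or interior-point algorithms of the paper): each player's KKT conditions are stacked into one square system, with complementarity encoded via the Fischer–Burmeister function, and the system is handed to a standard nonlinear-equation solver.

```python
import numpy as np
from scipy.optimize import root

def fb(a, b):
    # Fischer-Burmeister function: fb(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0.
    return np.sqrt(a * a + b * b) - a - b

def kkt(z):
    """Concatenated KKT system of a toy GNEP (invented for illustration):
    player 1: min (x1 - 1)^2  s.t.  x1 + x2 <= 1   (feasible set depends on x2)
    player 2: min (x2 - 0.5)^2  (unconstrained)."""
    x1, x2, mu = z
    return np.array([
        2.0 * (x1 - 1.0) + mu,        # stationarity of player 1
        2.0 * (x2 - 0.5),             # stationarity of player 2
        fb(mu, 1.0 - x1 - x2),        # multiplier/constraint complementarity
    ])

sol = root(kkt, x0=[0.0, 0.0, 1.0], method="lm")
# this toy game has the unique equilibrium x = (0.5, 0.5) with multiplier mu = 1
```

The same pattern scales to any number of players: one stationarity block per player plus one complementarity equation per constraint, all solved simultaneously.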


Mathematical Programming | 2014

Solving quasi-variational inequalities via their KKT conditions

Francisco Facchinei; Christian Kanzow; Simone Sagratella

We propose to solve a general quasi-variational inequality by using its Karush–Kuhn–Tucker conditions. To this end we use a globally convergent algorithm based on a potential reduction approach. We establish global convergence results for many interesting instances of quasi-variational inequalities, vastly broadening the class of problems that can be solved with theoretical guarantees. Our numerical testings are very promising and show the practical viability of the approach.
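The following is not the potential-reduction method of the paper, but the textbook projection fixed-point gives a feel for what a quasi-variational inequality is; this is a sketch on a hypothetical one-dimensional moving-set instance, with all names invented for illustration.

```python
import numpy as np

def solve_moving_set_qvi(F, proj_K, x0, gamma=0.5, iters=100):
    """Fixed-point iteration x <- Proj_{K(x)}(x - gamma * F(x)) for a QVI
    whose feasible set K(x) moves with x (illustrative sketch only)."""
    x = x0
    for _ in range(iters):
        x = proj_K(x - gamma * F(x), x)   # project onto the set at the current x
    return x

# Toy instance: F(x) = x - 3 and K(x) = [0, 1 + x/2], so the set depends on x.
F = lambda x: x - 3.0
proj_K = lambda y, x: np.clip(y, 0.0, 1.0 + 0.5 * x)
x_star = solve_moving_set_qvi(F, proj_K, x0=0.0)
# converges to x* = 2, where the moving upper bound 1 + x/2 is active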


International Conference on Acoustics, Speech, and Signal Processing | 2014

Flexible parallel algorithms for big data optimization

Francisco Facchinei; Simone Sagratella; Gesualdo Scutari

We propose a decomposition framework for the parallel optimization of the sum of a differentiable function and a (block) separable nonsmooth, convex one. The latter term is typically used to enforce structure in the solution as, for example, in LASSO problems. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss-Seidel (Southwell-type) ones, as well as virtually all possibilities in between (e.g., gradient- or Newton-type methods) with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results show that the new method compares favorably to existing algorithms.


Optimization Letters | 2011

On the computation of all solutions of jointly convex generalized Nash equilibrium problems

Francisco Facchinei; Simone Sagratella

Jointly convex generalized Nash equilibrium problems are the most studied class of generalized Nash equilibrium problems. For this class of problems it is now clear that a special solution, called variational or normalized equilibrium, can be computed by solving a variational inequality. However, the computation of non-variational equilibria is more complex and less understood and only very few methods have been proposed so far. In this note we consider a new approach for the computation of non-variational solutions of jointly convex problems and compare our approach to previous proposals.
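The variational (normalized) equilibrium mentioned above can be computed by a projection method applied to the associated variational inequality. A minimal sketch on a hypothetical jointly convex toy game (invented for illustration, not an example from the paper):

```python
import numpy as np

def proj_halfspace(y):
    # Euclidean projection onto the shared set {x : x1 + x2 <= 1}.
    s = y[0] + y[1] - 1.0
    return y if s <= 0 else y - s / 2.0

def variational_equilibrium(x0, gamma=0.25, iters=200):
    """Projection method for the VI whose solution is the variational
    equilibrium of a toy jointly convex GNEP: both players minimize
    (x_i - 1)^2 subject to the shared constraint x1 + x2 <= 1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        F = 2.0 * (x - 1.0)             # pseudo-gradient of the game
        x = proj_halfspace(x - gamma * F)
    return x

x_star = variational_equilibrium([0.0, 0.0])
# converges to the variational equilibrium (0.5, 0.5)
```

In this toy game the non-variational equilibria fill the whole segment x1 + x2 = 1 with x_i <= 1; the projection method recovers only the normalized one, which is exactly the gap between variational and non-variational solutions that the paper addresses.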


Journal of Global Optimization | 2016

A canonical duality approach for the solution of affine quasi-variational inequalities

Vittorio Latorre; Simone Sagratella

We propose a new formulation of the Karush–Kuhn–Tucker conditions of a particular class of quasi-variational inequalities. In order to reformulate the problem we use the Fischer–Burmeister complementarity function and canonical duality theory. We establish conditions under which a critical point of the new formulation is a solution of the original quasi-variational inequality, showing the potential of this approach for solving this class of problems. We test the theoretical results with a simple heuristic, demonstrated on several problems from academia and various engineering applications.


Computational Optimization and Applications | 2015

The semismooth Newton method for the solution of quasi-variational inequalities

Francisco Facchinei; Christian Kanzow; Sebastian Karl; Simone Sagratella

We consider the application of the globalized semismooth Newton method to the solution of (the KKT conditions of) quasi-variational inequalities. We show that the method is globally and locally superlinearly convergent for some important classes of quasi-variational inequality problems. We report numerical results to illustrate the practical behavior of the method.
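The semismooth Newton machinery can be shown on the simplest complementarity instance (a linear complementarity problem rather than the QVI setting of the paper; an illustrative sketch with invented names): the Fischer–Burmeister residual is driven to zero with Newton steps built from an element of the generalized Jacobian.

```python
import numpy as np

def semismooth_newton_ncp(M, q, x0, iters=30):
    """Semismooth Newton on the Fischer-Burmeister reformulation of the
    complementarity problem: x >= 0, Mx + q >= 0, x'(Mx + q) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx = M @ x + q
        r = np.sqrt(x ** 2 + Fx ** 2)
        phi = r - x - Fx                   # FB residual, zero exactly at a solution
        if np.linalg.norm(phi) < 1e-12:
            break
        r = np.where(r == 0, 1.0, r)       # pick an element at nonsmooth points
        Da = x / r - 1.0                   # partial of phi_i w.r.t. x_i
        Db = Fx / r - 1.0                  # partial of phi_i w.r.t. (Mx+q)_i
        J = np.diag(Da) + Db[:, None] * M  # an element of the generalized Jacobian
        x = x + np.linalg.solve(J, -phi)   # semismooth Newton step
    return x
```

Near a solution where the residual is smooth the iteration inherits the fast local convergence that the paper establishes (superlinearly, under suitable regularity) for much broader QVI classes.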


Cybernetics and Information Technologies | 2012

Combining Local and Global Direct Derivative-Free Optimization for Reinforcement Learning

Matteo Leonetti; Petar Kormushev; Simone Sagratella

We consider the problem of optimization in policy space for reinforcement learning. While a plethora of methods have been applied to this problem, only a narrow category of them proved feasible in robotics. We consider the peculiar characteristics of reinforcement learning in robotics, and devise a combination of two algorithms from the literature of derivative-free optimization. The proposed combination is well suited for robotics, as it involves both off-line learning in simulation and on-line learning in the real environment. We demonstrate our approach on a real-world task, where an Autonomous Underwater Vehicle has to survey a target area under potentially unknown environment conditions. We start from a given controller, which can perform the task under foreseeable conditions, and make it adaptive to the actual environment.
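A hypothetical two-stage sketch of the global-plus-local idea (invented names and a toy objective, not the algorithms of the paper): a global random search plays the role of off-line learning in simulation, and a local derivative-free pattern search plays the role of on-line refinement.

```python
import numpy as np

def pattern_search(f, x, step=0.5, tol=1e-6):
    # Local derivative-free refinement: coordinate-wise pattern search.
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                    # shrink the stencil when stuck
    return x, fx

def global_then_local(f, bounds, n_samples=200, seed=0):
    """Global phase: random sampling inside the box `bounds`;
    local phase: pattern-search refinement of the best sample."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    xs = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    best = min(xs, key=f)                  # best point found by the global phase
    return pattern_search(f, best)
```

The split mirrors the robotics constraint in the abstract: the expensive, exploratory phase runs where evaluations are cheap (simulation), and only the conservative local search touches the real system.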


Mathematical Methods of Operations Research | 2017

Sufficient conditions to compute any solution of a quasivariational inequality via a variational inequality

Didier Aussel; Simone Sagratella

We define the concept of reproducible map and show that, whenever the constraint map defining the quasivariational inequality (QVI) is reproducible then one can characterize the whole solution set of the QVI as a union of solution sets of some variational inequalities (VI). By exploiting this property, we give sufficient conditions to compute any solution of a generalized Nash equilibrium problem (GNEP) by solving a suitable VI. Finally, we define the class of pseudo-Nash equilibrium problems, which are (not necessarily convex) GNEPs whose solutions can be computed by solving suitable Nash equilibrium problems.

Collaboration


Dive into Simone Sagratella's collaborations.

Top Co-Authors

Daniela Iacoviello (Sapienza University of Rome)
Francisco Facchinei (Rensselaer Polytechnic Institute)
Silvia Canale (Sapienza University of Rome)
Alberto De Santis (Sapienza University of Rome)
Fiora Pirri (Sapienza University of Rome)
Laura Palagi (Sapienza University of Rome)
Vittorio Latorre (Sapienza University of Rome)
A. De Santis (Sapienza University of Rome)