Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marco Sciandrone is active.

Publication


Featured research published by Marco Sciandrone.


Operations Research Letters | 2000

On the convergence of the block nonlinear Gauss-Seidel method under convex constraints

Luigi Grippo; Marco Sciandrone

We give new convergence results for the block Gauss-Seidel method for problems where the feasible set is the Cartesian product of m closed convex sets, under the assumption that the sequence generated by the method has limit points. We show that the method is globally convergent for m=2 and that for m>2 convergence can be established both when the objective function f is componentwise strictly quasiconvex with respect to m-2 components and when f is pseudoconvex. Finally, we consider a proximal point modification of the method and we state convergence results without any convexity assumption on the objective function.
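
To make the two-block (m = 2) case concrete, here is a minimal sketch of the nonlinear Gauss-Seidel iteration on a toy box-constrained quadratic, alternating numerical minimizations over the two blocks. The toy objective, the box bounds, and the use of scipy's L-BFGS-B as the inner solver are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of the two-block (m = 2) nonlinear Gauss-Seidel scheme on a
# toy convexly constrained quadratic; the problem data are illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

def f(x, y):
    r = A @ np.concatenate([x, y]) - b
    return 0.5 * r @ r

# Feasible set: Cartesian product of two boxes (closed convex sets).
bounds_x = [(-1.0, 1.0)] * 5
bounds_y = [(-1.0, 1.0)] * 5

x, y = np.zeros(5), np.zeros(5)
for it in range(100):
    # Block 1: minimize over x with y fixed.
    x = minimize(lambda v: f(v, y), x, bounds=bounds_x, method="L-BFGS-B").x
    # Block 2: minimize over y with the updated x fixed.
    y_new = minimize(lambda v: f(x, v), y, bounds=bounds_y, method="L-BFGS-B").x
    if np.linalg.norm(y_new - y) < 1e-8:
        y = y_new
        break
    y = y_new

print("f(x, y) =", f(x, y))
```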


IEEE Transactions on Biomedical Engineering | 2010

Real-Time Epileptic Seizure Prediction Using AR Models and Support Vector Machines

Luigi Chisci; Antonio Mavino; Guido Perferi; Marco Sciandrone; Carmelo Anile; Gabriella Colicchio; Filomena Fuggetta

This paper addresses the prediction of epileptic seizures from the online analysis of EEG data. This problem is of paramount importance for the realization of monitoring/control units to be implanted on drug-resistant epileptic patients. The proposed solution relies in a novel way on autoregressive modeling of the EEG time series and combines a least-squares parameter estimator for EEG feature extraction along with a support vector machine (SVM) for binary classification between preictal/ictal and interictal states. This choice is characterized by low computational requirements compatible with a real-time implementation of the overall system. Moreover, experimental results on the Freiburg dataset exhibited correct prediction of all seizures (100 % sensitivity) and, due to a novel regularization of the SVM classifier based on the Kalman filter, also a low false alarm rate.
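
A schematic sketch of the AR-plus-SVM pipeline is given below: a least-squares estimator extracts AR coefficients from each signal window, and a support vector machine classifies the resulting feature vectors. The synthetic two-regime signals, the AR order, and the scikit-learn classifier settings are placeholder assumptions; the Kalman-filter-based regularization of the classifier described in the paper is not reproduced.

```python
# Schematic sketch of the AR-feature + SVM pipeline on synthetic signals.
import numpy as np
from sklearn.svm import SVC

def ar_features(window, order=6):
    """Least-squares estimate of AR coefficients for one signal window."""
    X = np.column_stack([window[order - k - 1: len(window) - k - 1]
                         for k in range(order)])
    y = window[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)

def synth_window(label, n=256):
    # Two synthetic AR(2) regimes standing in for interictal / preictal EEG.
    a = (1.6, -0.8) if label == 0 else (0.4, -0.3)
    w = np.zeros(n)
    for t in range(2, n):
        w[t] = a[0] * w[t - 1] + a[1] * w[t - 2] + rng.standard_normal()
    return w

labels = rng.integers(0, 2, size=200)
features = np.array([ar_features(synth_window(l)) for l in labels])

clf = SVC(kernel="rbf", C=1.0).fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```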


Computational Optimization and Applications | 2002

Nonmonotone Globalization Techniques for the Barzilai-Borwein Gradient Method

Luigi Grippo; Marco Sciandrone

In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules and we prove the global convergence of these schemes. Then we perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large dimensional unconstrained optimization problems.
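
The following compact sketch illustrates the idea of relaxing monotonicity: a Barzilai-Borwein (BB1) stepsize is combined with a nonmonotone Armijo test that measures sufficient decrease against the maximum of the most recent function values. The watchdog component and the precise acceptance rules of the paper are not reproduced, and the toy quadratic and parameter values are assumptions.

```python
# Sketch of a Barzilai-Borwein step safeguarded by a nonmonotone Armijo test
# (sufficient decrease w.r.t. the max of the last `memory` function values).
import numpy as np

def bb_nonmonotone(f, grad, x0, memory=10, gamma=1e-4, max_iter=500, tol=1e-8):
    x = x0.copy()
    g = grad(x)
    alpha = 1.0
    history = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -alpha * g                      # BB-scaled steepest-descent step
        f_ref = max(history[-memory:])      # nonmonotone reference value
        t = 1.0
        while f(x + t * d) > f_ref + gamma * t * (g @ d):
            t *= 0.5                        # backtrack only when needed
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 1e-12 else 1.0   # BB1 stepsize
        x, g = x_new, g_new
        history.append(f(x))
    return x

# Toy strictly convex quadratic.
Q = np.diag(np.arange(1.0, 21.0))
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x_star = bb_nonmonotone(f, grad, np.ones(20))
print("solution norm:", np.linalg.norm(x_star))
```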


SIAM Journal on Optimization | 2002

On the Global Convergence of Derivative-Free Methods for Unconstrained Optimization

Stefano Lucidi; Marco Sciandrone

In this paper, starting from the study of the common elements that some globally convergent direct search methods share, a general convergence theory is established for unconstrained minimization methods employing only function values. The introduced convergence conditions are useful for developing and analyzing new derivative-free algorithms with guaranteed global convergence. As examples, we describe three new algorithms which combine pattern and line search approaches.
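
As a rough illustration of how pattern and line search approaches can be combined, the sketch below samples the 2n signed coordinate directions, requires a sufficient decrease of the form f(x + αd) ≤ f(x) − γα², and extrapolates along a successful direction before shrinking the stepsize. The specific acceptance rules and parameters are simplifications, not the algorithms of the paper.

```python
# Derivative-free sketch: coordinate sampling with a sufficient-decrease test
# and an extrapolating (doubling) linesearch along successful directions.
import numpy as np

def coordinate_linesearch(f, x0, alpha0=1.0, gamma=1e-6, theta=0.5,
                          tol=1e-8, max_iter=2000):
    x, alpha = x0.astype(float).copy(), alpha0
    n = x.size
    for _ in range(max_iter):
        if alpha < tol:
            break
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            if f(x + alpha * d) <= f(x) - gamma * alpha ** 2:
                # Expand the step while it keeps giving sufficient decrease.
                step = alpha
                while f(x + 2 * step * d) <= f(x) - gamma * (2 * step) ** 2:
                    step *= 2
                x = x + step * d
                improved = True
                break
        if not improved:
            alpha *= theta     # no direction gave sufficient decrease: shrink
    return x

rosenbrock = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
print(coordinate_linesearch(rosenbrock, np.array([-1.2, 1.0])))
```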


Optimization Methods & Software | 1999

Globally convergent block-coordinate techniques for unconstrained optimization

Luigi Grippo; Marco Sciandrone

In this paper we define new classes of globally convergent block-coordinate techniques for the unconstrained minimization of a continuously differentiable function. More specifically, we first describe conceptual models of decomposition algorithms based on the interconnection of elementary operations performed on the block components of the variable vector. Then we characterize the elementary operations, defined through a suitable line search or a global minimization in a component subspace. Using these models, we establish new results on the convergence of the nonlinear Gauss–Seidel method and we prove that this method with a two-block decomposition is globally convergent towards stationary points, even in the absence of convexity or uniqueness assumptions. In the general case of a nonconvex objective function and an arbitrary decomposition, we define new globally convergent line-search-based schemes that may also include partial global minimizations with respect to some components. Computational aspects are discussed.
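
A sketch of one of the line-search-based conceptual models follows: the algorithm cycles over the blocks and performs an Armijo-type backtracking step along the partial (block) steepest-descent direction. The two-block decomposition, the least-squares toy objective, and the step rule are illustrative assumptions rather than the schemes analyzed in the paper.

```python
# Line-search-based block-coordinate sketch: Armijo backtracking along the
# block steepest-descent direction, cycling over the blocks.
import numpy as np

def block_coordinate_descent(f, grad, x0, blocks, max_cycles=200,
                             gamma=1e-4, tol=1e-6):
    x = x0.copy()
    for _ in range(max_cycles):
        if np.linalg.norm(grad(x)) < tol:
            break
        for idx in blocks:
            g_block = grad(x)[idx]
            d = np.zeros_like(x)
            d[idx] = -g_block                 # partial steepest-descent direction
            t = 1.0
            while f(x + t * d) > f(x) + gamma * t * (grad(x) @ d):
                t *= 0.5
            x = x + t * d
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

blocks = [np.arange(0, 5), np.arange(5, 10)]   # two-block decomposition
x = block_coordinate_descent(f, grad, np.zeros(10), blocks)
print("objective:", f(x))
```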


Mathematical Programming | 2002

Objective-derivative-free methods for constrained optimization

Stefano Lucidi; Marco Sciandrone; Paul Tseng

We propose feasible descent methods for constrained minimization that do not make explicit use of the derivative of the objective function. The methods iteratively sample the objective function value along a finite set of feasible search arcs and decrease the sampling stepsize if an improved objective function value is not sampled. The search arcs are obtained by projecting search direction rays onto the feasible set, and the search directions are chosen such that a subset approximately generates the cone of first-order feasible variations at the current iterate. We show that these methods have desirable convergence properties under certain regularity assumptions on the constraints. In the case of linear constraints, the projections are redundant and the regularity assumptions hold automatically. Numerical experience with the methods in the linearly constrained case is reported.
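
The sketch below illustrates the sampling mechanism in the linearly constrained case, where projections are not needed: feasible points are sampled along signed coordinate directions, and the sampling stepsize is reduced whenever no sampled point yields sufficient decrease. The cone-generating direction sets and search arcs of the paper are not reproduced; the toy polyhedron and tolerances are assumptions.

```python
# Sampling sketch for linearly constrained problems: try feasible points along
# signed coordinate directions, shrink the stepsize when nothing improves.
import numpy as np

def df_linearly_constrained(f, A, b, x0, alpha0=1.0, gamma=1e-6,
                            theta=0.5, tol=1e-8, max_iter=5000):
    x, alpha = x0.astype(float).copy(), alpha0
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if alpha < tol:
            break
        improved = False
        for d in directions:
            trial = x + alpha * d
            feasible = np.all(A @ trial <= b + 1e-12)
            if feasible and f(trial) <= f(x) - gamma * alpha ** 2:
                x, improved = trial, True
                break
        if not improved:
            alpha *= theta
    return x

# Toy problem: quadratic over the polyhedron {x >= 0, sum(x) <= 1}.
n = 3
A = np.vstack([-np.eye(n), np.ones((1, n))])
b = np.concatenate([np.zeros(n), [1.0]])
f = lambda x: np.sum((x - np.array([0.2, 0.9, 0.1])) ** 2)
print(df_linearly_constrained(f, A, b, np.zeros(n)))
```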


Computational Optimization and Applications | 2002

A Derivative-Free Algorithm for Bound Constrained Optimization

Stefano Lucidi; Marco Sciandrone

In this work, we propose a new globally convergent derivative-free algorithm for the minimization of a continuously differentiable function in the case where some (or all) of the variables are bounded. This algorithm investigates the local behaviour of the objective function on the feasible set by sampling it along the coordinate directions. Whenever a “suitable” descent feasible coordinate direction is detected, a new point is produced by performing a linesearch along this direction. The information progressively obtained during the iterations of the algorithm can be used to build an approximation model of the objective function. The minimum of such a model is accepted if it produces an improvement of the objective function value. We also derive a bound for the limit accuracy of the algorithm in the minimization of noisy functions. Finally, we report the results of preliminary numerical experiments.
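
Below is a rough sketch of the bound-constrained coordinate-search idea, including a very simplified version of the model-based acceleration: steps along ±e_i are truncated at the bounds, and the minimizer of a one-dimensional quadratic interpolation model is accepted only when it improves the objective further. The linesearch, the model construction, and all parameter values are simplified assumptions relative to the paper.

```python
# Bound-constrained coordinate search with a crude quadratic-model acceleration.
import numpy as np

def df_box(f, x0, lb, ub, alpha0=0.5, gamma=1e-6, theta=0.5, tol=1e-8,
           max_iter=5000):
    x, alpha = np.clip(x0.astype(float), lb, ub), alpha0
    n = x.size
    for _ in range(max_iter):
        if alpha < tol:
            break
        improved = False
        for i in range(n):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(x[i] + sign * alpha, lb[i], ub[i])
                if f(trial) <= f(x) - gamma * alpha ** 2:
                    # 1-D quadratic model through three samples along e_i;
                    # keep its (clipped) minimizer only if it improves further.
                    t = np.array([0.0, sign * alpha, 2 * sign * alpha])
                    pts = [x.copy() for _ in t]
                    for p, ti in zip(pts, t):
                        p[i] = np.clip(x[i] + ti, lb[i], ub[i])
                    vals = [f(p) for p in pts]
                    c2, c1, c0 = np.polyfit(t, vals, 2)
                    if c2 > 0:
                        cand = x.copy()
                        cand[i] = np.clip(x[i] - c1 / (2 * c2), lb[i], ub[i])
                        if f(cand) < f(trial):
                            trial = cand
                    x, improved = trial, True
                    break
            if improved:
                break
        if not improved:
            alpha *= theta
    return x

f = lambda z: (z[0] - 2.0) ** 2 + (z[1] + 1.0) ** 2
lb, ub = np.zeros(2), np.ones(2)
print(df_box(f, np.array([0.5, 0.5]), lb, ub))
```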


Computational Optimization and Applications | 2011

Decomposition algorithms for generalized potential games

Francisco Facchinei; Veronica Piccialli; Marco Sciandrone

We analyze some new decomposition schemes for the solution of generalized Nash equilibrium problems. We prove convergence for a particular class of generalized potential games that includes some interesting engineering problems. We show that some versions of our algorithms can also deal with problems lacking any convexity, and we consider separately the case of two players, for which stronger results can be obtained.
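
A toy sketch of a Gauss-Seidel (best-response) decomposition for a two-player game with an exact quadratic potential is shown below; each player in turn minimizes its own cost over its own box with the other player's strategy fixed. The cost functions, the box, and the use of scipy's bounded solver are assumptions, and the generalized (shared-constraint) setting and regularized subproblems of the paper are not reproduced.

```python
# Gauss-Seidel best-response sketch for a two-player game with the exact
# potential P(x, y) = (x - 1)^2 + (y + 2)^2 + x*y.
import numpy as np
from scipy.optimize import minimize

cost1 = lambda x, y: (x - 1.0) ** 2 + x * y      # player 1 controls x
cost2 = lambda y, x: (y + 2.0) ** 2 + x * y      # player 2 controls y
box = [(-3.0, 3.0)]

x, y = 0.0, 0.0
for _ in range(50):
    # Each player best-responds over its own box, the other strategy fixed.
    x_new = minimize(lambda v: cost1(v[0], y), [x], bounds=box).x[0]
    y_new = minimize(lambda v: cost2(v[0], x_new), [y], bounds=box).x[0]
    if abs(x_new - x) + abs(y_new - y) < 1e-10:
        x, y = x_new, y_new
        break
    x, y = x_new, y_new

print("approximate equilibrium:", x, y)
```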


SIAM Journal on Optimization | 2010

Sequential Penalty Derivative-Free Methods for Nonlinear Constrained Optimization

Giampaolo Liuzzi; Stefano Lucidi; Marco Sciandrone

We consider the problem of minimizing a continuously differentiable function of several variables subject to smooth nonlinear constraints. We assume that the first-order derivatives of the objective function and of the constraints can be neither calculated nor explicitly approximated. Hence, every minimization procedure must use only a suitable sampling of the problem functions. These problems arise in many industrial and scientific applications, and this motivates the increasing interest in studying derivative-free methods for their solution. The aim of the paper is to extend to a derivative-free context a sequential penalty approach for nonlinear programming. This approach consists of solving the original problem via a sequence of approximate minimizations of a merit function in which the penalization of the constraint violation is progressively increased. In particular, under some standard assumptions, we introduce a general theoretical result regarding the connection between the sampling technique and the updating of the penalization that guarantees convergence to stationary points of the constrained problem. On the basis of this general result, we propose a new method and prove its convergence to stationary points of the constrained problem. The computational behavior of the method has been evaluated both on a set of test problems and on a real application. The obtained results and the comparison with other well-known derivative-free software show the viability of the proposed sequential penalty approach.
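
The sequential penalty idea can be sketched as follows: a penalized merit function is approximately minimized by a derivative-free solver while the penalization of constraint violation is progressively increased. Here scipy's Nelder-Mead is used purely as a generic derivative-free stand-in for the linesearch-based sampling method developed in the paper, and the toy problem and penalty schedule are assumptions.

```python
# Sequential (exterior) penalty sketch with a derivative-free inner solver.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def constraint(x):                 # g(x) <= 0 required
    return x[0] ** 2 + x[1] ** 2 - 1.0

def merit(x, eps):
    # Merit function: objective plus (1/eps) times the constraint violation.
    return objective(x) + max(constraint(x), 0.0) / eps

x = np.array([0.0, 0.0])
eps = 1.0
for _ in range(8):
    res = minimize(lambda z: merit(z, eps), x, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
    x = res.x
    eps *= 0.1                     # progressively increase the penalization
print("approximate solution:", x, "violation:", max(constraint(x), 0.0))
```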


Computational Optimization and Applications | 2010

Concave programming for minimizing the zero-norm over polyhedral sets

Francesco Rinaldi; Fabio Schoen; Marco Sciandrone

Given a nonempty polyhedral set, we consider the problem of finding a vector belonging to it that has the minimum number of nonzero components, i.e., a feasible vector with minimum zero-norm. This combinatorial optimization problem is NP-hard and arises in various fields such as machine learning, pattern recognition, and signal processing. One contribution of this paper is to propose two new smooth approximations of the zero-norm function in which the approximating functions are separable and concave. We formally prove the equivalence between the approximating problems and the original nonsmooth problem; to this aim, we first state, in a general setting, theoretical conditions sufficient to guarantee the equivalence between pairs of problems. Moreover, we define an effective and efficient version of the Frank-Wolfe algorithm for the minimization of concave separable functions over polyhedral sets, in which variables that are null at one iteration are eliminated from all subsequent iterations, with significant savings in computational time, and we prove the global convergence of the method. Finally, we report numerical results on test problems showing both the usefulness of the new concave formulations and the efficiency, in terms of computational time, of the implemented minimization algorithm.
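
A compact sketch of the Frank-Wolfe iteration on a separable concave zero-norm surrogate is given below, using f(x) = sum_i (1 - exp(-alpha * x_i)) over a polyhedron {x >= 0, Ax = b}; because the objective is concave, the vertex returned by the linearized (LP) subproblem can be accepted with unit stepsize. The exponential surrogate, the toy polyhedron, and the use of scipy.optimize.linprog are assumptions, and the variable-fixing speedup described in the paper is omitted.

```python
# Frank-Wolfe sketch for a separable concave zero-norm surrogate over
# {x >= 0, A x = b}: linearize, solve an LP, take a unit step to the vertex.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = rng.uniform(1.0, 2.0, size=3)   # sparse feasible point
b = A @ x_true

alpha = 5.0
x = linprog(c=np.ones(20), A_eq=A, b_eq=b).x        # feasible starting point
for _ in range(30):
    grad = alpha * np.exp(-alpha * x)                # gradient of the surrogate
    lp = linprog(c=grad, A_eq=A, b_eq=b)             # linearized subproblem
    if not lp.success or np.allclose(lp.x, x, atol=1e-9):
        break
    x = lp.x                                         # unit step to the LP vertex

print("nonzeros found:", int(np.sum(x > 1e-6)))
```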

Collaboration


Dive into Marco Sciandrone's collaborations.

Top Co-Authors

Luigi Grippo
Sapienza University of Rome

Stefano Lucidi
Sapienza University of Rome

Giampaolo Liuzzi
Sapienza University of Rome

Laura Palagi
Sapienza University of Rome

Veronica Piccialli
Sapienza University of Rome

Arnaldo Risi
National Research Council