
Publication


Featured research published by Andrea Cassioli.


Computational Optimization and Applications | 2012

Machine learning for global optimization

Andrea Cassioli; David Di Lorenzo; Marco Locatelli; Fabio Schoen; Marco Sciandrone

In this paper we introduce the LeGO (Learning for Global Optimization) approach for global optimization in which machine learning is used to predict the outcome of a computationally expensive global optimization run, based upon a suitable training performed by standard runs of the same global optimization method. We propose to use a Support Vector Machine (although different machine learning tools might be employed) to learn the relationship between the starting point of an algorithm and the final outcome (which is usually related to the function value at the point returned by the procedure). Numerical experiments performed both on classical test functions and on difficult space trajectory planning problems show that the proposed approach can be very effective in identifying good starting points for global optimization.
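The LeGO workflow can be sketched in a few lines: run the optimizer from sampled starting points, label each start by the quality of its outcome, train a classifier, and use it to screen new starts. The sketch below is illustrative only: it uses the Rastrigin test function, a crude coordinate descent standing in for the expensive solver, and a k-nearest-neighbour classifier standing in for the SVM (the paper itself notes that different machine learning tools might be employed); the "good run" threshold is an assumption.

```python
import math, random

def rastrigin(x):
    # classical multimodal test function (illustrative choice)
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def local_search(x, step=0.1, iters=200):
    # crude coordinate-wise descent standing in for a real (expensive) solver
    best = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for d in (-step, step):
                cand = list(best)
                cand[i] += d
                if rastrigin(cand) < rastrigin(best):
                    best, improved = cand, True
        if not improved:
            step /= 2
    return best, rastrigin(best)

random.seed(0)
# training phase: run the optimizer from random starts, label each start
train = []
for _ in range(60):
    x0 = [random.uniform(-5, 5) for _ in range(2)]
    _, f = local_search(x0)
    train.append((x0, f < 1.0))   # "good" if the run ends near the global optimum

def predict(x, k=3):
    # k-NN classifier standing in for the paper's SVM
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, p)), lab) for p, lab in train)
    return sum(lab for _, lab in dists[:k]) >= 2

# screening phase: only launch expensive runs from points predicted "good"
candidates = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
promising = [c for c in candidates if predict(c)]
print(len(promising), "of", len(candidates), "starts kept")
```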


Optimization Methods & Software | 2009

Global optimization of binary Lennard-Jones clusters

Andrea Cassioli; Marco Locatelli; Fabio Schoen

In this paper we present our experience with the optimization of atomic clusters under the binary Lennard–Jones potential. This is a generalization of the single atom type Lennard–Jones model to the case in which atoms of two different types (and ‘sizes’) interact within the same cluster. This problem has a combinatorial structure which increases complexity and requires strategies to be revised in order to take into account such new aspects. Our approach has been a very effective one: we have been able not only to confirm most putative optima listed in the Cambridge Cluster Database, but also to find 95 improved solutions.
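For reference, the binary Lennard-Jones energy sums pair terms whose parameters depend on the types of the two atoms involved. A minimal sketch, with illustrative sigma values rather than the benchmark's actual parameterization:

```python
import itertools, math

def blj_energy(coords, types, sigma=None, eps=1.0):
    # binary Lennard-Jones: the pair parameter depends on the two atom types
    if sigma is None:
        sigma = {('A', 'A'): 1.0, ('A', 'B'): 1.05, ('B', 'B'): 1.1}
    e = 0.0
    for (i, xi), (j, xj) in itertools.combinations(enumerate(coords), 2):
        r = math.dist(xi, xj)
        s = sigma[tuple(sorted((types[i], types[j])))]
        e += 4 * eps * ((s / r) ** 12 - (s / r) ** 6)
    return e

# two atoms at the pair-minimum distance 2**(1/6)*sigma give energy -eps
d = 2 ** (1 / 6) * 1.0
print(round(blj_energy([(0, 0, 0), (d, 0, 0)], ['A', 'A']), 6))  # -> -1.0
```

The global-optimization difficulty comes from the number of local minima of this energy surface growing rapidly with cluster size, compounded here by the combinatorial choice of which atoms get which type.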


BMC Bioinformatics | 2015

An algorithm to enumerate all possible protein conformations verifying a set of distance constraints

Andrea Cassioli; Benjamin Bardiaux; Guillaume Bouvier; Antonio Mucherino; Rafael Alves; Leo Liberti; Michael Nilges; Carlile Lavor; Thérèse E. Malliavin

Background: The determination of protein structures satisfying distance constraints is an important problem in structural biology. Whereas the most common method currently employed is simulated annealing, other methods have been proposed in the literature. Most of them, however, are designed to find one solution only.

Results: In order to explore the feasible conformational space exhaustively, we propose here an interval Branch-and-Prune algorithm (iBP) to solve the Distance Geometry Problem (DGP) associated with protein structure determination. This algorithm is based on a discretization of the problem, obtained by recursively constructing a search space with the structure of a tree, and by verifying whether the generated atomic positions are feasible by making use of pruning devices. The pruning devices used here are directly related to features of protein conformations.

Conclusions: We describe the new algorithm iBP to generate protein conformations satisfying distance constraints, which potentially allows a systematic exploration of the conformational space. The algorithm iBP has been applied to three α-helical peptides.
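The branch-and-prune idea — recursively branch on the discrete candidate positions of the next atom and prune branches that violate interval distance constraints — can be illustrated on a 1D toy analogue. The real iBP works in 3D, where each new atom has two candidate positions from sphere intersections; everything below is a simplified sketch, not the paper's algorithm.

```python
def ibp(n, exact_dist, interval_constraints, tol=1e-9):
    """Enumerate all 1D placements (toy analogue of iBP):
    point i sits at distance exact_dist[i] from point i-1 (two branches, +/-),
    and extra (j, i, lo, hi) interval constraints prune infeasible branches."""
    solutions = []
    def extend(pos):
        i = len(pos)
        if i == n:
            solutions.append(tuple(pos))
            return
        for sign in (+1, -1):          # branch: two candidate positions
            x = pos[-1] + sign * exact_dist[i]
            ok = all(lo - tol <= abs(x - pos[j]) <= hi + tol
                     for (j, k, lo, hi) in interval_constraints if k == i)
            if ok:                      # prune: keep only feasible branches
                extend(pos + [x])
    extend([0.0])                       # fix the first point to remove translations
    return solutions

# 4 points, consecutive distances 1; require |x0 - x3| in [2.5, 3.5]
sols = ibp(4, {1: 1.0, 2: 1.0, 3: 1.0}, [(0, 3, 2.5, 3.5)])
print(sols)  # -> [(0.0, 1.0, 2.0, 3.0), (0.0, -1.0, -2.0, -3.0)]
```

Without the interval constraint the tree would hold 2^3 = 8 leaves; the pruning device cuts it to the two conformations compatible with all constraints, which is the exhaustive-enumeration property the paper emphasizes.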


Computational Optimization and Applications | 2010

Dissimilarity measures for population-based global optimization algorithms

Andrea Cassioli; Marco Locatelli; Fabio Schoen

Very hard optimization problems, i.e., problems with a large number of variables and local minima, have been effectively attacked with algorithms which mix local searches with heuristic procedures in order to widely explore the search space. A Population Based Approach based on a Monotonic Basin Hopping optimization algorithm has turned out to be very effective for this kind of problem. In the resulting algorithm, called Population Basin Hopping, a key role is played by a dissimilarity measure. The basic idea is to maintain a sufficient dissimilarity gap among the individuals in the population in order to explore a wide part of the solution space.

The aim of this paper is to study and computationally compare different dissimilarity measures to be used in the field of Molecular Cluster Optimization, exploring different possibilities fitting the problem characteristics. Several dissimilarities, mainly based on pairwise distances between cluster elements, are introduced and tested. Each dissimilarity measure is defined as a distance between cluster descriptors, which are suitable representations of cluster information that can be extracted during the optimization process.

It will be shown that, although there is no single dissimilarity measure which dominates the others, on the one hand it is extremely beneficial to introduce dissimilarities, and on the other hand it is possible to identify a group of dissimilarity criteria which guarantees the best performance.
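One concrete instance of the idea: use the sorted vector of pairwise interatomic distances as a cluster descriptor, and define the dissimilarity as the distance between descriptors. This particular descriptor is an illustrative choice rather than necessarily one of the paper's; its appeal is that it is invariant to rotation, translation, and relabelling of the atoms.

```python
import itertools, math

def descriptor(cluster):
    # sorted list of all pairwise distances: invariant to rotation,
    # translation and atom relabelling (one of many possible descriptors)
    return sorted(math.dist(a, b) for a, b in itertools.combinations(cluster, 2))

def dissimilarity(c1, c2):
    # Euclidean distance between descriptors of equal-size clusters
    d1, d2 = descriptor(c1), descriptor(c2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rotated = [(0, 0), (0, 1), (-1, 1), (-1, 0)]    # same square, rotated 90 degrees
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(dissimilarity(square, rotated))            # -> 0.0
print(dissimilarity(square, line) > 1)           # -> True
```

In a Population Basin Hopping loop, a new individual would be accepted into the population only if its dissimilarity to every current member exceeds a gap threshold.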


European Journal of Operational Research | 2013

On the convergence of inexact block coordinate descent methods for constrained optimization

Andrea Cassioli; David Di Lorenzo; Marco Sciandrone

We consider the problem of minimizing a smooth function over a feasible set defined as the Cartesian product of convex compact sets. We assume that the dimension of each factor set is huge, so we are interested in studying inexact block coordinate descent methods (possibly combined with column generation strategies). We define a general decomposition framework where different line search based methods can be embedded, and we state global convergence results. Specific decomposition methods based on gradient projection and Frank–Wolfe algorithms are derived from the proposed framework. The numerical results of computational experiments performed on network assignment problems are reported.
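A minimal sketch of inexact block coordinate descent over a product of boxes, taking a single projected-gradient step per block in place of a full line search. This is an illustrative simplification of the paper's general framework (fixed step size, cyclic block order), not its exact method.

```python
def bcd_box(grad, x, bounds, blocks, step=0.1, sweeps=100):
    """Inexact block coordinate descent over a Cartesian product of boxes:
    one projected-gradient step per block per sweep (illustrative sketch)."""
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    for _ in range(sweeps):
        for block in blocks:
            g = grad(x)                   # gradient at the current point
            for i in block:               # inexact update: a single step per block
                lo, hi = bounds[i]
                x[i] = clip(x[i] - step * g[i], lo, hi)
    return x

# minimize (x0 - 2)^2 + (x1 + 1)^2 over [0, 1] x [-0.5, 0.5]
grad = lambda x: [2 * (x[0] - 2), 2 * (x[1] + 1)]
x = bcd_box(grad, [0.5, 0.0], [(0, 1), (-0.5, 0.5)], [[0], [1]])
print([round(v, 3) for v in x])  # -> [1.0, -0.5]
```

Both unconstrained minimizers lie outside the boxes, so the iterates converge to the projected solution on the boundary, which is the behaviour the convergence theory must cover.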


Journal of Global Optimization | 2013

Global optimization of expensive black box problems with a known lower bound

Andrea Cassioli; Fabio Schoen

In this paper we propose an algorithm for the global optimization of computationally expensive black-box functions. For this class of problems no information, such as the gradient, is available, and each function evaluation is highly expensive. In many applications, however, a lower bound on the objective function is known; in this situation we derive a modified version of the algorithm introduced in Gutmann (J Glob Optim 19:201–227, 2001). Using this information produces a significant improvement in the quality of the resulting method, with only a small increase in the computational cost. Extensive computational results are provided which support this statement.
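The surrogate at the heart of Gutmann-style methods is a radial basis function interpolant of the expensive objective; the known lower bound enters when choosing the next evaluation point. The sketch below shows only the surrogate-building step, with a cubic RBF in one dimension and a small dense solver — a simplified illustration, not the paper's algorithm.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems only)
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(xs, fs):
    # cubic RBF phi(r) = r^3 fitted through the sampled (x, f) pairs (1D sketch)
    A = [[abs(xi - xj) ** 3 for xj in xs] for xi in xs]
    lam = solve(A, fs)
    return lambda x: sum(l * abs(x - xi) ** 3 for l, xi in zip(lam, xs))

xs, fs = [0.0, 1.0, 2.0, 3.0], [3.0, 0.5, 1.0, 4.0]
s = rbf_interpolant(xs, fs)
print(all(abs(s(x) - f) < 1e-8 for x, f in zip(xs, fs)))  # -> True
```

The cheap interpolant s is evaluated many times when deciding where to sample next, while the expensive black box is called only at the chosen points.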


Computers & Operations Research | 2011

A heuristic approach for packing identical rectangles in convex regions

Andrea Cassioli; Marco Locatelli

In this paper we propose a heuristic approach for the problem of packing equal rectangles within a convex region. The approach is based on an Iterated Local Search scheme, in which the key step is the perturbation move. Different perturbation moves, both combinatorial and continuous ones, are proposed and compared through extensive computational experiments on a set of test instances. The overall results are quite encouraging.
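The Iterated Local Search scheme itself is generic: alternate local search with a perturbation of the incumbent, keeping improvements. A skeleton on a toy 1D objective — the paper's instances are rectangle packings and its perturbation moves are the main contribution, so the random-jump perturbation here is purely illustrative.

```python
import math, random

def iterated_local_search(f, x0, perturb, local_search, iters=50):
    """Generic ILS skeleton: local search, perturb the incumbent,
    re-optimize, and keep the better of the two (acceptance by improvement)."""
    best = local_search(f, x0)
    for _ in range(iters):
        cand = local_search(f, perturb(best))
        if f(cand) < f(best):
            best = cand
    return best

# toy 1D multimodal objective (illustrative, not a packing instance)
f = lambda x: math.sin(5 * x) + 0.1 * (x - 2) ** 2

def local_search(f, x, step=0.01):
    # greedy grid descent to the nearest local minimum
    while f(x - step) < f(x) or f(x + step) < f(x):
        x = x - step if f(x - step) < f(x + step) else x + step
    return x

random.seed(1)
perturb = lambda x: x + random.uniform(-2, 2)   # the key design choice
x = iterated_local_search(f, 0.0, perturb, local_search)
print(f(x) <= f(local_search(f, 0.0)))          # -> True
```

The scheme never loses ground relative to a single local search; the quality gap between different ILS variants comes entirely from how the perturbation moves the incumbent between basins, which is exactly what the paper's computational comparison measures.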


Archive | 2012

Global Optimization Approaches for Optimal Trajectory Planning

Andrea Cassioli; Dario Izzo; David Di Lorenzo; Marco Locatelli; Fabio Schoen

Optimal trajectory design for interplanetary space missions is an extremely hard problem, mostly because of the very large number of local minimizers that real problems present. Despite the challenges of the task, it is possible, in the preliminary phase, to design low-cost high-energy trajectories with little or no human supervision. In many cases, the discovered paths are as cheap as, or even cheaper than, the ones found by experts through lengthy and difficult processes. More interestingly, many of the tricks that experts use to design the trajectories, such as traveling along an orbit in fractional resonance with a given planet, naturally emerge from the computed solutions, even though neither the model nor the solver was explicitly designed to exploit such knowledge. In this chapter we analyze the modelling techniques that computational experiments have shown to be most successful, along with some of the algorithms that might be used to solve such problems.


Optimization Letters | 2009

A convergent decomposition method for box-constrained optimization problems

Andrea Cassioli; Marco Sciandrone

In this work we consider the problem of minimizing a continuously differentiable function over a feasible set defined by box constraints. We present a decomposition method based on the solution of a sequence of subproblems. In particular, we state conditions on the rule for selecting the subproblem variables sufficient to ensure the global convergence of the generated sequence without convexity assumptions. The conditions require selecting suitable variables (related to the violation of the optimality conditions) to guarantee theoretical convergence properties, while leaving the freedom to select any other group of variables to accelerate convergence.
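The violation-based selection rule can be made concrete for box constraints: at a stationary point the gradient component must vanish for variables strictly inside their bounds, be nonnegative at a lower bound, and be nonpositive at an upper bound, so a natural choice is the variable whose component most violates these conditions. A sketch of such a rule (illustrative, not the paper's exact criterion):

```python
def max_violation_index(x, g, bounds):
    """Pick the variable most violating the box-constrained optimality
    conditions: at a stationary point, g[i] >= 0 if x[i] sits at its
    lower bound, g[i] <= 0 at its upper bound, and g[i] = 0 otherwise."""
    def violation(i):
        lo, hi = bounds[i]
        if x[i] <= lo:
            return max(-g[i], 0.0)   # pushing below the lower bound
        if x[i] >= hi:
            return max(g[i], 0.0)    # pushing above the upper bound
        return abs(g[i])             # free variable: gradient must vanish
    return max(range(len(x)), key=violation)

# f(x) = x0^2 + 10*x1^2 on [-1, 1]^2 at x = (0.5, 0.5): gradient is (1, 10)
i = max_violation_index([0.5, 0.5], [1.0, 10.0], [(-1, 1), (-1, 1)])
print(i)  # -> 1
```

A decomposition step would then build its working set around this variable, optionally adding any other variables to speed up convergence, as the abstract allows.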


European Journal of Operational Research | 2013

An incremental least squares algorithm for large scale linear classification

Andrea Cassioli; A. Chiavaioli; Costanzo Manes; Marco Sciandrone

In this work we consider the problem of training a linear classifier by assuming that the number of training samples is huge (in particular, the data may be larger than the memory capacity). We propose to adopt a linear least-squares formulation of the problem and an incremental recursive algorithm which requires storing only a square matrix (whose dimension is equal to the number of features of the data). The algorithm (very simple to implement) converges to the solution using each training datum once, so that it effectively handles possible memory issues and is a viable method for linear large-scale classification and for real-time applications, provided that the number of features of the data is not too large (say of the order of thousands). Extensive computational experiments show that the proposed algorithm is at least competitive with state-of-the-art algorithms for large scale linear classification.
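The incremental recursive scheme the abstract describes matches the classical recursive least squares pattern: maintain a d x d matrix (the inverse of the regularized Gram matrix) and update it per sample with the Sherman-Morrison formula, so each datum is touched exactly once. A minimal sketch, with the small regularization delta as an assumed detail:

```python
def rls_fit(data, d, delta=1e-3):
    """Incremental least squares via the Sherman-Morrison update: stores
    only a d x d matrix P and the weight vector, one pass over the stream."""
    P = [[(1.0 / delta if i == j else 0.0) for j in range(d)] for i in range(d)]
    w = [0.0] * d
    for x, y in data:
        Px = [sum(P[i][j] * x[j] for j in range(d)) for i in range(d)]
        denom = 1.0 + sum(x[i] * Px[i] for i in range(d))
        err = y - sum(w[i] * x[i] for i in range(d))
        for i in range(d):
            w[i] += Px[i] * err / denom     # gain vector k = P x / denom
        for i in range(d):
            for j in range(d):
                P[i][j] -= Px[i] * Px[j] / denom   # rank-1 downdate of P
    return w

# noiseless stream from y = 2*x0 - 3*x1: a single pass recovers the weights
import random
random.seed(0)
data = []
for _ in range(200):
    x = [random.random(), random.random()]
    data.append((x, 2 * x[0] - 3 * x[1]))
w = rls_fit(data, 2)
print([round(v, 3) for v in w])  # -> [2.0, -3.0]
```

Because only P (d x d) and w (d) persist, the memory footprint is independent of the number of samples, which is the property the paper exploits for out-of-memory datasets.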

Collaboration


Dive into Andrea Cassioli's collaborations.

Top Co-Authors

Carlile Lavor

Rio de Janeiro State University


Antonio Mucherino

Institut de Recherche en Informatique et Systèmes Aléatoires
