Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Giovanni Fasano is active.

Publication


Featured research published by Giovanni Fasano.


III European Conference on Computational Mechanics - Solids, Structures and Coupled Problems in Engineering | 2006

Particle Swarm Optimization: efficient globally convergent modifications

Emilio F. Campana; Giovanni Fasano; Daniele Peri; Antonio Pinto

In this paper we consider the Particle Swarm Optimization (PSO) algorithm [1], [2], in the class of Evolutionary Algorithms, for the solution of global optimization problems. We analyze a couple of issues aimed at improving both the effectiveness and the efficiency of PSO. In particular, we first recognize that, in accordance with the results in [3], the configuration of initial points required by the method may be a crucial issue for the efficiency of the PSO iteration. Therefore, a promising strategy for generating initial points is provided in the paper.
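
For reference, here is a minimal sketch of the standard constricted PSO iteration the abstract refers to; the coefficient values, the plain uniform initialization, and the test function are illustrative assumptions rather than the paper's setup (the paper precisely argues that a better initialization strategy is needed).

    import numpy as np

    def pso(f, lb, ub, n_particles=20, n_iter=200, chi=0.729, c1=2.05, c2=2.05, seed=0):
        # Standard constricted PSO (Clerc-Kennedy coefficients as an example).
        rng = np.random.default_rng(seed)
        dim = len(lb)
        # Initial configuration: plain uniform sampling here; the paper argues
        # this choice is crucial and proposes a better strategy.
        x = rng.uniform(lb, ub, size=(n_particles, dim))
        v = np.zeros((n_particles, dim))
        p = x.copy()                           # personal best positions
        fp = np.apply_along_axis(f, 1, x)      # personal best values
        g = p[np.argmin(fp)]                   # global best position
        for _ in range(n_iter):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            v = chi * (v + c1 * r1 * (p - x) + c2 * r2 * (g - x))
            x = np.clip(x + v, lb, ub)
            fx = np.apply_along_axis(f, 1, x)
            better = fx < fp
            p[better], fp[better] = x[better], fx[better]
            g = p[np.argmin(fp)]
        return g, fp.min()

    # Example: minimize the Rosenbrock function on [-5, 5]^2.
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    best_x, best_f = pso(rosen, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))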


Journal of Global Optimization | 2010

Dynamic analysis for the selection of parameters and initial population, in particle swarm optimization

Emilio F. Campana; Giovanni Fasano; Antonio Pinto

In this paper we consider the evolutionary Particle Swarm Optimization (PSO) algorithm for the minimization of a computationally costly nonlinear function, in global optimization frameworks. We study a reformulation of the standard PSO iteration (Clerc and Kennedy in IEEE Trans Evol Comput 6(1), 2002; Kennedy and Eberhart in IEEE Service Center, Piscataway, IV:1942–1948, 1995) as a linear dynamic system. We carry out our analysis on a generalized PSO iteration, which includes the standard one proposed in the literature. We analyze three issues for the resulting generalized PSO: first, for any particle we give both theoretical and numerical evidence for an efficient choice of the starting point. Then, we study the cases in which either deterministic or uniformly randomly distributed coefficients are considered in the scheme. Finally, some convergence analysis is provided, along with necessary conditions to avoid diverging trajectories. The results proved in the paper can be immediately applied to the standard PSO iteration.
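
The reformulation can be summarized as follows; this is a sketch in our own notation (constriction form, single particle, deterministic coefficients), not necessarily the exact generalized scheme analyzed in the paper. Writing the PSO step as

    v_{k+1} = \chi \left[ w\, v_k + c_1 r_1 (p - x_k) + c_2 r_2 (g - x_k) \right],
    \qquad x_{k+1} = x_k + v_{k+1},

and setting \omega = \chi (c_1 r_1 + c_2 r_2), each particle obeys the linear dynamic system

    \begin{pmatrix} v_{k+1} \\ x_{k+1} \end{pmatrix}
    =
    \begin{pmatrix} \chi w I & -\omega I \\ \chi w I & (1 - \omega) I \end{pmatrix}
    \begin{pmatrix} v_k \\ x_k \end{pmatrix}
    +
    \begin{pmatrix} \chi (c_1 r_1 p + c_2 r_2 g) \\ \chi (c_1 r_1 p + c_2 r_2 g) \end{pmatrix},

i.e. X_{k+1} = A X_k + b. With deterministic coefficients, the eigenvalues of A determine whether the free response decays, which is the handle used to select parameters and starting points and to characterize non-diverging trajectories.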


Optimization Methods & Software | 2009

On the geometry phase in model-based algorithms for derivative-free optimization

Giovanni Fasano; José Luis Morales; Jorge Nocedal

A numerical study of model-based methods for derivative-free optimization is presented. These methods typically include a geometry phase whose goal is to ensure the adequacy of the interpolation set. The paper studies the performance of an algorithm that dispenses with the geometry phase altogether (and therefore does not attempt to control the position of the interpolation set). Data are presented describing the evolution of the condition number of the interpolation matrix and the accuracy of the gradient estimate. The experiments are performed on smooth unconstrained optimization problems with dimensions ranging between 2 and 15.
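
The two quantities tracked in the experiments, the conditioning of the interpolation system and the accuracy of the resulting gradient estimate, can be illustrated on a toy linear interpolation model; this sketch is ours and is much simpler than the quadratic models used by the methods under study.

    import numpy as np

    def linear_model_gradient(f, points):
        # Fit m(x) = f(x0) + g @ (x - x0) through the interpolation set and
        # return the model gradient g with the conditioning of the system.
        x0, rest = points[0], points[1:]
        S = rest - x0                          # displacement matrix
        rhs = np.array([f(xi) for xi in rest]) - f(x0)
        g = np.linalg.solve(S, rhs)
        return g, np.linalg.cond(S)

    f = lambda z: np.sin(z[0]) + z[1] ** 2
    x0 = np.array([0.3, -0.7])
    # A well-poised set (coordinate steps) versus a nearly degenerate one.
    good = np.vstack([x0, x0 + [1e-3, 0.0], x0 + [0.0, 1e-3]])
    bad = np.vstack([x0, x0 + [1e-3, 1e-3], x0 + [1.0001e-3, 1e-3]])
    for pts in (good, bad):
        g, kappa = linear_model_gradient(f, pts)
        err = np.linalg.norm(g - np.array([np.cos(x0[0]), 2 * x0[1]]))
        print(f"cond = {kappa:.1e}, gradient error = {err:.1e}")

As the interpolation points become nearly collinear, the condition number blows up and the gradient estimate degrades, which is exactly the pathology a geometry phase is meant to prevent.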


Applied Mathematics and Computation | 2013

Particle Swarm Optimization with non-smooth penalty reformulation, for a complex portfolio selection problem

Marco Corazza; Giovanni Fasano; Riccardo Gusso

In the classical model for portfolio selection the risk is measured by the variance of returns. It is well known that, if returns are not elliptically distributed, this may lead to inaccurate investment decisions. To address this issue, several alternative measures of risk have been proposed. In this contribution we focus on a class of measures that uses the information contained both in the lower and in the upper tail of the distribution of returns. We consider a nonlinear mixed-integer portfolio selection model which takes into account several constraints used in fund management practice. The latter problem is NP-hard in general, and exact algorithms for its minimization that are both effective and efficient are still sought at present. Thus, to approximately solve this model we apply the Particle Swarm Optimization (PSO) heuristic. Since PSO was originally conceived for unconstrained global optimization problems, we apply it to a novel reformulation of our mixed-integer model, where a standard exact penalty function is introduced.
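
A sketch of the reformulation step: constraints g_i(x) <= 0 are folded into the objective through a nondifferentiable (l1-type) exact penalty, so that PSO can be run on an unconstrained problem. The toy budget constraint and the penalty parameter below are illustrative assumptions, not the paper's model.

    import numpy as np

    def exact_penalty(f, constraints, eps=1e-3):
        # l1 exact penalty: P(x) = f(x) + (1/eps) * sum_i max(0, g_i(x)).
        # Under suitable assumptions and for eps small enough, unconstrained
        # minimizers of P coincide with the constrained minimizers of f.
        def P(x):
            violation = sum(max(0.0, g(x)) for g in constraints)
            return f(x) + violation / eps
        return P

    # Toy portfolio-like instance: risk proxy w' Sigma w with a budget constraint.
    Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
    risk = lambda w: w @ Sigma @ w
    budget = [lambda w: abs(w.sum() - 1.0) - 1e-8]   # |1'w - 1| <= tolerance
    P = exact_penalty(risk, budget)
    # P is nonsmooth but defined everywhere, so a derivative-free heuristic
    # such as PSO can be applied to it directly.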


Applied Soft Computing | 2016

Parameter selection in synchronous and asynchronous deterministic particle swarm optimization for ship hydrodynamics problems

Andrea Serani; Cecilia Leotardi; Umberto Iemma; Emilio F. Campana; Giovanni Fasano; Matteo Diez

Highlights:
- Parametric study of deterministic PSO settings under limited computational resources.
- Comparison of synchronous and asynchronous implementations.
- Identification of the most significant parameters, based on more than 40,000 optimizations.
- Identification of the most promising and robust setup for simulation-based problems.
- Hydrodynamic hull-form optimization of a high-speed catamaran.

Deterministic optimization algorithms are very attractive when the objective function is computationally expensive and a statistical analysis of the optimization outcomes would therefore be too costly. Among deterministic methods, deterministic particle swarm optimization (DPSO) has several attractive characteristics, such as the simplicity of the heuristics, the ease of implementation, and its often remarkable effectiveness. The performance of DPSO depends on four main settings: the number of swarm particles, their initialization, the set of coefficients defining the swarm behavior, and (for box-constrained optimization) the method used to handle the box constraints. Here, a parametric study of DPSO is presented, with application to simulation-based design in ship hydrodynamics. The objective is the identification of the most promising setup for both synchronous and asynchronous implementations of DPSO. The analysis is performed under the assumption of limited computational resources and a large computational burden for each objective function evaluation. The analysis is conducted using 100 analytical test functions (with dimensionality from two to fifty) and three performance criteria, varying the swarm size, initialization, coefficients, and the method for the box constraints, resulting in more than 40,000 optimizations. The most promising setup is applied to the hull-form optimization of a high-speed catamaran, for resistance reduction in calm water at fixed speed, using a potential-flow solver.
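
For reference, deterministic PSO (DPSO) simply replaces the random coefficients of the standard iteration with fixed values, so that every run is exactly repeatable; the particular values below are an illustrative choice, one point in the parameter space explored by the study.

    def dpso_step(x, v, p, g, chi=0.729, c1=2.05, c2=2.05):
        # One deterministic PSO step: the random multipliers r1, r2 of the
        # standard iteration are fixed (here to 1), removing all stochasticity.
        v = chi * (v + c1 * (p - x) + c2 * (g - x))
        return x + v, v

    # Synchronous implementation: evaluate the whole swarm, then update the
    # shared best once per iteration. Asynchronous implementation: refresh the
    # shared best after every single evaluation, so later particles in the same
    # sweep already exploit it; this is attractive when expensive evaluations
    # run in parallel with heterogeneous cost.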


Computational Optimization and Applications | 2006

A Truncated Nonmonotone Gauss-Newton Method for Large-Scale Nonlinear Least-Squares Problems

Giovanni Fasano; Francesco Lampariello; Marco Sciandrone

In this paper, a Gauss-Newton method is proposed for the solution of large-scale nonlinear least-squares problems, by introducing a truncation strategy in the method presented in [9]. First, sufficient conditions are established ensuring the convergence of an iterative method that employs a truncation scheme for computing the search direction as an approximate solution of a Gauss-Newton type equation. Then, a specific truncated Gauss-Newton algorithm is described, whose global convergence is ensured under standard assumptions, together with a superlinear convergence rate in the zero-residual case. The results of computational experiments on a set of standard test problems are reported.
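
The truncation idea can be sketched as follows: the Gauss-Newton equation (J^T J) d = -J^T r is solved only approximately by conjugate gradients, stopping once the relative residual falls below a forcing term. This is a generic inexact-Newton illustration with invented tolerances, not the paper's exact scheme.

    import numpy as np

    def truncated_gauss_newton_step(J, r, eta=0.1, max_cg=50):
        # Approximately solve (J' J) d = -J' r by CG, truncated when
        # ||J' J d + J' r|| <= eta * ||J' r||   (inexact-Newton forcing term).
        A = J.T @ J
        b = -J.T @ r
        d = np.zeros_like(b)
        res = b.copy()                # residual b - A d
        p = res.copy()
        tol = eta * np.linalg.norm(b)
        for _ in range(max_cg):
            if np.linalg.norm(res) <= tol:
                break
            Ap = A @ p
            alpha = res @ res / (p @ Ap)
            d += alpha * p
            new_res = res - alpha * Ap
            beta = new_res @ new_res / (res @ res)
            res = new_res
            p = res + beta * p
        return d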


Computational Optimization and Applications | 2013

Preconditioning Newton–Krylov methods in nonconvex large scale optimization

Giovanni Fasano; Massimo Roma

We consider an iterative preconditioning technique for nonconvex large scale optimization. First, we refer to the solution of large scale indefinite linear systems by means of a Krylov subspace method, and describe the iterative construction of a preconditioner which requires neither matrix products nor matrix storage. The set of directions generated by the Krylov subspace method is used, as a by-product, to provide an approximate inverse preconditioner. Then, we test our preconditioner within Truncated Newton schemes for large scale unconstrained optimization, where we generalize the truncation rule of Nash and Sofer (Oper. Res. Lett. 9:219–221, 1990) to the indefinite case. We use a Krylov subspace method both to approximately solve the Newton equation and to construct the preconditioner to be used at the current outer iteration. Extensive numerical experience shows that the proposed preconditioning strategy, compared with the unpreconditioned strategy and with PREQN (Morales and Nocedal in SIAM J. Optim. 10:1079–1096, 2000), may lead to a reduction of the overall number of inner iterations. Finally, we show that our proposal has some similarities with the Limited Memory Preconditioners of Gratton et al. (SIAM J. Optim. 21:912–935, 2011).
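
The flavor of the construction can be conveyed in a few lines: the conjugate directions p_i generated by CG, together with their curvatures p_i' A p_i, reproduce the action of A^{-1} on their span, and can be assembled into a symmetric positive definite approximate-inverse operator applied without storing A. This generic conjugate-direction sketch is in the spirit of the proposal (and of limited-memory preconditioners), not the authors' exact formula.

    import numpy as np

    def make_preconditioner(P, curv):
        # P: columns are A-conjugate directions from a CG run (full column
        # rank); curv[i] = p_i' A p_i > 0. The operator
        #     H = P diag(1/curv) P' + (I - Q Q'),  Q = orthonormal basis of span(P),
        # is symmetric positive definite: the first term reproduces A^{-1}
        # on span(P), the second acts as the identity on the complement.
        Q, _ = np.linalg.qr(P)
        def apply(v):
            return P @ ((P.T @ v) / curv) + (v - Q @ (Q.T @ v))
        # A Krylov-based Newton solver can then use apply(.) as M^{-1}.
        return apply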


Optimization Letters | 2009

A nonmonotone truncated Newton–Krylov method exploiting negative curvature directions, for large scale unconstrained optimization

Giovanni Fasano; Stefano Lucidi

We propose a new truncated Newton method for large scale unconstrained optimization, where a Conjugate Gradient (CG)-based technique is adopted to solve Newton's equation. At each iteration, the Krylov method computes a pair of search directions: the first approximates the Newton step of the convex quadratic model, while the second is a suitable negative curvature direction. A test based on the quadratic model of the objective function is used to select the more promising of the two search directions. Both this selection rule and the CG stopping criterion for approximately solving Newton's equation rely strongly on conjugacy conditions. An appropriate linesearch technique is adopted for each search direction: a nonmonotone stabilization is used with the approximate Newton step, while an Armijo-type linesearch is used for the negative curvature direction. The proposed algorithm is both globally and superlinearly convergent to stationary points satisfying the second order necessary conditions. We carry out extensive numerical experiments to test our proposal.
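
A common way to obtain such a pair of directions is to monitor the curvature inside CG while solving Newton's equation H d = -grad: the accumulated iterate approximates the Newton step, and the current conjugate direction is returned as a negative curvature direction as soon as nonpositive curvature is met. The sketch below is generic and omits the paper's conjugacy-based selection test and its linesearches.

    import numpy as np

    def cg_with_negative_curvature(H, grad, tol=1e-8, max_iter=100):
        # Run CG on H d = -grad. Return (newton_dir, neg_curv_dir), where
        # neg_curv_dir satisfies p' H p <= 0 if such a p is encountered.
        d = np.zeros_like(grad)
        res = -grad.copy()
        p = res.copy()
        for _ in range(max_iter):
            Hp = H @ p
            curv = p @ Hp
            if curv <= 0:
                return d, p            # negative curvature direction found
            alpha = res @ res / curv
            d += alpha * p
            new_res = res - alpha * Hp
            if np.linalg.norm(new_res) <= tol:
                break
            beta = new_res @ new_res / (res @ res)
            res = new_res
            p = res + beta * p
        return d, None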


Studies in computational intelligence | 2015

Globally Convergent Hybridization of Particle Swarm Optimization Using Line Search-Based Derivative-Free Techniques

Andrea Serani; Matteo Diez; Emilio F. Campana; Giovanni Fasano; Daniele Peri; Umberto Iemma

The hybrid use of exact and heuristic derivative-free methods for global unconstrained optimization problems is presented. Many real-world problems are modeled by computationally expensive functions, such as problems in simulation-based design of complex engineering systems. Objective function values are often provided by systems of partial differential equations, solved by computationally expensive black-box tools. The objective function is typically noisy and its derivatives are often not available. On the one hand, the use of exact optimization methods might be computationally too expensive, especially if asymptotic convergence properties are sought. On the other hand, heuristic methods do not guarantee the stationarity of their final solutions. Nevertheless, heuristic methods are usually able to provide an approximate solution at a reasonable computational cost, and have been widely applied to real-world simulation-based design optimization problems. Herein, an overall hybrid algorithm combining the appealing properties of both exact and heuristic methods is discussed, with focus on Particle Swarm Optimization (PSO) and line search-based derivative-free algorithms. The theoretical properties of the hybrid algorithm are detailed, in terms of the stationarity of its limit points. Numerical results are presented for a specific test function and for two real-world optimization problems in ship hydrodynamics.
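
The hybridization can be caricatured in a few lines: a heuristic global phase (PSO) is interleaved with, or followed by, a line-search-based derivative-free local phase whose limit points are provably stationary. The compass-search refinement below is a generic stand-in for that local step, with invented tolerances.

    import numpy as np

    def compass_refine(f, x, step=0.1, tol=1e-6):
        # Derivative-free local refinement: poll the +/- coordinate
        # directions, halving the step on failure; stop when step < tol.
        fx = f(x)
        dim = len(x)
        while step > tol:
            improved = False
            for i in range(dim):
                for s in (+step, -step):
                    y = x.copy()
                    y[i] += s
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
            if not improved:
                step *= 0.5
        return x, fx

    # Hybrid outline (using the pso sketch given earlier):
    #   x_heur, _ = pso(f, lb, ub)                # global exploration
    #   x_loc, f_loc = compass_refine(f, x_heur)  # local refinement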


Siam Journal on Optimization | 2014

A Linesearch-Based Derivative-Free Approach for Nonsmooth Constrained Optimization

Giovanni Fasano; Giampaolo Liuzzi; Stefano Lucidi; Francesco Rinaldi

In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence toward stationary points, using the Clarke–Jahn directional derivative. In the second part, we consider inequality constrained optimization problems where both the objective function and the constraints can possibly be nonsmooth. In this case, we first split the constraints into two subsets: difficult general nonlinear constraints and simple bound constraints on the variables. Then, we use an exact penalty function to tackle the difficult constraints and we prove that the original problem can be reformulated as the bound-constrained minimization of the proposed exact penalty function. Finally, we use the framework developed for the bound-constrained case to solve the penalized problem. Moreover, we prove that every accumulation point…
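
For reference, stationarity in this nonsmooth setting is phrased through the Clarke–Jahn generalized directional derivative; the standard Clarke definition is recalled below (the Clarke–Jahn variant additionally restricts y and y + td to the feasible set):

    f^{\circ}(x; d) \;=\; \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t d) - f(y)}{t},

and x is stationary for the bound-constrained problem when f^{\circ}(x; d) \ge 0 for every direction d that is feasible at x.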

Collaboration


Dive into Giovanni Fasano's collaborations.

Top Co-Authors

Massimo Roma (Sapienza University of Rome)
Matteo Diez (National Research Council)
Andrea Serani (National Research Council)
Stefano Lucidi (Sapienza University of Rome)
Giampaolo Liuzzi (Sapienza University of Rome)
Marco Corazza (Ca' Foscari University of Venice)
Andrea Ellero (Ca' Foscari University of Venice)