Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Giampaolo Liuzzi is active.

Publication


Featured research published by Giampaolo Liuzzi.


IEEE Transactions on Magnetics | 2003

Multiobjective optimization techniques for the design of induction motors

Giampaolo Liuzzi; Stefano Lucidi; Francesco Parasiliti; Marco Villani

This paper deals with the optimization problem of induction motor design. In order to tackle all the conflicting goals that define the problem, the use of multiobjective optimization is investigated. The numerical results show that the approach is viable.
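A standard way to handle conflicting design goals is weighted-sum scalarization. The sketch below is purely illustrative and not the paper's motor model: `f1` and `f2` are hypothetical objectives (think material cost versus power loss), and sweeping the weight traces points on the Pareto front.

```python
def f1(x):
    """Hypothetical objective 1 (e.g. material cost) -- not the paper's model."""
    return x * x

def f2(x):
    """Hypothetical objective 2 (e.g. power loss)."""
    return (x - 2.0) ** 2

def weighted_sum_min(w, grid):
    """Minimize the scalarized objective w*f1 + (1-w)*f2 over a grid."""
    return min(grid, key=lambda x: w * f1(x) + (1.0 - w) * f2(x))

grid = [i / 100.0 for i in range(301)]          # x in [0, 3]
weights = (0.0, 0.25, 0.5, 0.75, 1.0)
# each weight yields one Pareto-optimal trade-off between f1 and f2
front = [weighted_sum_min(w, grid) for w in weights]
```

Each entry of `front` is the minimizer of one scalarized problem; together they approximate the trade-off curve between the two objectives.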


Siam Journal on Optimization | 2010

Sequential Penalty Derivative-Free Methods for Nonlinear Constrained Optimization

Giampaolo Liuzzi; Stefano Lucidi; Marco Sciandrone

We consider the problem of minimizing a continuously differentiable function of several variables subject to smooth nonlinear constraints. We assume that the first order derivatives of the objective function and of the constraints can be neither calculated nor explicitly approximated. Hence, every minimization procedure must use only a suitable sampling of the problem functions. These problems arise in many industrial and scientific applications, and this motivates the increasing interest in studying derivative-free methods for their solution. The aim of the paper is to extend to a derivative-free context a sequential penalty approach for nonlinear programming. This approach consists in solving the original problem by a sequence of approximate minimizations of a merit function where penalization of constraint violation is progressively increased. In particular, under some standard assumptions, we introduce a general theoretical result regarding the connections between the sampling technique and the updating of the penalty parameter that together guarantee convergence to stationary points of the constrained problem. Building on this general result, we propose a new method and prove its convergence to stationary points of the constrained problem. The computational behavior of the method has been evaluated both on a set of test problems and on a real application. The results, together with a comparison against well-known derivative-free solvers, show the viability of the proposed sequential penalty approach.
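The core idea, stripped of the paper's convergence machinery, can be sketched in a few lines: minimize a penalized merit function with a derivative-free inner solver, then increase the penalty parameter and repeat. Everything below (the quadratic penalty, the coordinate search, the test problem, the update factor) is a made-up illustration, not the paper's algorithm.

```python
def coord_search(merit, x, step=0.5, tol=1e-6):
    """Tiny derivative-free coordinate search: probe +/- step along each
    axis, halve the step when no probe improves the merit value."""
    x = list(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                if merit(y) < merit(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
    return x

def sequential_penalty(f, g, x0, rho=1.0, rounds=8):
    """Minimize f s.t. g(x) <= 0 by derivative-free minimization of a
    penalized merit function with progressively larger penalty rho."""
    x = list(x0)
    for _ in range(rounds):
        merit = lambda z, r=rho: f(z) + r * max(0.0, g(z)) ** 2
        x = coord_search(merit, x)
        rho *= 10.0                     # progressively increase penalization
    return x

f = lambda z: z[0] ** 2 + z[1] ** 2     # toy objective
g = lambda z: 1.0 - z[0]                # feasible set: z[0] >= 1
x = sequential_penalty(f, g, [0.0, 0.0])
```

For this toy problem the constrained minimizer is (1, 0); each round's unconstrained minimizer moves closer to it as the penalty grows.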


Computational Optimization and Applications | 2010

A DIRECT-based approach exploiting local minimizations for the solution of large-scale global optimization problems

Giampaolo Liuzzi; Stefano Lucidi; Veronica Piccialli

In this paper we propose a new algorithm for solving difficult large-scale global optimization problems. We draw our inspiration from the well-known DIRECT algorithm which, by exploiting the objective function behavior, produces a set of points that tries to cover the most interesting regions of the feasible set. Unfortunately, it is well known that this strategy suffers when the dimension of the problem increases. As a first step we define a multi-start algorithm using DIRECT as a deterministic generator of starting points. Then, the new algorithm consists in repeatedly applying the previous multi-start algorithm on suitable modifications of the variable space that exploit the information gained during the optimization process. The efficiency of the new algorithm is demonstrated by extensive numerical experiments on both standard test problems and the optimization of the Morse potential of molecular clusters.
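The multi-start idea can be illustrated without any of the DIRECT machinery: generate starting points deterministically, run a local search from each, and keep the best result. In this toy sketch evenly spaced points stand in for the DIRECT-generated samples, and the multimodal test function is made up.

```python
def local_descent(f, x, step=0.25, tol=1e-6):
    """Derivative-free 1-D local search: probe +/- step, shrink on failure."""
    while step > tol:
        moved = False
        for d in (step, -step):
            if f(x + d) < f(x):
                x += d
                moved = True
        if not moved:
            step *= 0.5
    return x

def multistart(f, lo, hi, n_starts=7):
    """Deterministic multistart: evenly spaced starting points stand in for
    the DIRECT-generated sample points; the best local result wins."""
    starts = [lo + (hi - lo) * i / (n_starts - 1) for i in range(n_starts)]
    return min((local_descent(f, s) for s in starts), key=f)

# toy function with two unequal wells: global minimum near x = -1
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
x_best = multistart(f, -2.0, 2.0)
```

A single local search from a bad start would be trapped in the shallower right-hand well; the deterministic spread of starts is what recovers the deeper one.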


Journal of Global Optimization | 2010

A partition-based global optimization algorithm

Giampaolo Liuzzi; Stefano Lucidi; Veronica Piccialli

This paper is devoted to the study of partition-based deterministic algorithms for global optimization of Lipschitz-continuous functions without requiring knowledge of the Lipschitz constant. First we introduce a general scheme of a partition-based algorithm. Then we focus on selection strategies that exploit the information on the objective function. We propose two strategies. The first one is based on the knowledge of the global optimum value of the objective function. In this case the selection strategy is able to guarantee convergence of every infinite sequence of trial points to global minimum points. The second one does not require any a priori knowledge on the objective function and tries to exploit information on the objective function gathered as the algorithm progresses. In this case, from a theoretical point of view, we can guarantee the so-called everywhere-dense convergence of the algorithm.
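A minimal partition-based scheme, far simpler than the paper's, looks like this: maintain a list of cells, and alternate between refining the cell with the most promising sample (exploitation) and the largest cell (exploration). The exploration steps are what make the trial points everywhere dense. The interleaving rule and the quadratic test function below are illustrative assumptions.

```python
def partition_search(f, lo, hi, iters=60):
    """Toy partition-based global search on [lo, hi]: on even steps bisect
    the interval with the best midpoint value, on odd steps the widest one."""
    cells = [(lo, hi)]
    for k in range(iters):
        if k % 2 == 0:   # exploitation: best observed midpoint
            a, b = min(cells, key=lambda c: f((c[0] + c[1]) / 2.0))
        else:            # exploration: largest remaining cell
            a, b = max(cells, key=lambda c: c[1] - c[0])
        cells.remove((a, b))
        m = (a + b) / 2.0
        cells += [(a, m), (m, b)]
    # return the best sampled midpoint
    return min(((a + b) / 2.0 for a, b in cells), key=f)

x = partition_search(lambda t: (t - 0.7) ** 2, 0.0, 1.0)
```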


Siam Journal on Optimization | 2006

A Derivative-Free Algorithm for Linearly Constrained Finite Minimax Problems

Giampaolo Liuzzi; Stefano Lucidi; Marco Sciandrone

In this paper we propose a new derivative-free algorithm for linearly constrained finite minimax problems. Due to the nonsmoothness of this class of problems, standard derivative-free algorithms can locate only points which satisfy weak necessary optimality conditions. In this work we define a new derivative-free algorithm which is globally convergent toward standard stationary points of the finite minimax problem. To this end, we convert the original problem into a smooth one by using a smoothing technique based on the exponential penalty function of Kort and Bertsekas. This technique depends on a smoothing parameter which controls the approximation to the finite minimax problem. The proposed method is based on a sampling of the smooth function along a suitable search direction and on a particular updating rule for the smoothing parameter that depends on the sampling stepsize. Numerical results on a set of standard minimax test problems are reported.
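The exponential smoothing at the heart of this construction is the log-sum-exp bound: for a smoothing parameter mu > 0, max_i f_i <= mu * log(sum_i exp(f_i / mu)) <= max_i f_i + mu * log(m). The snippet below shows only this smoothing step on fixed values (the sampling algorithm and parameter-update rule of the paper are not reproduced).

```python
import math

def smooth_max(vals, mu):
    """Exponential (log-sum-exp) smoothing of max(vals), in the spirit of
    the Kort-Bertsekas penalty: a smooth upper bound on the max that
    tightens as the smoothing parameter mu shrinks."""
    top = max(vals)  # shift to avoid overflow in exp
    return top + mu * math.log(sum(math.exp((v - top) / mu) for v in vals))

vals = [1.0, 2.5, 2.4]
# max(vals) <= smooth_max(vals, mu) <= max(vals) + mu * log(len(vals))
coarse = smooth_max(vals, 0.5)
tight = smooth_max(vals, 0.001)
```

Driving mu toward zero recovers the original minimax objective, which is exactly why the paper ties the smoothing parameter to the sampling stepsize.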


Computational Optimization and Applications | 2012

Derivative-free methods for bound constrained mixed-integer optimization

Giampaolo Liuzzi; Stefano Lucidi; Francesco Rinaldi

We consider the problem of minimizing a continuously differentiable function of several variables subject to simple bound constraints where some of the variables are restricted to take integer values. We assume that the first order derivatives of the objective function can be neither calculated nor approximated explicitly. This class of mixed integer nonlinear optimization problems arises frequently in many industrial and scientific applications and this motivates the increasing interest in the study of derivative-free methods for their solution. The continuous variables are handled by a linesearch strategy whereas to tackle the discrete ones we employ a local search-type approach. We propose different algorithms which are characterized by the way the current iterate is updated and by the stationarity conditions satisfied by the limit points of the sequences they produce.
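The division of labor described above, a linesearch-style strategy for the continuous block and a local search over unit neighbors for the integer block, can be sketched as follows. The alternation scheme, step rules, and two-variable objective are made-up illustrations, not the paper's algorithms.

```python
def mixed_df_min(f, xc, xi, step=0.5, tol=1e-6):
    """Toy mixed-integer derivative-free descent: shrink-step probes on
    the continuous block, unit-neighborhood moves on the integer block."""
    xc, xi = list(xc), list(xi)
    while step > tol:
        improved = False
        for j in range(len(xc)):            # continuous: +/- step probes
            for d in (step, -step):
                y = xc[:]
                y[j] += d
                if f(y, xi) < f(xc, xi):
                    xc, improved = y, True
        for j in range(len(xi)):            # integer: discrete local search
            for d in (1, -1):
                z = xi[:]
                z[j] += d
                if f(xc, z) < f(xc, xi):
                    xi, improved = z, True
        if not improved:
            step *= 0.5
    return xc, xi

# hypothetical objective with one continuous and one integer variable
f = lambda c, n: (c[0] - 0.3) ** 2 + (n[0] - 2) ** 2
xc, xi = mixed_df_min(f, [0.0], [5])
```

Note that the step parameter only shrinks when neither block improves, so continuous refinement and discrete moves are interleaved until both stall.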


Siam Journal on Optimization | 2009

A Derivative-Free Algorithm for Inequality Constrained Nonlinear Programming via Smoothing of an ℓ∞ Penalty Function

Giampaolo Liuzzi; Stefano Lucidi

In this paper we consider inequality constrained nonlinear optimization problems where the first order derivatives of the objective function and the constraints cannot be used. Our starting point is the possibility to transform the original constrained problem into an unconstrained or linearly constrained minimization of a nonsmooth exact penalty function. This approach presents two main difficulties: the first is the nonsmoothness of this class of exact penalty functions, which may cause derivative-free codes to converge to nonstationary points of the problem; the second is the fact that the equivalence between stationary points of the constrained problem and those of the exact penalty function can only be stated when the penalty parameter is smaller than a threshold value which is not known a priori. In this paper we propose a derivative-free algorithm which overcomes the preceding difficulties and produces a sequence of points that admits a subsequence converging to a Karush-Kuhn-Tucker point of the constrained problem. In particular the proposed algorithm is based on a smoothing of the nondifferentiable exact penalty function and includes an updating rule which, after at most a finite number of updates, is able to determine a “right value” for the penalty parameter. Furthermore we present the results obtained on a real world problem concerning the estimation of parameters in an insulin-glucose model of the human body.
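The two ingredients, a smoothed version of the nonsmooth ℓ∞ penalty and an automatic update of the penalty parameter, can be illustrated on a one-dimensional toy problem. The log-sum-exp smoothing, the grid minimizer, the feasibility threshold, and the tenfold shrink of eps are all illustrative assumptions, not the paper's rules.

```python
import math

def smoothed_linf_penalty(f, gs, x, eps, mu):
    """Smooth stand-in for the nondifferentiable l-infinity exact penalty
    f(x) + (1/eps) * max(0, g_1(x), ..., g_m(x)), via log-sum-exp
    smoothing with parameter mu."""
    terms = [0.0] + [g(x) for g in gs]
    top = max(terms)
    lse = top + mu * math.log(sum(math.exp((t - top) / mu) for t in terms))
    return f(x) + lse / eps

def solve(f, gs, lo, hi, eps=1.0, mu=1e-3, rounds=6):
    """Grid-minimize the smoothed penalty, shrinking eps until the
    penalty minimizer is (nearly) feasible."""
    grid = [lo + (hi - lo) * i / 2000 for i in range(2001)]
    x = lo
    for _ in range(rounds):
        x = min(grid, key=lambda z: smoothed_linf_penalty(f, gs, z, eps, mu))
        if max(g(x) for g in gs) <= 1e-6:   # feasible enough: stop
            break
        eps *= 0.1                          # penalty parameter update
    return x

# minimize x subject to 1 - x <= 0  (constrained solution: x = 1)
x = solve(lambda z: z, [lambda z: 1.0 - z], 0.0, 2.0)
```

With eps too large the penalty minimizer is badly infeasible; the update rule detects this and shrinks eps until the minimizer lands near the constrained solution, mirroring the "right value" idea in the abstract.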


Mathematical Programming | 2004

A magnetic resonance device designed via global optimization techniques

Giampaolo Liuzzi; Stefano Lucidi; Veronica Piccialli; Antonello Sotgiu

In this paper we are concerned with the design of a small low-cost, low-field multipolar magnet for Magnetic Resonance Imaging with high field uniformity. By introducing appropriate variables, the considered design problem is converted into a global optimization one. The latter problem is solved by means of a new derivative-free global optimization method which is a distributed multi-start type algorithm controlled by means of a simulated annealing criterion. In particular, the proposed method employs, as local search engine, a derivative-free procedure. Under reasonable assumptions, we prove that this local algorithm is attracted by global minimum points. Additionally, we show that the simulated annealing strategy is able to produce a suitable starting point in a finite number of steps with probability one.
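The simulated annealing criterion mentioned above uses the Metropolis acceptance rule: a worse candidate is accepted with probability exp(-delta/T), so the walk can escape poor basins while the temperature is high. The sketch below uses such a walk only to generate starting points for subsequent local searches; the objective, step size, and cooling schedule are invented for illustration.

```python
import math
import random

def sa_starts(f, lo, hi, n, temp=1.0, cool=0.9, seed=0):
    """Simulated-annealing-controlled random walk used as a generator of
    starting points for local searches (a loose sketch, not the paper's
    distributed algorithm). Worse points are accepted with probability
    exp(-delta / temp)."""
    rng = random.Random(seed)
    x = (lo + hi) / 2.0
    starts = []
    for _ in range(n):
        y = min(hi, max(lo, x + rng.uniform(-0.5, 0.5)))
        delta = f(y) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = y
        starts.append(x)
        temp *= cool                    # cooling makes acceptance stricter
    return starts

starts = sa_starts(lambda t: math.sin(8 * t) + t * t, -2.0, 2.0, 20)
```

Each generated point would then be handed to the derivative-free local procedure, exactly the division of labor the abstract describes.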


Computational Optimization and Applications | 2010

A truncated Newton method in an augmented Lagrangian framework for nonlinear programming

Gianni Di Pillo; Giampaolo Liuzzi; Stefano Lucidi; Laura Palagi

In this paper we propose a primal-dual algorithm for the solution of general nonlinear programming problems. The core of the method is a local algorithm which relies on a truncated procedure for the computation of a search direction, and is thus suitable for large scale problems. The truncated direction produces a sequence of points which locally converges to a KKT pair with superlinear convergence rate. The local algorithm is globalized by means of a suitable merit function which is able to measure and to enforce progress of the iterates towards a KKT pair, without deteriorating the local efficiency. In particular, we adopt the exact augmented Lagrangian function introduced in Di Pillo and Lucidi (SIAM J. Optim. 12:376–406, 2001), which allows us to guarantee the boundedness of the sequence produced by the algorithm and which has strong connections with the above mentioned truncated direction. The resulting overall algorithm is globally and superlinearly convergent under mild assumptions.
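For orientation, the classical first-order augmented Lagrangian iteration underlying this framework (not the paper's exact augmented Lagrangian or truncated Newton direction) alternates an inner minimization in x with a multiplier update. On the toy equality-constrained problem below the inner step has a closed form by symmetry.

```python
def aug_lagrangian_demo(rho=10.0, iters=20):
    """Classical first-order augmented-Lagrangian iteration on the toy
    problem  min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0,  whose KKT pair is
    x = (0.5, 0.5), lam = -1."""
    lam = 0.0
    t = 0.0
    for _ in range(iters):
        # argmin_x  x1^2 + x2^2 + lam*h + (rho/2)*h^2,  h = x1 + x2 - 1;
        # by symmetry the minimizer has x1 = x2 = t
        t = (rho - lam) / (2.0 + 2.0 * rho)
        lam += rho * (2.0 * t - 1.0)        # first-order multiplier update
    return (t, t), lam

x, lam = aug_lagrangian_demo()
```

The multiplier iteration contracts toward lam = -1 with factor 1/(1+rho), so a moderate rho already gives fast convergence; exact merit functions like the paper's avoid the inner/outer loop altogether.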


Optimization Methods & Software | 2012

A concave optimization-based approach for sparse portfolio selection

David Di Lorenzo; Giampaolo Liuzzi; Francesco Rinaldi; Fabio Schoen; Marco Sciandrone

This paper considers a portfolio selection problem in which portfolios with a minimum number of active assets are sought. This problem is motivated by the need of inducing sparsity on the selected portfolio to reduce transaction costs, complexity of portfolio management, and instability of the solution. The resulting problem is a difficult combinatorial problem. We propose an approach based on the definition of an equivalent smooth concave problem. In this way, we move the difficulty of the original problem to that of solving a concave global minimization problem. We present as global optimization algorithm a specific version of the monotonic basin hopping method which employs, as local minimizer, an efficient version of the Frank–Wolfe method. We test our method on various data sets (of small, medium, and large dimensions) involving real-world capital market data from major stock markets. The obtained results show the effectiveness of the presented methodology in terms of global optimization. Furthermore, the out-of-sample performance of the selected portfolios, as measured by the Sharpe ratio, is also satisfactory.
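The Frank–Wolfe method is a natural fit here because the portfolio's budget constraint is a simplex, over which the linear subproblem has a closed-form solution: put all weight on the best vertex. The paper applies FW to a concave problem inside monotonic basin hopping; purely to illustrate the FW step, the sketch below minimizes a made-up convex diagonal risk function instead.

```python
def frank_wolfe_simplex(grad, x, iters=200):
    """Frank-Wolfe over the unit simplex {x >= 0, sum(x) = 1}, i.e. the
    budget constraint of a long-only portfolio. The linear subproblem is
    solved by putting all weight on the coordinate with the smallest
    gradient component."""
    for k in range(iters):
        g = grad(x)
        j = min(range(len(x)), key=lambda i: g[i])   # vertex e_j solves the LP
        step = 2.0 / (k + 2.0)                       # standard FW step size
        x = [(1.0 - step) * xi for xi in x]          # move toward e_j ...
        x[j] += step                                 # ... staying on the simplex
    return x

# hypothetical diagonal risk model: minimize sum(c_i * x_i^2) on the simplex
c = [1.0, 2.0, 4.0]
grad = lambda x: [2.0 * ci * xi for ci, xi in zip(c, x)]
w = frank_wolfe_simplex(grad, [1.0 / 3, 1.0 / 3, 1.0 / 3])
```

Because each iterate is a convex combination of the previous iterate and a vertex, feasibility (nonnegativity and the budget) is preserved for free at every step.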

Collaboration


Dive into Giampaolo Liuzzi's collaborations.

Top Co-Authors

Stefano Lucidi | Sapienza University of Rome

Veronica Piccialli | Sapienza University of Rome

Andrea Serani | National Research Council

Matteo Diez | National Research Council

Giovanni Fasano | Ca' Foscari University of Venice

Laura Palagi | Sapienza University of Rome