Manlio Gaudioso
University of Calabria
Featured research published by Manlio Gaudioso.
SIAM Journal on Optimization | 2003
Antonio Fuduli; Manlio Gaudioso; Giovanni Giallombardo
We describe an extension of the classical cutting plane algorithm to tackle the unconstrained minimization of a nonconvex, not necessarily differentiable function of several variables. The method is based on the construction of both a lower and an upper polyhedral approximation to the objective function and relies on the concept of the proximal trajectory. Convergence to a stationary point is proved for weakly semismooth functions.
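As a point of reference for the extension described above, the classical cutting-plane idea (building a piecewise-affine lower model from subgradient cuts and minimizing it) can be sketched in one dimension. All names below are illustrative, and the sketch covers only the convex lower-model building block, not the paper's upper approximation or proximal-trajectory machinery:

```python
def cutting_plane_1d(f, g, x0, lo, hi, iters=50):
    """Classical Kelley cutting-plane method for a convex f on [lo, hi].

    f: objective, g: a subgradient oracle for f. Each iterate adds the
    affine minorant f(xk) + g(xk)*(x - xk); the next iterate minimizes
    the resulting piecewise-affine lower model over the interval.
    """
    cuts = []          # (slope, intercept) pairs: model(x) = max(a*x + b)
    xk, best = x0, f(x0)
    for _ in range(iters):
        fx, gx = f(xk), g(xk)
        best = min(best, fx)
        cuts.append((gx, fx - gx * xk))
        # Minimize the lower model: its minimizer lies at an interval end
        # or at an intersection of two cuts (cheap to enumerate in 1-D).
        candidates = [lo, hi]
        for i in range(len(cuts)):
            for j in range(i + 1, len(cuts)):
                a1, b1 = cuts[i]
                a2, b2 = cuts[j]
                if a1 != a2:
                    x = (b2 - b1) / (a1 - a2)
                    if lo <= x <= hi:
                        candidates.append(x)
        xk = min(candidates, key=lambda x: max(a * x + b for a, b in cuts))
    return best
```

On the convex test function f(x) = |x| over [-2, 3] the model pins down the minimizer x = 0 after a few cuts; it is the nonconvex case, where lower cuts alone can cut off minimizers, that motivates the paper's upper model.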
Optimization Methods & Software | 2004
Antonio Fuduli; Manlio Gaudioso; Giovanni Giallombardo
We introduce an algorithm to minimize a function of several variables with no convexity or smoothness assumptions. The main peculiarity of our approach is the use of an objective function model which is the difference of two piecewise affine convex functions. Bundling and trust region concepts are embedded into the algorithm. Convergence of the algorithm to a stationary point is proved and some numerical results are reported.
Mathematics of Operations Research | 2006
Manlio Gaudioso; Giovanni Giallombardo; Giovanna Miglionico
We introduce a new approach to minimizing a function defined as the pointwise maximum over finitely many convex real functions (next referred to as the component functions), with the aim of working on the basis of incomplete knowledge of the objective function. A descent algorithm is proposed, which need not require at the current point the evaluation of the actual value of the objective function, namely, of all the component functions, thus extending to min-max problems the philosophy of the incremental approaches, widely adopted in the nonlinear least squares literature. Given the nonsmooth nature of the problem, we resort to the well-established machinery of bundle methods. We provide global convergence analysis of our method, and in addition, we study a subgradient aggregation scheme aimed at simplifying the problem of finding a tentative step. This paper is completed by the numerical results obtained on a set of standard test problems.
Optimization Methods & Software | 2005
Annabella Astorino; Manlio Gaudioso
We state the problem of the optimal separation via an ellipsoid in ℝ^n of a discrete set of points from another discrete set of points. Our formulation requires the minimization of a convex nonsmooth (piecewise affine) function under the constraint that the matrix of the decision variables is positive definite. We describe a heuristic algorithm of the local search type embedding some ideas coming from nonsmooth optimization. Finally, we present the numerical results obtained by running our method on some standard test problems drawn from the binary classification literature.
SIAM Journal on Optimization | 2011
Annabella Astorino; Antonio Frangioni; Manlio Gaudioso; Enrico Gorgone
We present a bundle method for convex nondifferentiable minimization where the model is a piecewise-quadratic convex approximation of the objective function. Unlike standard bundle approaches, the model only needs to support the objective function from below at a properly chosen (small) subset of points, as opposed to everywhere. We provide the convergence analysis for the algorithm, with a general form of master problem which combines features of trust region stabilization and proximal stabilization, taking care of all the important practical aspects such as proper handling of the proximity parameters and the bundle of information. Numerical results are also reported.
Computers & Operations Research | 2017
Francesco Carrabs; Carmine Cerrone; Raffaele Cerulli; Manlio Gaudioso
This paper addresses a variant of the Euclidean traveling salesman problem in which the traveler visits a node if it passes through the neighborhood set of that node. The problem is known as the close-enough traveling salesman problem. We introduce a new effective discretization scheme that allows us to compute both a lower and an upper bound for the optimal solution. Moreover, we apply a graph reduction algorithm that significantly reduces the problem size and speeds up computation of the bounds. We evaluate the effectiveness and the performance of our approach on several benchmark instances. The computational results show that our algorithm is faster than the other algorithms available in the literature and that the bounds it provides are almost always more accurate.
Highlights:
- We introduce a novel discretization scheme for the close-enough TSP.
- By reducing the discretization error, the new scheme allows the computation of tighter upper and lower bounds for the problem.
- We apply an enhanced convex hull strategy to reduce the number of discretization points to be used.
- The discretization strategy allows us to assign an adaptively variable number of discretization points to each neighborhood.
- Numerical comparisons with some algorithms proposed in the literature are presented.
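A minimal illustration of the kind of neighborhood discretization the abstract refers to: each disk-shaped neighborhood is sampled with k evenly spaced boundary points, and the worst-case distance from the boundary to the nearest sample quantifies the discretization error. This is a hypothetical sketch with uniform, non-adaptive sampling, not the adaptive scheme or convex hull strategy of the paper:

```python
import math

def discretize_neighborhoods(disks, k):
    """Sample k evenly spaced points on the boundary of each disk.

    disks: list of (cx, cy, r) tuples. Forcing the tour to pass through
    one sample per disk yields a feasible close-enough tour, hence an
    upper bound on the CETSP optimum.
    """
    points = []
    for cx, cy, r in disks:
        ring = [(cx + r * math.cos(2.0 * math.pi * i / k),
                 cy + r * math.sin(2.0 * math.pi * i / k)) for i in range(k)]
        points.append(ring)
    return points

def discretization_error(r, k):
    """Worst-case distance from a boundary point of a disk of radius r
    to the nearest of its k samples: adjacent samples are an angle
    2*pi/k apart, so the farthest boundary point sits at angle pi/k
    from each, at chord distance 2*r*sin(pi/(2*k))."""
    return 2.0 * r * math.sin(math.pi / (2.0 * k))
```

Doubling k roughly halves the worst-case error, which is the sense in which refining a discretization tightens both bounds; assigning k per neighborhood adaptively, as in the paper, spends points where they matter.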
Numerische Mathematik | 2009
Manlio Gaudioso; Enrico Gorgone; Maria Flavia Monaco
We present a bundle type method for minimizing nonconvex nondifferentiable functions of several variables. The algorithm is based on the construction of both a lower and an upper polyhedral approximation of the objective function. In particular, at each iteration, a search direction is computed by solving a quadratic program aiming at maximizing the difference between the lower and the upper model. A proximal approach is used to guarantee convergence to a stationary point under the hypothesis of weak semismoothness.
Computational Management Science | 2009
Annabella Astorino; Manlio Gaudioso
We consider a special case of the optimal separation, via a sphere, of two discrete point sets in a finite dimensional Euclidean space. In fact, we assume that the center of the sphere is fixed. In this case the problem reduces to the minimization of a convex and nonsmooth function of just one variable, which can be solved by means of an “ad hoc” method in O(p log p) time, where p is the dataset size. The approach is suitable for use in connection with kernel transformations of the type adopted in the support vector machine (SVM) approach. Despite its simplicity, the method has provided interesting results on several standard test problems drawn from the binary classification literature.
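The one-variable structure the abstract exploits can be illustrated as follows. With the center fixed, only the squared radius z = R^2 remains free, and a natural misclassification error is convex and piecewise affine in z, so a minimizer sits at a breakpoint. The error function and names below are illustrative assumptions, and the sketch evaluates the breakpoints naively rather than with the O(p log p) sorted scan of the paper:

```python
def best_squared_radius(da, db):
    """da: squared distances from the fixed center of the points to be
    enclosed by the sphere; db: the same for the points to be excluded.

    The error  e(z) = sum(max(0, d - z) for d in da)
                    + sum(max(0, z - d) for d in db)
    penalizes enclosed-class points left outside and excluded-class
    points caught inside. It is convex and piecewise affine in z = R^2,
    so it suffices to test the breakpoints (the distances themselves).
    """
    def err(z):
        return (sum(max(0.0, d - z) for d in da)
                + sum(max(0.0, z - d) for d in db))
    return min(sorted(set(da) | set(db)), key=err)
```

When the two distance sets are linearly separable the error drops to zero at any radius between them; otherwise the breakpoint scan trades off the two one-sided penalties, and sorting the distances once is what brings the cost down to O(p log p).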
Computational Optimization and Applications | 2009
Manlio Gaudioso; Giovanni Giallombardo; Giovanna Miglionico
The Lagrangian dual of an integer program can be formulated as a min-max problem where the objective function is convex, piecewise affine and, hence, nonsmooth. It is usually tackled by means of subgradient algorithms, or multiplier adjustment techniques, or even more sophisticated nonsmooth optimization methods such as bundle-type algorithms. Recently a new approach to solving unconstrained convex finite min-max problems has been proposed, which has the nice property of working almost independently of the exact evaluation of the objective function at every iterate point. In the paper we adapt the method, which is of the descent type, to the solution of the Lagrangian dual. Since the Lagrangian relaxation need not be solved exactly, the approach appears suitable whenever the Lagrangian dual must be solved many times (e.g., to improve the bound at each node of a branch-and-bound tree), and effective heuristic algorithms at low computational cost are available for solving the Lagrangian relaxation. We present an application to the Generalized Assignment Problem (GAP) and discuss the results of our numerical experimentation on a set of standard test problems.
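For contrast with the descent method adapted in the paper, here is the standard subgradient scheme it competes with, on the smallest possible example: the Lagrangian dual of a binary knapsack problem whose single constraint is relaxed. The names and the diminishing step rule are illustrative choices, not the paper's algorithm:

```python
def lagrangian_dual_bound(c, a, b, steps=100):
    """Subgradient minimization of the Lagrangian dual of
        max c.x  s.t.  a.x <= b,  x binary,
    with the knapsack constraint relaxed by a multiplier lam >= 0.
    L(lam) = max_x (c - lam*a).x + lam*b  is convex piecewise affine
    in lam, and every L(lam) is an upper bound on the integer optimum.
    """
    lam, best = 0.0, float("inf")
    for k in range(1, steps + 1):
        # The relaxed inner problem separates over coordinates.
        x = [1 if cj - lam * aj > 0 else 0 for cj, aj in zip(c, a)]
        val = sum((cj - lam * aj) * xj
                  for cj, aj, xj in zip(c, a, x)) + lam * b
        best = min(best, val)                           # best bound so far
        sub = b - sum(aj * xj for aj, xj in zip(a, x))  # subgradient at lam
        lam = max(0.0, lam - sub / k)                   # projected step
    return best
```

On c = [10, 6], a = [4, 3], b = 5 the integer optimum is 10 while the dual bound is 12, a duality gap that is typical of Lagrangian relaxation; note that each iteration solves the relaxation exactly, which is precisely the requirement the paper's inexact descent method relaxes.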
Optimization Letters | 2016
Carmine Cerrone; Raffaele Cerulli; Manlio Gaudioso
The genetic algorithm (GA) is an efficient paradigm for solving several optimization problems. It is essentially a search technique that uses an ever-changing neighborhood structure related to a population which evolves according to a number of genetic operators. Within the GA framework, many techniques have been devised to escape from a local optimum when the algorithm fails to locate the global one. To this aim we present a variant of the GA which we call OMEGA (One Multi Ethnic Genetic Approach). The main difference is that, starting from an initial population,