
Publications


Featured research published by Massimo Roma.


SIAM Journal on Optimization | 1999

Solving the Trust-Region Subproblem using the Lanczos Method

Nicholas I. M. Gould; Stefano Lucidi; Massimo Roma; Philippe L. Toint

The approximate minimization of a quadratic function within an ellipsoidal trust region is an important subproblem for many nonlinear programming methods. When the number of variables is large, the most widely used strategy is to trace the path of conjugate gradient iterates either to convergence or until it reaches the trust-region boundary. In this paper, we investigate ways of continuing the process once the boundary has been encountered. The key is to observe that the trust-region problem within the currently generated Krylov subspace has a very special structure which enables it to be solved very efficiently. We compare the new strategy with existing methods. The resulting software package is available as HSL_VF05 within the Harwell Subroutine Library.
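For context, a minimal sketch of the baseline strategy the abstract refers to: the Steihaug-Toint truncated conjugate gradient method, which traces the CG path and stops once it reaches the trust-region boundary. All names below are illustrative; the paper's contribution (implemented as HSL_VF05) is precisely to continue the minimization within the generated Krylov subspace after this point, which this sketch does not do.

```python
import numpy as np

def to_boundary(s, p, delta):
    """Step from s along p to the point where ||s + tau * p|| = delta (tau > 0)."""
    a, b, c = p @ p, 2 * (s @ p), s @ s - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s + tau * p

def steihaug_cg(H, g, delta, tol=1e-8, max_iter=200):
    """Approximately minimize m(s) = g.T s + 0.5 s.T H s subject to ||s|| <= delta
    by tracing the conjugate gradient path either to convergence or until it
    leaves the trust region (the widely used baseline strategy)."""
    s = np.zeros_like(g)
    if np.linalg.norm(g) < tol:
        return s
    r = g.copy()                      # gradient of the model at s = 0
    p = -r
    for _ in range(max_iter):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:                 # negative curvature: follow p to the boundary
            return to_boundary(s, p, delta)
        alpha = (r @ r) / curv
        if np.linalg.norm(s + alpha * p) >= delta:
            return to_boundary(s, p, delta)   # CG path exits the region
        s = s + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) < tol:
            return s                  # interior (approximate) minimizer
        p = -r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return s
```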


Archive | 2006

Large-Scale Nonlinear Optimization

G. Di Pillo; Massimo Roma

Contents:
- Fast Linear Algebra for Multiarc Trajectory Optimization (Nicolas Berend, J. Frederic Bonnans, Julien Laurent-Varin, Mounir Haddou, Christophe Talbot)
- Lagrange Multipliers with Optimal Sensitivity Properties in Constrained Optimization (Dimitri P. Bertsekas)
- An O(n^2) Algorithm for Isotonic Regression (Oleg Burdakov, Oleg Sysoev, Anders Grimvall, Mohamed Hussian)
- Knitro: An Integrated Package for Nonlinear Optimization (Richard H. Byrd, Jorge Nocedal, Richard A. Waltz)
- On implicit-factorization constraint preconditioners (H. Sue Dollar, Nicholas I. M. Gould, Andrew J. Wathen)
- Optimal algorithms for large sparse quadratic programming problems with uniformly bounded spectrum (Zdenek Dostal)
- Numerical methods for separating two polyhedra (Yury G. Evtushenko, Alexander I. Golikov, Saed Ketabchi)
- Exact penalty functions for generalized Nash problems (Francisco Facchinei, Jong-Shi Pang)
- Parametric Sensitivity Analysis for Optimal Boundary Control of a 3D Reaction-Diffusion System (Roland Griesse, Stefan Volkwein)
- Projected Hessians for Preconditioning in One-Step One-Shot Design Optimization (Andreas Griewank)
- Conditions and parametric representations of approximate minimal elements of a set through scalarization (Cesar Gutierrez, Bienvenido Jimenez, Vicente Novo)
- Efficient methods for large-scale unconstrained optimization (Ladislav Luksan, Jan Vlcek)
- A variational approach for minimum cost flow problems (Giandomenico Mastroeni)
- Multi-Objective Optimisation of Expensive Objective Functions with Variable Fidelity Models (Daniele Peri, Antonio Pinto, Emilio F. Campana)
- Towards the Numerical Solution of a Large Scale PDAE Constrained Optimization Problem Arising in Molten Carbonate Fuel Cell Modeling (Hans Josef Pesch, Kati Sternberg, Kurt Chudej)
- The NEWUOA software for unconstrained optimization without derivatives (M.J.D. Powell)


SIAM Journal on Optimization | 1998

Curvilinear Stabilization Techniques for Truncated Newton Methods in Large Scale Unconstrained Optimization

Stefano Lucidi; Francesco Rochetich; Massimo Roma

The aim of this paper is to define a new class of minimization algorithms for solving large scale unconstrained problems. In particular, we describe a stabilization framework, based on a curvilinear linesearch, which uses a combination of a Newton-type direction and a negative curvature direction. The motivation for using a negative curvature direction is to take into account the local nonconvexity of the objective function. On the basis of this framework, we propose an algorithm which uses the Lanczos method for determining at each iteration both a Newton-type direction and an effective negative curvature direction. The results of extensive numerical testing are reported together with a comparison with the LANCELOT package. These results show that the algorithm is very competitive, which seems to indicate that the proposed approach is promising.
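To make the curvilinear idea concrete, here is a minimal backtracking search along the standard curvilinear path x(alpha) = x + alpha^2 * s + alpha * d, with s a Newton-type direction and d a negative curvature direction. The acceptance test below is one common Armijo-type variant and is an assumption of this sketch; the paper couples the search with a more elaborate stabilization scheme and computes both directions by the Lanczos method.

```python
import numpy as np

def curvilinear_search(f, x, s, d, g, sigma=1e-4, beta=0.5, max_backtracks=40):
    """Backtracking search along the curve x(alpha) = x + alpha**2 * s + alpha * d,
    where s is a Newton-type direction and d a negative curvature direction
    (normalized so that g @ d <= 0). The sufficient-decrease test is one common
    Armijo-type variant, not necessarily the one used in the paper."""
    f0 = f(x)
    slope = (g @ d) + min(g @ s, 0.0)   # combined first-order decrease estimate
    alpha = 1.0
    for _ in range(max_backtracks):
        trial = x + alpha**2 * s + alpha * d
        if f(trial) <= f0 + sigma * alpha**2 * slope:
            return trial, alpha
        alpha *= beta
    return x, 0.0                       # no acceptable step found
```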


Optimization Methods & Software | 2000

Exploiting negative curvature directions in linesearch methods for unconstrained optimization

Nicholas I. M. Gould; Stefano Lucidi; Massimo Roma; Ph. L. Toint

In this paper we propose efficient new linesearch algorithms for solving large scale unconstrained optimization problems which exploit any local nonconvexity of the objective function. Current algorithms in this class typically compute a pair of search directions at every iteration: a Newton-type direction, which ensures both global and fast asymptotic convergence, and a negative curvature direction, which enables the iterates to escape from regions of local nonconvexity. A new point is generated by performing a search along a line or a curve obtained by combining these two directions. However, in almost all of these algorithms, the relative scaling of the directions is not taken into account. We propose a new algorithm which accounts for the relative scaling of the two directions. To do this, only the more promising of the two directions is selected at any given iteration, and a linesearch is performed along the chosen direction. The appropriate direction is selected by estimating the rate of decrease of the quadratic model of the objective function in both candidate directions. We prove global convergence to second-order critical points for the new algorithm, and report some preliminary numerical results.
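The selection rule described in the abstract can be sketched very simply: evaluate the decrease predicted by the quadratic model along each candidate direction and keep the better one. This is a hypothetical helper, assuming only a Hessian-vector product hess_vec is available; the paper's actual test and its handling of the relative scaling are more refined.

```python
import numpy as np

def pick_direction(g, hess_vec, s, d):
    """Choose between a Newton-type direction s and a negative curvature
    direction d by comparing the predicted decrease of the quadratic model
    q(p) = g @ p + 0.5 * p @ (H @ p) along each candidate."""
    def model_decrease(p):
        return -(g @ p + 0.5 * (p @ hess_vec(p)))
    return s if model_decrease(s) >= model_decrease(d) else d
```

A linesearch is then performed only along the returned direction, which sidesteps the relative-scaling issue the abstract points out.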


Computational Optimization and Applications | 1996

Nonmonotone curvilinear line search methods for unconstrained optimization

Michael C. Ferris; Stefano Lucidi; Massimo Roma

We present a new algorithmic framework for solving unconstrained minimization problems that incorporates a curvilinear linesearch. The search direction used in our framework is a combination of an approximate Newton direction and a direction of negative curvature. Global convergence to a stationary point where the Hessian matrix is positive semidefinite is exhibited for this class of algorithms by means of a nonmonotone stabilization strategy. An implementation using the Bunch-Parlett decomposition is shown to outperform several other techniques on a large class of test problems.
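The nonmonotone ingredient can be illustrated in isolation: instead of requiring descent with respect to the current function value, a trial point is compared against the worst value in a sliding window of recent iterates. This is a schematic fragment only, assuming a (negative) sufficient-decrease term supplied by the surrounding linesearch; the stabilization strategy in the paper is more involved.

```python
from collections import deque

def nonmonotone_accept(f_trial, f_window, decrease_term):
    """Grippo-Lampariello-Lucidi style test: accept if the trial value improves
    on the worst recent value, so occasional increases of f are tolerated.
    decrease_term is a negative sufficient-decrease quantity computed by the
    surrounding linesearch (an assumption of this sketch)."""
    return f_trial <= max(f_window) + decrease_term

# usage sketch: maintain a bounded window of the last few accepted values
f_window = deque(maxlen=10)
```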


Computational Optimization and Applications | 1997

Numerical Experiences with New Truncated Newton Methods in Large Scale Unconstrained Optimization

Stefano Lucidi; Massimo Roma

Recently, in [12] a very general class of truncated Newton methods has been proposed for solving large scale unconstrained optimization problems. In this work we present the results of an extensive numerical experience obtained with different algorithms which belong to this class. This numerical study, besides investigating which are the best algorithmic choices of the proposed approach, clarifies some significant points which underlie every truncated Newton based algorithm.
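For readers unfamiliar with the class, here is a compact skeleton of a truncated Newton method: the Newton equation H p = -g is solved only approximately by inner CG iterations, stopped early by a residual test (the forcing term). Everything below reflects common textbook choices and is a generic sketch, not the specific algorithmic variants compared in the paper.

```python
import numpy as np

def truncated_newton(f, grad, hess_vec, x0, tol=1e-6, max_outer=200):
    """Generic truncated Newton skeleton with an inner CG solve on H p = -g,
    truncated by a residual-based rule, followed by Armijo backtracking."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_outer):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # inner CG on H p = -g, truncated when the residual is small enough
        eta = min(0.5, np.sqrt(gnorm)) * gnorm   # one common forcing term
        p = np.zeros_like(x)
        r = -g.copy()                            # residual of H p + g = 0 at p = 0
        d = r.copy()
        for _ in range(50):
            Hd = hess_vec(x, d)
            curv = d @ Hd
            if curv <= 0:                        # nonconvexity met: stop inner solve
                break
            alpha = (r @ r) / curv
            p += alpha * d
            r_new = r - alpha * Hd
            if np.linalg.norm(r_new) <= eta:     # truncation rule satisfied
                break
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        if p @ g >= 0:                           # safeguard: ensure descent
            p = -g
        # Armijo backtracking on the outer step
        t, f0 = 1.0, f(x)
        while f(x + t * p) > f0 + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x
```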


Computational Optimization and Applications | 2013

Preconditioning Newton-Krylov methods in nonconvex large scale optimization

Giovanni Fasano; Massimo Roma

We consider an iterative preconditioning technique for nonconvex large scale optimization. First, we refer to the solution of large scale indefinite linear systems by a Krylov subspace method, and describe the iterative construction of a preconditioner which involves neither matrix products nor matrix storage. The set of directions generated by the Krylov subspace method is used, as a by-product, to provide an approximate inverse preconditioner. Then, we test our preconditioner within truncated Newton schemes for large scale unconstrained optimization, where we generalize the truncation rule of Nash and Sofer (Oper. Res. Lett. 9:219–221, 1990) to the indefinite case. We use a Krylov subspace method both to approximately solve the Newton equation and to construct the preconditioner to be used at the current outer iteration. An extensive numerical experience shows that the proposed preconditioning strategy, compared with the unpreconditioned strategy and PREQN (Morales and Nocedal in SIAM J. Optim. 10:1079–1096, 2000), may lead to a reduction of the overall number of inner iterations. Finally, we show that our proposal has some similarities with the Limited Memory Preconditioners (Gratton et al. in SIAM J. Optim. 21:912–935, 2011).
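A much simplified illustration of the by-product idea: from an orthonormal Lanczos basis Q and the tridiagonal projection T = Q.T H Q accumulated during the Krylov solve, one can act as the inverse of H on the explored subspace and as the identity on its orthogonal complement. This sketch assumes T is invertible (e.g. H positive definite on the subspace); the paper's construction notably extends to the indefinite case.

```python
import numpy as np

def approximate_inverse(Q, T):
    """Approximate inverse preconditioner from Lanczos by-products:
        M_inv(v) = Q T^{-1} Q.T v + (v - Q Q.T v),
    i.e. invert H on range(Q) and leave the orthogonal complement untouched.
    Assumes Q has orthonormal columns and T = Q.T H Q is invertible."""
    def apply(v):
        c = Q.T @ v
        return Q @ np.linalg.solve(T, c) + (v - Q @ c)
    return apply
```

Note that only the k Lanczos vectors and the small k-by-k tridiagonal matrix need to be stored, consistent with the matrix-free setting described above.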


Optimization Methods & Software | 2005

Dynamic scaling based preconditioning for truncated Newton methods in large scale unconstrained optimization

Massimo Roma

This paper deals with the preconditioning of truncated Newton methods for the solution of large scale nonlinear unconstrained optimization problems. We focus on preconditioners which can be naturally embedded in the framework of truncated Newton methods, i.e. which can be built without storing the Hessian matrix of the function to be minimized, using only information on the Hessian obtained through products of the Hessian matrix with vectors. In particular, we propose a diagonal preconditioning which enjoys this feature and which enables us to examine the effect of diagonal scaling on truncated Newton methods. This new preconditioner carries out a scaling strategy based on the concept of equilibration of the data in linear systems of equations. Extensive numerical testing has been performed, showing that the proposed diagonal preconditioning strategy is very effective: on most of the problems considered, the resulting diagonal preconditioned truncated Newton method performs better than both the unpreconditioned method and the one using an automatic preconditioner based on limited memory quasi-Newton updating (PREQN) proposed by Morales and Nocedal [Morales, J.L. and Nocedal, J., 2000, Automatic preconditioning by limited memory quasi-Newton updating. SIAM Journal on Optimization, 10, 1079–1096].
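As one concrete way to obtain diagonal information in this matrix-free setting, the diagonal of the Hessian can be estimated from Hessian-vector products alone, e.g. with the stochastic estimator of Bekas, Kokiopoulou and Saad. This only illustrates the constraint the abstract describes (diagonal scaling from products only); the equilibration-based construction actually proposed in the paper is a different one.

```python
import numpy as np

def estimate_diagonal(hess_vec, n, num_probes=20, rng=None, floor=1e-8):
    """Estimate diag(H) using only Hessian-vector products, via the stochastic
    identity E[z * (H z)] = diag(H) for Rademacher probe vectors z.
    Not the equilibration-based scaling proposed in the paper."""
    rng = np.random.default_rng() if rng is None else rng
    diag = np.zeros(n)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        diag += z * hess_vec(z)
    diag /= num_probes
    return np.maximum(np.abs(diag), floor)    # safeguard: keep entries positive

# the preconditioner application is then just elementwise division:
# M_inv = lambda v: v / diag
```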


Computational Optimization and Applications | 2016

A novel class of approximate inverse preconditioners for large positive definite linear systems in optimization

Giovanni Fasano; Massimo Roma

We propose a class of preconditioners for large positive definite linear systems arising in nonlinear optimization frameworks. These preconditioners can be computed as a by-product of Krylov subspace solvers. Preconditioners in our class are chosen by setting the values of some user-dependent parameters. We first provide some basic spectral properties which motivate the theoretical interest of the proposed class of preconditioners. Then, we report the results of a comparative numerical study among some preconditioners in our class, the unpreconditioned case, and the preconditioner in Fasano and Roma (Comput Optim Appl 56:253–290, 2013). The comparison was first carried out on some relevant linear systems proposed in the literature. Then, we embedded our preconditioners within a linesearch-based truncated Newton method, where sequences of linear systems (namely, Newton's equations) must be solved. We performed extensive numerical testing over the entire medium-to-large scale convex unconstrained optimization test set of the CUTEst collection (Gould et al. Comput Optim Appl 60:545–557, 2015), confirming the efficiency of our proposal and the improvement with respect to the preconditioner in Fasano and Roma (Comput Optim Appl 56:253–290, 2013).
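To make the evaluation setting concrete, this is the loop in which such preconditioners are exercised: textbook preconditioned CG for a positive definite system, where each iteration costs one product with H and one application of the matrix-free preconditioner M_apply. Generic code, not tied to the specific preconditioners of the paper.

```python
import numpy as np

def pcg(matvec, b, M_apply, tol=1e-8, max_iter=200):
    """Standard preconditioned conjugate gradient for H x = b with H positive
    definite; matvec applies H and M_apply applies the preconditioner."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_apply(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Hp = matvec(p)
        alpha = rz / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```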


Applied Mathematics and Computation | 2018

Preconditioned Nonlinear Conjugate Gradient methods based on a modified secant equation

Andrea Caliciotti; Giovanni Fasano; Massimo Roma

This paper presents a twofold result for the Nonlinear Conjugate Gradient (NCG) method in large scale unconstrained optimization. First, we carry out a theoretical analysis where preconditioning is embedded in a strong convergence framework of an NCG method from the literature. Mild conditions to be satisfied by the preconditioners are defined in order to preserve NCG convergence. Second, we detail the use of novel matrix-free preconditioners for NCG. Our proposals are based on quasi-Newton updates and either satisfy the secant equation or a secant-like condition at some of the previous iterates. We show that, in some sense, the preconditioners we propose also approximate the inverse of the Hessian matrix. In particular, the structures of our preconditioners depend on the low-rank updates used, along with different choices of specific parameters. The low-rank updates are obtained as a by-product of NCG iterations. The results of an extensive numerical experience using large scale CUTEst problems are reported, showing that our preconditioners can considerably improve the performance of NCG methods.
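A standard example of a quasi-Newton operator satisfying the secant equation, in the spirit of the abstract: the memoryless BFGS inverse update built from the latest pair (s, y), for which M^{-1} y = s holds by construction. This is a textbook sketch, assuming the curvature condition y.T s > 0, not the specific low-rank construction proposed in the paper.

```python
import numpy as np

def memoryless_bfgs_apply(v, s, y):
    """Apply the memoryless BFGS inverse-Hessian approximation built from the
    most recent pair s = x_{k+1} - x_k, y = grad_{k+1} - grad_k; it satisfies
    the secant equation M_inv(y) = s. Assumes the curvature y @ s > 0."""
    rho = 1.0 / (y @ s)
    sv, yv = s @ v, y @ v
    return (v - rho * (sv * y + yv * s)
              + (rho * sv) * (1.0 + rho * (y @ y)) * s)
```

In a preconditioned NCG iteration the search direction then becomes p = -memoryless_bfgs_apply(g, s, y) + beta * p_prev, with beta given by the chosen NCG formula.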

Collaboration


Dive into Massimo Roma's collaborations.

Top Co-Authors

Stefano Lucidi (Sapienza University of Rome)
Giovanni Fasano (Ca' Foscari University of Venice)
Nicholas I. M. Gould (Rutherford Appleton Laboratory)
Laura Palagi (Sapienza University of Rome)
Luca Paulon (Sapienza University of Rome)
Massimo Maurici (University of Rome Tor Vergata)
Gianni Di Pillo (Sapienza University of Rome)