
Publications


Featured research published by Laura Palagi.


Transportation Science | 1999

A Mathematical Programming Approach for the Solution of the Railway Yield Management Problem

A. Ciancimino; G. Inzerillo; Stefano Lucidi; Laura Palagi

Railway passenger transportation plays a fundamental role in Europe, particularly in view of the growing number of trains offering valuable services such as high speed travel, high comfort, etc. Hence, it is advantageous to submit seat inventories to a Yield Management system to get the maximum revenue. We consider a deterministic linear programming model and a probabilistic nonlinear programming model for the network problem with non-nested seat allocation. A first comparative analysis of the computational results obtained by the two models, both in terms of the overall expected revenue and in terms of CPU time, is carried out. Furthermore, we describe a new nonlinear algorithm for the solution of the probabilistic nonlinear programming model that exploits the structure of the optimization problem. The numerical results obtained on a set of real data show that, for this class of problems, this algorithm is more efficient than other standard algorithms for nonlinear programming problems.
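The paper treats a full network with non-nested seat allocation; as a much simpler hedged illustration of the deterministic side only, the single-leg version of the LP (maximize fare-weighted allocations subject to one capacity constraint and per-class demand bounds) is solved exactly by a greedy fill in decreasing fare order. The fare classes and figures below are invented for the sketch.

```python
# Deterministic single-leg seat allocation: maximize sum(fare_i * x_i)
# subject to sum(x_i) <= capacity and 0 <= x_i <= demand_i.
# For a single leg this LP is solved exactly by a greedy fill in
# decreasing fare order (a standard exchange argument shows optimality).

def allocate_seats(fares, demands, capacity):
    """Return (allocation dict, revenue) for one train leg."""
    remaining = capacity
    allocation = {}
    # Visit fare classes from most to least valuable.
    for cls in sorted(fares, key=fares.get, reverse=True):
        seats = min(demands[cls], remaining)
        allocation[cls] = seats
        remaining -= seats
    revenue = sum(fares[c] * allocation[c] for c in fares)
    return allocation, revenue

# Toy instance: 3 fare classes, 100 seats.
fares = {"full": 120.0, "business": 80.0, "promo": 40.0}
demands = {"full": 30, "business": 50, "promo": 60}
alloc, rev = allocate_seats(fares, demands, capacity=100)
print(alloc, rev)  # promo demand is rationed to the 20 seats left over
```

The network model in the paper couples many such legs through shared capacity, which is what makes an LP (or the probabilistic nonlinear model) necessary.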


SIAM Journal on Optimization | 2002

A Truncated Newton Algorithm for Large Scale Box Constrained Optimization

Francisco Facchinei; Stefano Lucidi; Laura Palagi

A method for the solution of minimization problems with simple bounds is presented. Global convergence of a general scheme requiring the approximate solution of a single linear system at each iteration is proved and a superlinear convergence rate is established without requiring the strict complementarity assumption. The algorithm proposed is based on a simple, smooth unconstrained reformulation of the bound constrained problem and may produce a sequence of points that are not feasible. Numerical results and comparison with existing codes are reported.
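To make the problem class concrete, here is a far simpler projected-gradient baseline for minimization with simple bounds, not the paper's truncated Newton scheme: each iterate takes a gradient step and clips it back into the box. The quadratic instance is invented for the sketch.

```python
import numpy as np

# Box-constrained minimization of f(x) = 0.5 x'Qx - b'x on [lo, hi]^n.
# Not the paper's algorithm: a plain projected-gradient baseline for the
# same problem class, to make the setting concrete.

def projected_gradient(Q, b, lo, hi, steps=2000):
    n = len(b)
    x = np.clip(np.zeros(n), lo, hi)
    L = np.linalg.eigvalsh(Q)[-1]          # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = Q @ x - b
        x = np.clip(x - grad / L, lo, hi)  # gradient step, then projection
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([10.0, -2.0])
x = projected_gradient(Q, b, lo=0.0, hi=2.0)
print(x)  # the unconstrained minimizer lies outside the box, so bounds bind
```

The paper's approach is quite different in spirit: it rewrites the bound-constrained problem as a smooth unconstrained one and applies a truncated Newton method, gaining a superlinear rate that a projected-gradient method lacks.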


Optimization Methods & Software | 2005

On the convergence of a modified version of SVM light algorithm

Laura Palagi; Marco Sciandrone

In this work, we consider the convex quadratic programming problem arising in support vector machines (SVMs), a technique designed to solve a variety of learning and pattern recognition problems. Since the Hessian matrix is dense and real applications lead to large-scale problems, several decomposition methods have been proposed that split the original problem into a sequence of smaller subproblems. The SVM light algorithm is a commonly used decomposition method for SVM, and its convergence has been proved only recently under a suitable block-wise convexity assumption on the objective function. In the SVM light algorithm, the size q of the working set, i.e. the dimension of the subproblem, can be any even number. In the present paper, we propose a decomposition method based on a proximal point modification of the subproblem and on a working set selection rule that includes, as a particular case, the one used by the SVM light algorithm. We establish the asymptotic convergence of the method for any size q ≥ 2 of the working set, without requiring any further block-wise convexity assumption on the objective function. Furthermore, we show that the algorithm satisfies, in a finite number of iterations, a stopping criterion based on the violation of the optimality conditions.
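The proximal modification can be sketched in isolation, stripped of the SVM equality and box constraints: each working set W minimizes f plus a term (tau/2)·||x_W − x_W^k||², which keeps every subproblem strongly convex even when the principal submatrix Q[W,W] is singular. The small quadratic below is invented for the sketch and uses cyclic rather than SVM light working-set selection.

```python
import numpy as np

# Decomposition with a proximal term, on an unconstrained convex quadratic
# f(x) = 0.5 x'Qx - b'x: at each step pick a working set W of size q and
# minimize f(x) + (tau/2) * ||x_W - x_W^k||^2 over the variables in W only.

def decompose(Q, b, q=2, tau=0.1, sweeps=300):
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for start in range(0, n, q):            # cyclic working sets
            W = np.arange(start, min(start + q, n))
            rest = np.setdiff1d(np.arange(n), W)
            # Setting the subproblem gradient to zero gives
            # (Q[W,W] + tau I) x_W = b[W] - Q[W,rest] x_rest + tau x_W^k
            A = Q[np.ix_(W, W)] + tau * np.eye(len(W))
            rhs = b[W] - Q[np.ix_(W, rest)] @ x[rest] + tau * x[W]
            x[W] = np.linalg.solve(A, rhs)
    return x

Q = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])
x = decompose(Q, b)
print(x, np.linalg.solve(Q, b))  # the two should agree closely
```

Any fixed point of the sweep makes every block of the full gradient vanish, so the proximal term changes the path of the iterates but not the limit.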


Computational Optimization and Applications | 2007

A convergent decomposition algorithm for support vector machines

Stefano Lucidi; Laura Palagi; Arnaldo Risi; Marco Sciandrone

In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular we are interested in solving problems where the number of variables is so huge that traditional optimization methods cannot be directly applied. Many interesting real-world problems lead to the solution of large scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a technique for machine learning problems. For this particular subclass of convex quadratic problems, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from the previous methods in the rule for the choice of the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be suited to tackling large scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm for the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100,000 variables.
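With a single linear equality constraint plus box constraints, the smallest feasible working set is a pair (i, j): moving along d = e_i − e_j leaves the equality constraint untouched. The sketch below, with an instance invented for illustration, performs one such SMO-style step with an exact line search on a quadratic objective, clipped to the box; it is not the paper's selection rule.

```python
import numpy as np

# One pair update for min f(x) = 0.5 x'Qx - b'x s.t. sum(x) = s, 0 <= x <= C.
# Along d = e_i - e_j the equality constraint is preserved, and the exact
# line-search step is t* = -g'd / d'Qd, clipped so the box stays satisfied.

def pair_step(Q, b, x, i, j, C):
    g = Q @ x - b
    d = np.zeros(len(x)); d[i], d[j] = 1.0, -1.0
    curv = d @ Q @ d
    t = -(g @ d) / curv if curv > 0 else 0.0
    # Clip so that 0 <= x_i + t <= C and 0 <= x_j - t <= C.
    t = max(min(t, C - x[i], x[j]), -x[i], x[j] - C)
    x = x.copy(); x[i] += t; x[j] -= t
    return x

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, 1.0])
x0 = np.array([1.0, 1.0])            # feasible: sum = 2, inside [0, 2]
x1 = pair_step(Q, b, x0, 0, 1, C=2.0)
print(x1, x1.sum())  # the sum is still 2
```

The algorithm in the paper chains such subproblems under a specific variable-selection rule and adds the proximal term so convergence holds even for nonconvex objectives.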


IEEE Transactions on Neural Networks | 2009

A Convergent Hybrid Decomposition Algorithm Model for SVM Training

Stefano Lucidi; Laura Palagi; Arnaldo Risi; Marco Sciandrone

Training of support vector machines (SVMs) requires the solution of a linearly constrained convex quadratic problem. In real applications, the number of training data may be huge and the Hessian matrix cannot be stored. To deal with this issue, a common strategy is to use decomposition algorithms which at each iteration operate only on a small subset of variables, usually referred to as the working set. Training time can be significantly reduced by using a caching technique that allocates some memory space to store the columns of the Hessian matrix corresponding to the variables recently updated. The convergence properties of a decomposition method can be guaranteed by means of a suitable selection of the working set, and this can limit the possibility of exploiting the information stored in the cache. We propose a general hybrid algorithm model which combines the capability of producing a globally convergent sequence of points with a flexible use of the information in the cache. As an example of a specific realization of the general hybrid model, we describe an algorithm based on a particular strategy for exploiting the information deriving from a caching technique. We report the results of computational experiments performed by simple implementations of this algorithm. The numerical results point out the potential of the approach.
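The caching idea itself is easy to demonstrate: keep only the most recently used Hessian columns and recompute the rest on demand. The sketch below uses Python's `functools.lru_cache` for the eviction policy; `column()` and the toy data stand in for an expensive kernel-column computation and are not the paper's API.

```python
from functools import lru_cache

# Caching Hessian columns: the full matrix never fits in memory, so only
# recently touched columns are kept; older ones are evicted and must be
# recomputed if needed again.

data = [1.0, 2.0, 3.0, 4.0]

@lru_cache(maxsize=2)                 # cache at most 2 columns
def column(j):
    # Linear-kernel column: H[i][j] = data[i] * data[j]
    return tuple(xi * data[j] for xi in data)

column(0); column(1); column(0)       # the repeated request is served from cache
column(2)                             # evicts the least recently used column (1)
info = column.cache_info()
print(info.hits, info.misses)         # → 1 3
```

The tension the paper addresses is visible even here: a working-set rule chosen purely for convergence guarantees may keep requesting columns that were just evicted, while the hybrid model lets the cache contents influence the selection.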


SIAM Journal on Optimization | 2013

An Exact Algorithm for Nonconvex Quadratic Integer Minimization Using Ellipsoidal Relaxations

Christoph Buchheim; M. De Santis; Laura Palagi; Mauro Piacentini

We propose a branch-and-bound algorithm for minimizing a not necessarily convex quadratic function over integer variables. The algorithm is based on lower bounds computed as continuous minima of the objective function over appropriate ellipsoids. In the nonconvex case, we use ellipsoids enclosing the feasible region of the problem. In spite of the nonconvexity, these minima can be computed quickly; the corresponding optimization problems are equivalent to trust-region subproblems. We present several ideas that allow us to accelerate the solution of the continuous relaxation within a branch-and-bound scheme and examine the performance of the overall algorithm by computational experiments. Good computational performance is shown especially for ternary instances.
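To fix what "exact" means on this problem class, here is a brute-force enumeration baseline on a tiny ternary instance with an indefinite Q (invented for the sketch). The paper's branch-and-bound with ellipsoidal bounds prunes exactly this search; enumeration is only viable because the instance is small.

```python
import itertools

# Minimize x'Qx + c'x over x in {-1, 0, 1}^n by exhaustive enumeration.
# An exact method such as the paper's branch-and-bound must return the
# same minimizer, but without visiting all 3^n points.

def enumerate_min(Q, c):
    n = len(c)
    best_val, best_x = float("inf"), None
    for x in itertools.product((-1, 0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        val += sum(c[i] * x[i] for i in range(n))
        if val < best_val:
            best_val, best_x = val, x
    return best_x, best_val

# Indefinite (nonconvex) Q: its eigenvalues have opposite signs.
Q = [[1.0, 2.0], [2.0, -1.0]]
c = [0.5, 0.0]
x, val = enumerate_min(Q, c)
print(x, val)  # → (-1, 1) -4.5
```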


Computational Optimization and Applications | 2010

A truncated Newton method in an augmented Lagrangian framework for nonlinear programming

Gianni Di Pillo; Giampaolo Liuzzi; Stefano Lucidi; Laura Palagi

In this paper we propose a primal-dual algorithm for the solution of general nonlinear programming problems. The core of the method is a local algorithm which relies on a truncated procedure for the computation of a search direction, and is thus suitable for large scale problems. The truncated direction produces a sequence of points which locally converges to a KKT pair with superlinear convergence rate. The local algorithm is globalized by means of a suitable merit function which is able to measure and to enforce progress of the iterates towards a KKT pair without deteriorating the local efficiency. In particular, we adopt the exact augmented Lagrangian function introduced in Di Pillo and Lucidi (SIAM J. Optim. 12:376–406, 2001), which allows us to guarantee the boundedness of the sequence produced by the algorithm and which has strong connections with the above mentioned truncated direction. The resulting overall algorithm is globally and superlinearly convergent under mild assumptions.
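The "truncated" ingredient can be shown in isolation: the Newton system H d = −g is solved only approximately by a few conjugate-gradient iterations, needing nothing but Hessian-vector products. The unconstrained quadratic demo below is invented for the sketch; the augmented Lagrangian machinery and the globalization are omitted.

```python
import numpy as np

# Truncated Newton direction: approximately solve H d = -g by conjugate
# gradients, using only H @ v products (no factorization of H).

def truncated_newton_step(hess_vec, g, cg_iters=10, tol=1e-10):
    d = np.zeros_like(g)
    r = -g.copy()                 # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(cg_iters):
        Hp = hess_vec(p)
        alpha = (r @ r) / (p @ Hp)
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

# Demo: f(x) = 0.5 x'Hx - b'x, so one (un-truncated) step is exact.
H = np.array([[5.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
g = H @ x - b
x = x + truncated_newton_step(lambda v: H @ v, g)
print(x, H @ x - b)  # the gradient is (numerically) zero after one step
```

In the paper the same truncated direction is computed for the primal-dual system and accepted or corrected through the exact augmented Lagrangian merit function.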


Mathematics of Operations Research | 2005

Convergence to Second-Order Stationary Points of a Primal-Dual Algorithm Model for Nonlinear Programming

Gianni Di Pillo; Stefano Lucidi; Laura Palagi

We define a primal-dual algorithm model (second-order Lagrangian algorithm, SOLA) for inequality constrained optimization problems that generates a sequence converging to points satisfying the second-order necessary conditions for optimality. This property can be enforced by combining the equivalence between the original constrained problem and the unconstrained minimization of an exact augmented Lagrangian function and the use of a curvilinear line search technique that exploits information on the nonconvexity of the augmented Lagrangian function.
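A minimal sketch of why second-order information matters, on a toy function invented for illustration: at a saddle point the gradient vanishes, so a first-order method sees a stationary point, but a Hessian eigenvector with a negative eigenvalue still provides a descent direction, which is what a curvilinear line search can exploit.

```python
import numpy as np

# At the saddle of f(x, y) = x^2 - y^2 the gradient is zero, yet the
# second-order necessary condition fails: the Hessian has a negative
# eigenvalue, and its eigenvector is a direction of decrease.

def f(x):
    return x[0] ** 2 - x[1] ** 2     # saddle at the origin

x = np.zeros(2)
H = np.array([[2.0, 0.0], [0.0, -2.0]])   # Hessian of f (constant)
eigvals, eigvecs = np.linalg.eigh(H)
v = eigvecs[:, 0]                    # eigenvector for the smallest eigenvalue
print(eigvals[0])                    # → -2.0: second-order condition violated
print(f(x + 0.5 * v) < f(x))         # → True: moving along v decreases f
```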


Journal of Global Optimization | 2005

Quartic Formulation of Standard Quadratic Optimization Problems

Immanuel M. Bomze; Laura Palagi

A standard quadratic optimization problem (StQP) consists of finding the largest or smallest value of a (possibly indefinite) quadratic form over the standard simplex, which is the intersection of a hyperplane with the positive orthant. This NP-hard problem has several immediate real-world applications like the Maximum-Clique Problem, and it also occurs in a natural way as a subproblem in quadratic programming with linear constraints. To get rid of the (sign) constraints, we propose a quartic reformulation of StQPs, which is a special case (degree four) of a homogeneous program over the unit sphere. It turns out that while KKT points do not exactly correspond to each other, there is a one-to-one correspondence between feasible points of the StQP satisfying second-order necessary optimality conditions and their counterparts in the quartic homogeneous formulation. We supplement this study by showing how exact penalty approaches can be used to find local solutions of the quartic problem satisfying second-order necessary optimality conditions: we show that the level sets of the penalty function are bounded for a finite value of the penalty parameter, which can be fixed in advance, thus establishing exact equivalence of the constrained quartic problem with the unconstrained penalized version.
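The substitution behind the reformulation is easy to verify numerically: with x_i = y_i², the simplex constraints (sum of x = 1, x ≥ 0) collapse into the single sphere constraint ||y|| = 1, and the quadratic x'Qx becomes a degree-4 form in y. The random instance below is generated only for the check.

```python
import numpy as np

# Check the quartic reformulation: x = y**2 maps the unit sphere onto the
# standard simplex, and the StQP objective x'Qx equals the quartic
# (y**2)'Q(y**2) at the corresponding points.

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4)); Q = (Q + Q.T) / 2    # possibly indefinite

y = rng.standard_normal(4)
y /= np.linalg.norm(y)              # any point on the unit sphere
x = y ** 2                          # the induced point on the simplex

assert abs(x.sum() - 1.0) < 1e-12 and (x >= 0).all()
quadratic = x @ Q @ x               # StQP objective at x
quartic = (y ** 2) @ Q @ (y ** 2)   # degree-4 objective at y
print(abs(quadratic - quartic))     # → 0.0: the two objectives coincide
```

Sign constraints disappear because y_i² is nonnegative by construction, which is exactly the point of the reformulation.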


Optimization | 1993

An exact penalty-Lagrangian approach for a class of constrained optimization problems with bounded variables

G. Di Pillo; Stefano Lucidi; Laura Palagi

In this paper we consider a class of equality constrained optimization problems with box constraints on a part of the variables. The study of nonlinear programming problems with such a structure is justified by the existence of practical problems in many fields, for example optimal control or economic modelling. Typically, the dimension of these problems is very large and, in such situations, the classical methods to solve NLP problems may have serious drawbacks. In this paper we define a new continuously differentiable exact penalty function which transforms the original constrained problem into an unconstrained one and is well suited to tackling large scale problems. In particular, this new function is based on a mixed exact penalty-Lagrangian approach, which allows us to take full advantage of the particular structure of the considered class of problems. We show that there is a one-to-one correspondence between Kuhn-Tucker points (local and global minimum points) of the constrained problem and stat...
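What "exact" means for a penalty function can be shown on the smallest possible example, invented for the sketch: minimize f(x) = x² subject to x = 1. An l1 penalty with weight c ≥ 2 has its minimizer exactly at the constrained solution x* = 1, while a quadratic penalty only approaches it as c grows. (The paper's function is continuously differentiable and exact; this nondifferentiable toy only illustrates the exactness property.)

```python
# Compare an exact (l1) penalty with an inexact (quadratic) penalty for
# min x^2 subject to x = 1, by brute-force search on a fine grid.

def argmin_on_grid(phi, lo=-2.0, hi=2.0, steps=40001):
    best_x, best_v = lo, phi(lo)
    for k in range(steps):
        x = lo + (hi - lo) * k / (steps - 1)
        v = phi(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

c = 4.0
l1_min = argmin_on_grid(lambda x: x * x + c * abs(x - 1.0))
quad_min = argmin_on_grid(lambda x: x * x + c * (x - 1.0) ** 2)
print(l1_min)    # exactly 1.0: the constrained solution, for finite c >= 2
print(quad_min)  # ≈ c/(1+c) = 0.8: still biased away from the constraint
```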

Collaboration


Dive into Laura Palagi's collaboration.

Top Co-Authors

Stefano Lucidi
Sapienza University of Rome

Mauro Piacentini
Sapienza University of Rome

Gianni Di Pillo
Sapienza University of Rome

Luigi Grippo
Sapienza University of Rome

Veronica Piccialli
Sapienza University of Rome

Giampaolo Liuzzi
Sapienza University of Rome

Arnaldo Risi
National Research Council