Publication


Featured research published by Luigi Grippo.


Operations Research Letters | 2000

On the convergence of the block nonlinear Gauss-Seidel method under convex constraints

Luigi Grippo; Marco Sciandrone

We give new convergence results for the block Gauss-Seidel method for problems where the feasible set is the Cartesian product of m closed convex sets, under the assumption that the sequence generated by the method has limit points. We show that the method is globally convergent for m=2 and that for m>2 convergence can be established both when the objective function f is componentwise strictly quasiconvex with respect to m-2 components and when f is pseudoconvex. Finally, we consider a proximal point modification of the method and we state convergence results without any convexity assumption on the objective function.
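The two-block (m=2) case can be sketched with exact block minimization over a product of intervals. The objective, bounds, and closed-form block solutions below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimize f(x, y) = (x - y)^2 + 0.1*(x - 3)^2 over x in [0, 2], y in [0, 2]
# by block Gauss-Seidel: exactly minimize over each block in turn, projecting
# the unconstrained 1-D minimizer onto the closed convex interval constraint.
def block_gauss_seidel(iters=200):
    x, y = 0.0, 0.0
    for _ in range(iters):
        # argmin over x: solve 2(x - y) + 0.2(x - 3) = 0, then clip to [0, 2]
        x = float(np.clip((2.0 * y + 0.6) / 2.2, 0.0, 2.0))
        # argmin over y: y = x, clipped to [0, 2]
        y = float(np.clip(x, 0.0, 2.0))
    return x, y

x, y = block_gauss_seidel()
```

The iterates converge to the constrained minimizer (2, 2): the unconstrained stationary point lies at x = y = 3, outside the feasible box, so both blocks end up on the boundary.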


Journal of Optimization Theory and Applications | 1989

A truncated Newton method with nonmonotone line search for unconstrained optimization

Luigi Grippo; Francesco Lampariello; S. Lucidi

In this paper, an unconstrained minimization algorithm is defined in which a nonmonotone line search technique is employed in association with a truncated Newton algorithm. Numerical results obtained for a set of standard test problems are reported which indicate that the proposed algorithm is highly effective in the solution of ill-conditioned as well as large-dimensional problems.
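The nonmonotone acceptance rule (accept a step if the trial value is below the maximum of the last M function values, rather than the current one) can be sketched as follows. The quadratic test function and the steepest-descent direction are illustrative stand-ins; the paper pairs the rule with a truncated Newton direction:

```python
import numpy as np

def nonmonotone_descent(f, grad, x0, M=5, gamma=1e-4, sigma=0.5,
                        max_iter=500, tol=1e-8):
    """Gradient descent with the nonmonotone (max over last M values) Armijo rule."""
    x = np.asarray(x0, dtype=float)
    hist = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        f_ref = max(hist[-M:])          # reference: max of the last M function values
        alpha = 1.0
        while f(x + alpha * d) > f_ref + gamma * alpha * g.dot(d):
            alpha *= sigma              # backtrack until the relaxed Armijo test holds
        x = x + alpha * d
        hist.append(f(x))
    return x

# Ill-conditioned quadratic: f(x) = 0.5*(x1^2 + 10*x2^2), minimizer at the origin.
f = lambda x: 0.5 * (x[0]**2 + 10.0 * x[1]**2)
grad = lambda x: np.array([x[0], 10.0 * x[1]])
x_star = nonmonotone_descent(f, grad, [1.0, 1.0])
```

Allowing the objective to increase occasionally lets the method take larger steps than a strictly monotone Armijo rule would accept.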


Mathematical Programming | 1997

A globally convergent version of the Polak-Ribière conjugate gradient method

Luigi Grippo; Stefano Lucidi

In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak-Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters so that the first stationary point along a search direction is eventually accepted when the algorithm is converging to a minimum point with positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results can be recovered as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the good features of the Polak-Ribière method, while avoiding pathological situations.
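A minimal sketch of the Polak-Ribière iteration, here safeguarded with a plain backtracking Armijo line search, the nonnegative (PR+) coefficient, and a steepest-descent restart; the paper's own acceptance rules are more refined, and the convex quadratic test function is an illustrative choice:

```python
import numpy as np

def polak_ribiere(f, grad, x0, max_iter=2000, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0.0:          # safeguard: restart with steepest descent
            d = -g
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5             # backtracking Armijo line search
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # PR+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

f = lambda x: 0.5 * (x[0]**2 + 5.0 * x[1]**2)
grad = lambda x: np.array([x[0], 5.0 * x[1]])
x_star = polak_ribiere(f, grad, [2.0, 1.0])
```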


SIAM Journal on Control and Optimization | 1989

Exact penalty functions in constrained optimization

G. Di Pillo; Luigi Grippo

In this paper formal definitions of exactness for penalty functions are introduced and sufficient conditions for a penalty function to be exact according to these definitions are stated, thus providing a unified framework for the study of both nondifferentiable and continuously differentiable penalty functions. In this framework the best-known classes of exact penalty functions are analyzed, and new results are established concerning the correspondence between the solutions of the constrained problem and the unconstrained minimizers of the penalty functions.


Numerische Mathematik | 1991

A class of nonmonotone stabilization methods in unconstrained optimization

Luigi Grippo; Francesco Lampariello; Stefano Lucidi

This paper deals with the solution of smooth unconstrained minimization problems by Newton-type methods whose global convergence is enforced by means of a nonmonotone stabilization strategy. In particular, a stabilization scheme is analyzed, which includes different kinds of relaxation of the descent requirements. An extensive numerical experimentation is reported.


SIAM Journal on Control and Optimization | 1979

A New Class of Augmented Lagrangians in Nonlinear Programming

G. Di Pillo; Luigi Grippo

In this paper a new class of augmented Lagrangians is introduced, for solving equality constrained problems via unconstrained minimization techniques. It is proved that a solution of the constrained problem and the corresponding values of the Lagrange multipliers can be found by performing a single unconstrained minimization of the augmented Lagrangian. In particular, in the linear quadratic case, the solution is obtained by minimizing a quadratic function. Numerical examples are reported.
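The paper's augmented Lagrangians are exact, so a single unconstrained minimization suffices. As a simpler illustration of the underlying construction, here is the classical first-order method of multipliers (a different, iterative scheme) on a toy equality-constrained problem, with the inner minimization done in closed form; the problem, penalty value, and iteration count are illustrative choices:

```python
# Minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
# Augmented Lagrangian: L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2.
def method_of_multipliers(c=10.0, iters=30):
    lam = 0.0
    for _ in range(iters):
        # Inner step: minimize L_c(., lam). By symmetry x1 = x2 = t, with
        # stationarity 2t + lam + c*(2t - 1) = 0  =>  t = (c - lam) / (2 + 2c).
        t = (c - lam) / (2.0 + 2.0 * c)
        # Multiplier update: lam <- lam + c * h(x).
        lam += c * (2.0 * t - 1.0)
    return t, lam

t, lam = method_of_multipliers()
```

The iterates approach the solution x1 = x2 = 0.5 with multiplier lam = -1; the multiplier error contracts by a factor 1/(1 + c) per outer iteration on this problem.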


Computational Optimization and Applications | 2002

Nonmonotone Globalization Techniques for the Barzilai-Borwein Gradient Method

Luigi Grippo; Marco Sciandrone

In this paper we propose new globalization strategies for the Barzilai and Borwein gradient method, based on suitable relaxations of the monotonicity requirements. In particular, we define a class of algorithms that combine nonmonotone watchdog techniques with nonmonotone linesearch rules and we prove the global convergence of these schemes. Then we perform an extensive computational study, which shows the effectiveness of the proposed approach in the solution of large-dimensional unconstrained optimization problems.
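A minimal sketch of the Barzilai-Borwein step safeguarded by a nonmonotone Armijo backtracking rule; the paper's watchdog combination is more elaborate, and the diagonal quadratic test function is an illustrative choice:

```python
import numpy as np

def bb_nonmonotone(f, grad, x0, M=10, gamma=1e-4, max_iter=1000, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    hist = [f(x)]
    step = 1.0                                # initial trial step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        lam = step
        f_ref = max(hist[-M:])                # nonmonotone reference value
        while f(x - lam * g) > f_ref - gamma * lam * g.dot(g):
            lam *= 0.5                        # backtrack only when needed
        x_new = x - lam * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # Barzilai-Borwein (BB1) trial step for the next iteration.
        step = s.dot(s) / s.dot(y) if s.dot(y) > 1e-12 else 1.0
        x, g = x_new, g_new
        hist.append(f(x))
    return x

f = lambda x: 0.5 * np.sum(np.arange(1.0, 6.0) * x**2)   # diag(1..5) quadratic
grad = lambda x: np.arange(1.0, 6.0) * x
x_star = bb_nonmonotone(f, grad, np.ones(5))
```

The point of the nonmonotone safeguard is that the BB step is usually accepted unchanged, preserving its fast (nonmonotone) behavior, while the relaxed Armijo test still guarantees convergence.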


Mathematical Programming | 1993

A smooth method for the finite minimax problem

G. Di Pillo; Luigi Grippo; Stefano Lucidi

We consider unconstrained minimax problems where the objective function is the maximum of a finite number of smooth functions. We prove that, under usual assumptions, it is possible to construct a continuously differentiable function, whose minimizers yield the minimizers of the max function and the corresponding minimum values. On this basis, we can define implementable algorithms for the solution of the minimax problem, which are globally convergent at a superlinear convergence rate. Preliminary numerical results are reported.
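The paper builds a specific smooth function whose minimizers coincide exactly with those of the max function. As a generic illustration of smoothing a finite max, here is the log-sum-exp surrogate instead (a different smoothing, approximate rather than exact), minimized by gradient descent; the component functions, smoothing parameter, and step size are illustrative choices:

```python
import numpy as np

# Smooth the finite max F(x) = max(x^2, (x - 2)^2) by the log-sum-exp surrogate
# F_mu(x) = mu * log(sum_i exp(f_i(x)/mu)), which tends to F as mu -> 0.
def smooth_minimax(mu=0.1, eta=0.02, iters=500):
    x = 0.0
    for _ in range(iters):
        f = np.array([x**2, (x - 2.0)**2])
        df = np.array([2.0 * x, 2.0 * (x - 2.0)])
        w = np.exp((f - f.max()) / mu)        # numerically stable softmax weights
        w /= w.sum()
        x -= eta * w.dot(df)                  # gradient step on F_mu
    return x

x_star = smooth_minimax()
```

For this symmetric pair the surrogate's minimizer coincides with the minimax point x = 1, where both component functions are active.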


SIAM Journal on Control and Optimization | 1985

A Continuously Differentiable Exact Penalty Function for Nonlinear Programming Problems with Inequality Constraints

G. Di Pillo; Luigi Grippo

In this paper it is shown that, given a nonlinear programming problem with inequality constraints, it is possible to construct a continuously differentiable exact penalty function whose global or local unconstrained minimizers correspond to global or local solutions of the constrained problem.


Optimization Methods & Software | 1994

A class of unconstrained minimization methods for neural network training

Luigi Grippo

In this paper the problem of neural network training is formulated as the unconstrained minimization of a sum of differentiable error terms on the output space. For problems of this form we consider solution algorithms of backpropagation type, where the gradient evaluation is split into different steps, and we state sufficient convergence conditions that exploit the special structure of the objective function. Then we define a globally convergent algorithm that uses the knowledge of the overall error function for the computation of the learning rates. Potential advantages and possible shortcomings of this approach, in comparison with alternative approaches, are discussed.
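Computing the learning rate from the overall error can be sketched as a batch gradient method with an Armijo line search on the total error. The single linear neuron and the data below are illustrative choices; the paper's algorithms also cover split, backpropagation-style gradient evaluation:

```python
import numpy as np

# Toy "network": one linear neuron y = w*x, trained by batch gradient descent
# with an Armijo line search on the total error E(w) = sum_i (w*x_i - y_i)^2.
X = np.array([1.0, 2.0, 3.0])
Y = np.array([2.0, 4.0, 6.0])             # generated by w* = 2

def E(w):
    return np.sum((w * X - Y)**2)

def train(w=0.0, iters=100, gamma=1e-4):
    for _ in range(iters):
        g = 2.0 * np.sum((w * X - Y) * X)     # dE/dw over the whole dataset
        alpha = 1.0
        while E(w - alpha * g) > E(w) - gamma * alpha * g * g:
            alpha *= 0.5                      # learning rate set from the overall error
        w -= alpha * g
    return w

w = train()
```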

Collaboration


Luigi Grippo's top co-authors:

G. Di Pillo (Sapienza University of Rome)
Stefano Lucidi (Sapienza University of Rome)
Laura Palagi (Sapienza University of Rome)
Veronica Piccialli (Sapienza University of Rome)
Mauro Piacentini (Sapienza University of Rome)
Gianni Di Pillo (Sapienza University of Rome)
S. Lucidi (National Research Council)