
Publication


Featured research published by Jorge Nocedal.


Mathematical Programming | 1989

On the limited memory BFGS method for large scale optimization

Dong C. Liu; Jorge Nocedal

We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However, we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems.
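The L-BFGS search direction is commonly computed with the two-loop recursion over the stored correction pairs. The sketch below is a minimal NumPy rendering of that recursion, including the scaling gamma = s'y / y'y of the initial matrix, which is one standard form of the "simple scaling" the abstract refers to; function and variable names are ours.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: returns -H_k @ grad, where H_k is the inverse-Hessian
    approximation built from the stored correction pairs
    (s_i, y_i) = (x_{i+1} - x_i, g_{i+1} - g_i), newest last."""
    if not s_list:                      # no curvature information yet
        return -grad
    q = grad.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: run from the newest pair to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * s.dot(q)
        q -= alpha * y
        alphas.append(alpha)
    # Scale the initial matrix H_0 = gamma * I with gamma = s'y / y'y,
    # one common choice of the "simple scaling" mentioned in the abstract.
    s, y = s_list[-1], y_list[-1]
    r = (s.dot(y) / y.dot(y)) * q
    # Second loop: run from the oldest pair back to the newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        r += (alpha - rho * y.dot(r)) * s
    return -r
```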


SIAM Journal on Scientific Computing | 1995

A limited memory algorithm for bound constrained optimization

Richard H. Byrd; Peihuang Lu; Jorge Nocedal; Ciyou Zhu

An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited memory BFGS matrix to approximate the Hessian of the objective function. It is shown how to take advantage of the form of the limited memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
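As a much-simplified illustration of the gradient projection idea, one can take a steepest-descent trial point and project it componentwise back onto the box. This toy sketch omits what the paper actually does (a search for a generalized Cauchy point along the projected path, followed by subspace minimization with the limited memory matrix); all names are illustrative.

```python
import numpy as np

def projected_gradient_step(x, grad, lower, upper, step=1e-2):
    """One steepest-descent step projected onto the box lower <= x <= upper.
    np.clip is the componentwise projection P(x) = max(lower, min(x, upper))."""
    return np.clip(x - step * grad, lower, upper)
```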


Mathematics of Computation | 1980

Updating Quasi-Newton Matrices With Limited Storage

Jorge Nocedal

We study how to use BFGS quasi-Newton matrices to precondition minimization methods for problems where storage is critical. We give an update formula which generates matrices using information from the last m iterations, where m is any number supplied by the user. The quasi-Newton matrix is updated at every iteration by dropping the oldest information and replacing it with the newest information. It is shown that the matrices generated have some desirable properties. The resulting algorithms are tested numerically and compared with several well-known methods.

1. Introduction. For the problem of minimizing an unconstrained function f of n variables, quasi-Newton methods are widely employed [4]. They construct a sequence of matrices which in some way approximate the Hessian of f (or its inverse). These matrices are symmetric; therefore, it is necessary to have n(n + 1)/2 storage locations for each one. For large dimensional problems it will not be possible to retain the matrices in the high-speed storage of a computer, and one has to resort to other kinds of algorithms. For example, one could use the methods (Toint [15], Shanno [12]) which preserve the sparsity structure of the Hessian, or conjugate gradient (CG) methods, which only have to store 3 or 4 vectors. Recently, some CG algorithms have been developed which use a variable amount of storage and which do not require knowledge about the sparsity structure of the problem [2], [7], [8]. A disadvantage of these methods is that after a certain number of iterations the quasi-Newton matrix is discarded, and the algorithm is restarted using an initial matrix (usually a diagonal matrix). We describe an algorithm which uses a limited amount of storage and where the quasi-Newton matrix is updated continuously. At every step the oldest information contained in the matrix is discarded and replaced by new information. In this way we hope to have a more up-to-date model of our function. We will concentrate on the BFGS method since it is considered to be the most efficient. We believe that similar algorithms cannot be developed for the other members of the Broyden β-class [1]. Let f be the function to be minimized, g its gradient and H its Hessian.
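The "drop the oldest, add the newest" bookkeeping is naturally expressed as a bounded queue of correction pairs; a minimal sketch (the container and function names are ours, m as in the abstract):

```python
from collections import deque

m = 5                      # history length supplied by the user
pairs = deque(maxlen=m)    # the m most recent correction pairs (s_i, y_i)

def record_step(x_old, x_new, g_old, g_new):
    """Store (s, y) = (x_new - x_old, g_new - g_old); appending to a full
    deque silently discards the oldest pair, mirroring the update above."""
    pairs.append((x_new - x_old, g_new - g_old))
```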


ACM Transactions on Mathematical Software | 1997

Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization

Ciyou Zhu; Richard H. Byrd; Peihuang Lu; Jorge Nocedal

L-BFGS-B is a limited-memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. It is intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems. L-BFGS-B can also be used for unconstrained problems and in this case performs similarly to its predecessor, algorithm L-BFGS (Harwell routine VA15). The algorithm is implemented in Fortran 77.
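SciPy's bound-constrained quasi-Newton option is based on this Fortran code, so the algorithm is easy to try from Python; the Rosenbrock test function and the bounds below are just an example:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.full(4, 0.5)
bounds = [(0.0, 2.0)] * len(x0)   # simple bounds l <= x <= u on each variable

res = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)
```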


SIAM Journal on Optimization | 1999

An Interior Point Algorithm for Large-Scale Nonlinear Programming

Richard H. Byrd; Mary E. Hribar; Jorge Nocedal

The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests.
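In standard interior-point notation (our rendering, not a formula quoted from the paper), the barrier approach replaces the inequality-constrained problem min f(x) subject to c(x) >= 0 by a sequence of equality-constrained subproblems driven by a decreasing barrier parameter:

```latex
% Barrier subproblem, solved for a sequence mu -> 0;
% the slacks s stay strictly positive inside the iteration.
\min_{x,\,s}\;\; f(x) \;-\; \mu \sum_{i=1}^{m} \log s_i
\qquad \text{subject to} \qquad c(x) - s = 0
```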


Mathematical Programming | 2000

A trust region method based on interior point techniques for nonlinear programming

Richard H. Byrd; Jean Charles Gilbert; Jorge Nocedal

An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented.
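Schematically (our notation, not the paper's), each SQP step d for the barrier problem with barrier function φ_μ minimizes a quadratic model subject to the linearized constraints and a trust-region bound:

```latex
% Trust-region SQP model problem at the iterate x_k
% (W_k: Hessian of the Lagrangian, A_k: constraint Jacobian, Delta_k: radius)
\min_{d}\;\; \nabla \varphi_\mu(x_k)^T d + \tfrac{1}{2}\, d^T W_k\, d
\qquad \text{subject to} \qquad A_k d + c(x_k) = 0 , \quad \|d\| \le \Delta_k
```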


SIAM Journal on Optimization | 1992

Global Convergence Properties of Conjugate Gradient Methods for Optimization

Jean Charles Gilbert; Jorge Nocedal

This paper explores the convergence of nonlinear conjugate gradient methods without restarts, and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher–Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak–Ribière method. Numerical experiments are presented.
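The two methods named above differ only in the scalar β_k used in the direction update d_{k+1} = -g_{k+1} + β_k d_k; the classical choices are (standard formulas, our rendering):

```latex
\beta_k^{\mathrm{FR}} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}
\quad \text{(Fletcher--Reeves)},
\qquad
\beta_k^{\mathrm{PR}} = \frac{g_{k+1}^T\,(g_{k+1} - g_k)}{\|g_k\|^2}
\quad \text{(Polak--Ribi\`ere)}
```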


Archive | 2006

Knitro: An Integrated Package for Nonlinear Optimization

Richard H. Byrd; Jorge Nocedal; Richard A. Waltz

This paper describes Knitro 5.0, a C package for nonlinear optimization that combines complementary approaches to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.


Mathematical Programming | 1994

Representations of quasi-Newton matrices and their use in limited memory methods

Richard H. Byrd; Jorge Nocedal; Robert B. Schnabel

We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations.
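For BFGS, for example, the compact representation takes the following well-known form (standard statement of the identity; notation follows the usual convention):

```latex
% S_k = [s_0, ..., s_{k-1}],  Y_k = [y_0, ..., y_{k-1}],
% D_k = diag(s_i^T y_i),  (L_k)_{ij} = s_{i-1}^T y_{j-1} for i > j (0 otherwise)
B_k \;=\; B_0 \;-\;
\begin{bmatrix} B_0 S_k & Y_k \end{bmatrix}
\begin{bmatrix} S_k^T B_0 S_k & L_k \\ L_k^T & -D_k \end{bmatrix}^{-1}
\begin{bmatrix} S_k^T B_0 \\ Y_k^T \end{bmatrix}
```

Because the middle matrix is only 2m x 2m, products of B_k with vectors, and projections onto subspaces, cost O(mn) rather than the O(n^2) of a dense approximation.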


Mathematical Programming | 2006

An interior algorithm for nonlinear optimization that combines line search and trust region steps

Richard A. Waltz; José Luis Morales; Jorge Nocedal; Dominique Orban

An interior-point method for nonlinear programming is presented. It enjoys the flexibility of switching between a line search method that computes steps by factoring the primal-dual equations and a trust region method that uses a conjugate gradient iteration. Steps computed by direct factorization are always tried first, but if they are deemed ineffective, a trust region iteration that guarantees progress toward stationarity is invoked. To demonstrate its effectiveness, the algorithm is implemented in the Knitro [6,28] software package and is extensively tested on a wide selection of test problems.

Collaboration


Dive into Jorge Nocedal's collaboration.

Top Co-Authors

Richard H. Byrd, University of Colorado Boulder
Stephen J. Wright, University of Wisconsin-Madison
José Luis Morales, Instituto Tecnológico Autónomo de México
Ciyou Zhu, Northwestern University
Nicholas I. M. Gould, Rutherford Appleton Laboratory
Michael L. Overton, Courant Institute of Mathematical Sciences
Yuchen Wu, Northwestern University