Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Richard H. Byrd is active.

Publications


Featured research published by Richard H. Byrd.


SIAM Journal on Scientific Computing | 1995

A limited memory algorithm for bound constrained optimization

Richard H. Byrd; Peihuang Lu; Jorge Nocedal; Ciyou Zhu

An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited memory BFGS matrix to approximate the Hessian of the objective function. It is shown how to take advantage of the form of the limited memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
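This algorithm is the basis of the L-BFGS-B code that SciPy's `scipy.optimize.minimize` wraps. As a minimal usage sketch (the objective and bounds below are made up for illustration), consider a quadratic whose unconstrained minimizer lies outside the box, so the solution lands on the bound:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x) = (x0 - 2)^2 + (x1 - 3)^2 subject to 0 <= x <= 1.
# The unconstrained minimizer (2, 3) is infeasible, so the solution
# sits on the boundary of the box at (1, 1).
def f(x):
    return (x[0] - 2.0)**2 + (x[1] - 3.0)**2

def grad(x):
    return np.array([2.0*(x[0] - 2.0), 2.0*(x[1] - 3.0)])

res = minimize(f, x0=np.zeros(2), jac=grad, method="L-BFGS-B",
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x)  # -> approximately [1. 1.]
```

The limited-memory Hessian approximation keeps only a handful of recent curvature pairs, so memory use stays linear in the number of variables.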


ACM Transactions on Mathematical Software | 1997

Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization

Ciyou Zhu; Richard H. Byrd; Peihuang Lu; Jorge Nocedal

L-BFGS-B is a limited-memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. It is intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems. L-BFGS-B can also be used for unconstrained problems, in which case it performs similarly to its predecessor, algorithm L-BFGS (Harwell routine VA15). The algorithm is implemented in Fortran 77.


SIAM Journal on Optimization | 1999

An Interior Point Algorithm for Large-Scale Nonlinear Programming

Richard H. Byrd; Mary E. Hribar; Jorge Nocedal

The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests.
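SciPy's `trust-constr` solver is, per its documentation, based on this trust-region interior-point algorithm for the inequality-constrained case. A small sketch on a toy problem (the objective and constraint are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Minimize x0^2 + x1^2 subject to x0 + x1 >= 1.
# By symmetry the solution is x = (0.5, 0.5), where the
# inequality constraint is active.
f = lambda x: x[0]**2 + x[1]**2
con = NonlinearConstraint(lambda x: x[0] + x[1], 1.0, np.inf)

res = minimize(f, x0=np.array([2.0, 0.0]), method="trust-constr",
               constraints=[con])
print(res.x)  # -> approximately [0.5 0.5]
```

Derivatives are estimated by finite differences here; supplying an analytic Jacobian and Hessian is the usual choice for larger problems.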


Mathematical Programming | 2000

A trust region method based on interior point techniques for nonlinear programming

Richard H. Byrd; Jean Charles Gilbert; Jorge Nocedal

An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented.
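The barrier idea itself can be sketched in a few lines: replace the inequality constraint with a logarithmic penalty, and solve a sequence of unconstrained subproblems as the barrier weight shrinks. The sketch below is greatly simplified relative to the paper (a generic derivative-free solver stands in for the SQP/trust-region subproblem machinery, and the toy problem is invented):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize x0^2 + x1^2 subject to x0 + x1 >= 1 by solving a
# sequence of barrier problems  f(x) - mu * log(c(x))  with mu -> 0.
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: x[0] + x[1] - 1.0           # constraint c(x) >= 0

x = np.array([2.0, 2.0])                  # strictly feasible start
mu = 1.0
for _ in range(10):
    # Barrier subproblem for the current mu; infeasible points are
    # rejected by returning +inf.
    phi = lambda z, mu=mu: (f(z) - mu*np.log(c(z))
                            if c(z) > 0 else np.inf)
    x = minimize(phi, x, method="Nelder-Mead").x   # warm start
    mu *= 0.2                             # shrink the barrier weight
print(x)  # -> approaches [0.5 0.5] as mu -> 0
```

Each subproblem's minimizer stays strictly inside the feasible region, tracing the so-called central path toward the constrained solution.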


Archive | 2006

Knitro: An Integrated Package for Nonlinear Optimization

Richard H. Byrd; Jorge Nocedal; Richard A. Waltz

This paper describes Knitro 5.0, a C package for nonlinear optimization that combines complementary approaches to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.


Mathematical Programming | 1994

Representations of quasi-Newton matrices and their use in limited memory methods

Richard H. Byrd; Jorge Nocedal; Robert B. Schnabel

We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations.
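The compact BFGS representation can be written as B_k = B0 - [B0 S, Y] M^{-1} [B0 S, Y]^T, where S and Y collect the correction pairs, and M is built from S^T B0 S, the strictly lower-triangular part of S^T Y, and the diagonal of S^T Y. A NumPy sketch (checked against k sequential dense BFGS updates on synthetic data):

```python
import numpy as np

def bfgs_sequential(B0, S, Y):
    """Apply the standard dense BFGS update once per (s, y) column pair."""
    B = B0.copy()
    for s, y in zip(S.T, Y.T):
        Bs = B @ s
        B = B - np.outer(Bs, Bs)/(s @ Bs) + np.outer(y, y)/(y @ s)
    return B

def bfgs_compact(B0, S, Y):
    """Compact representation:
    B_k = B0 - [B0 S, Y] M^{-1} [B0 S, Y]^T,
    M = [[S^T B0 S, L], [L^T, -D]],
    with L strictly lower triangular, L[i, j] = s_i^T y_j for i > j,
    and D = diag(s_i^T y_i)."""
    STY = S.T @ Y
    L = np.tril(STY, k=-1)
    D = np.diag(np.diag(STY))
    W = np.hstack([B0 @ S, Y])
    M = np.block([[S.T @ B0 @ S, L], [L.T, -D]])
    return B0 - W @ np.linalg.solve(M, W.T)

# Synthetic data with s^T y > 0 (take y = A s for an SPD matrix A).
rng = np.random.default_rng(0)
n, m = 6, 3
A = rng.standard_normal((n, n)); A = A @ A.T + n*np.eye(n)
S = rng.standard_normal((n, m))
Y = A @ S
B0 = np.eye(n)
print(np.allclose(bfgs_sequential(B0, S, Y), bfgs_compact(B0, S, Y)))  # -> True
```

In a limited-memory method one never forms B_k densely; the compact factors make products with B_k cost O(nm) for m stored pairs.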


Mathematical Programming | 1988

Approximate solution of the trust region problem by minimization over two-dimensional subspaces

Richard H. Byrd; Robert B. Schnabel; Gerald A. Shultz

The trust region problem, minimization of a quadratic function subject to a spherical trust region constraint, occurs in many optimization algorithms. In a previous paper, the authors introduced an inexpensive approximate solution technique for this problem that involves the solution of a two-dimensional trust region problem. They showed that using this approximation in an unconstrained optimization algorithm leads to the same theoretical global and local convergence properties as are obtained using the exact solution to the trust region problem. This paper reports computational results showing that the two-dimensional minimization approach gives nearly optimal reductions in the n-dimensional quadratic model over a wide range of test cases. We also show that there is very little difference, in efficiency and reliability, between using the approximate or exact trust region step in solving standard test problems for unconstrained optimization. These results may encourage the application of similar approximate trust region techniques in other contexts.
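The two-dimensional subspace in question is the span of the gradient and the Newton direction. A minimal sketch of the idea (the boundary case below uses a brute-force angle search as a stand-in for an exact two-dimensional solve, and is not the authors' implementation):

```python
import numpy as np

def two_dim_subspace_step(g, B, delta, n_angles=1000):
    """Approximate trust-region step: minimize the quadratic model
    m(p) = g^T p + 0.5 p^T B p over span{g, B^{-1} g},
    subject to ||p|| <= delta."""
    # Orthonormal basis for the two-dimensional subspace.
    Q, _ = np.linalg.qr(np.column_stack([g, np.linalg.solve(B, g)]))
    gr = Q.T @ g            # reduced gradient (2-vector)
    Br = Q.T @ B @ Q        # reduced Hessian (2x2)
    # Interior candidate: the reduced Newton step, if the reduced
    # Hessian is positive definite and the step fits in the region.
    if np.all(np.linalg.eigvalsh(Br) > 0):
        p = -np.linalg.solve(Br, gr)
        if np.linalg.norm(p) <= delta:
            return Q @ p
    # Otherwise minimize the model on the boundary ||p|| = delta.
    theta = np.linspace(0.0, 2*np.pi, n_angles, endpoint=False)
    P = delta * np.vstack([np.cos(theta), np.sin(theta)])
    m = gr @ P + 0.5*np.sum(P * (Br @ P), axis=0)
    return Q @ P[:, np.argmin(m)]

# With a large radius the step reduces to the full Newton step,
# since the Newton direction lies inside the subspace.
B = np.diag([2.0, 3.0, 4.0])
g = np.array([1.0, 1.0, 1.0])
step = two_dim_subspace_step(g, B, delta=10.0)
print(np.allclose(step, -np.linalg.solve(B, g)))  # -> True
```

The payoff is that all the trust-region logic runs in two dimensions, regardless of n, at the cost of one linear solve to form the subspace.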


SIAM Journal on Numerical Analysis | 1987

A Trust Region Algorithm for Nonlinearly Constrained Optimization

Richard H. Byrd; Robert B. Schnabel; Gerald A. Shultz

We present a trust region-based method for the general nonlinearly equality constrained optimization problem. The method works by iteratively minimizing a quadratic model of the Lagrangian subject ...


SIAM Journal on Numerical Analysis | 1989

A tool for the analysis of Quasi-Newton methods with application to unconstrained minimization

Richard H. Byrd; Jorge Nocedal

The BFGS update formula is shown to have an important property that is independent of the algorithmic context of the update, and that is relevant to both constrained and unconstrained optimization. The BFGS method for unconstrained optimization, using a variety of line searches, including backtracking, is shown to be globally and superlinearly convergent on uniformly convex problems. The analysis is particularly simple due to the use of some new tools introduced in this paper.
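The setting of the analysis, BFGS with a backtracking (Armijo) line search on a uniformly convex function, can be sketched in a few lines. This is an illustrative implementation, not the authors' code; the test problem is an invented convex quadratic:

```python
import numpy as np

def bfgs_backtracking(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimal BFGS with an Armijo backtracking line search."""
    n = x0.size
    H = np.eye(n)                       # inverse Hessian approximation
    x, g = x0.astype(float), grad(x0)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                      # quasi-Newton direction
        t = 1.0                         # backtrack: halve until Armijo holds
        while f(x + t*p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t*p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        rho = 1.0/(y @ s)               # curvature s^T y > 0 on convex f
        I = np.eye(n)
        # BFGS update of the inverse Hessian approximation.
        H = (I - rho*np.outer(s, y)) @ H @ (I - rho*np.outer(y, s)) \
            + rho*np.outer(s, s)
        x, g = x_new, g_new
    return x

# Uniformly convex quadratic: f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x = bfgs_backtracking(f, grad, np.zeros(3))
print(np.allclose(x, np.linalg.solve(A, b)))  # -> True
```

On convex problems the curvature condition s^T y > 0 holds automatically, which is what keeps H positive definite and the search directions descent directions.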


SIAM Journal on Numerical Analysis | 1987

Global Convergence of a Class of Quasi-Newton Methods on Convex Problems

Richard H. Byrd; Jorge Nocedal; Ya-Xiang Yuan

We study the global convergence properties of the restricted Broyden class of quasi-Newton methods, when applied to a convex objective function. We assume that the line search satisfies a standard sufficient decrease condition and that the initial Hessian approximation is any positive definite matrix. We show global and superlinear convergence for this class of methods, except for DFP. This generalizes Powell’s well-known result for the BFGS method. The analysis gives us insight into the properties of these algorithms; in particular it shows that DFP lacks a very desirable self-correcting property possessed by BFGS.

Collaboration


Dive into Richard H. Byrd's collaborations.

Top Co-Authors

Robert B. Schnabel
National Institute of Standards and Technology

Robert B. Schnabel
University of Colorado Boulder

Elizabeth Eskow
University of Colorado Boulder

Gerald A. Shultz
University of Colorado Boulder

Ciyou Zhu
Northwestern University

Paul T. Boggs
National Institute of Standards and Technology

Silvia A. Crivelli
Lawrence Berkeley National Laboratory