Network

Latest external collaborations at the country level.

Hotspot

Research topics in which David F. Shanno is active.

Publication


Featured research published by David F. Shanno.


Mathematics of Computation | 1970

Conditioning of Quasi-Newton Methods for Function Minimization

David F. Shanno

Quasi-Newton methods accelerate the steepest-descent technique for function minimization by using computational history to generate a sequence of approximations to the inverse of the Hessian matrix. This paper presents a class of approximating matrices as a function of a scalar parameter. The problem of optimal conditioning of these matrices under an appropriate norm as a function of the scalar parameter is investigated. A set of computational results verifies the superiority of the new methods arising from conditioning considerations to known methods.
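
As a rough illustration of the update family discussed above, the following sketch applies one Broyden-family update to an inverse-Hessian approximation; the scalar parameter phi plays the role of the paper's conditioning parameter (phi = 1 gives the BFGS update). This is a generic textbook formulation, not Shanno's original code.

```python
import numpy as np

def broyden_family_update(H, s, y, phi=1.0):
    """One Broyden-family update of an inverse-Hessian approximation H.

    s = x_{k+1} - x_k (step), y = g_{k+1} - g_k (gradient change).
    phi is the scalar parameter of the one-parameter family:
    phi = 0 gives DFP, phi = 1 gives BFGS.
    Generic textbook sketch, not the paper's original implementation.
    """
    Hy = H @ y
    sy = s @ y          # s^T y, assumed positive (curvature condition)
    yHy = y @ Hy        # y^T H y
    v = s / sy - Hy / yHy
    return (H
            - np.outer(Hy, Hy) / yHy
            + np.outer(s, s) / sy
            + phi * yHy * np.outer(v, v))
```

In a quasi-Newton iteration the updated H replaces the previous approximation in the search direction d = -H g, which is the sense in which these methods accelerate steepest descent.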


Computational Optimization and Applications | 1999

An Interior-Point Algorithm for Nonconvex Nonlinear Programming

Robert J. Vanderbei; David F. Shanno

The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient, and has the promise of greatly reducing solution times on at least some classes of models.
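
As a hedged sketch of the safeguard described above (a standard penalty-barrier construction; the paper's exact formulas may differ), a problem min f(x) subject to h(x) >= 0 is handled by introducing slacks w > 0, replacing the inequalities with a logarithmic barrier, and measuring progress with a merit function that also penalizes infeasibility:

$$
\min_{x,\,w}\; f(x) - \mu \sum_i \log w_i \quad \text{s.t.}\quad h(x) - w = 0,
\qquad
\Psi_{\beta,\mu}(x,w) \;=\; f(x) - \mu \sum_i \log w_i + \frac{\beta}{2}\,\lVert h(x) - w \rVert_2^2 .
$$

A computed step is accepted only if it is a descent direction for $\Psi_{\beta,\mu}$; otherwise the penalty parameter $\beta$ is increased or the search direction is altered, which is the kind of modification the abstract refers to.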


Mathematics of Operations Research | 1978

Conjugate Gradient Methods with Inexact Searches

David F. Shanno

Conjugate gradient methods are iterative methods for finding the minimizer of a scalar function f(x) of a vector variable x which do not update an approximation to the inverse Hessian matrix. This paper examines the effects of inexact linear searches on the methods and shows how the traditional Fletcher-Reeves and Polak-Ribiere algorithms may be modified, in a form discovered by Perry, into a sequence which can be interpreted as a memoryless BFGS algorithm. This algorithm may then be scaled optimally in the sense of Oren and Spedicato. This scaling can be combined with Beale restarts and Powell's restart criterion. Computational results show that this new method substantially outperforms known conjugate gradient methods on a wide class of problems.
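
The "memoryless BFGS" interpretation mentioned above amounts to computing the new search direction as d = -H g, where H is a single BFGS update of the identity built from the latest step s and gradient change y. The sketch below forms H explicitly for readability; it is a naive illustration of the idea rather than Shanno's production code, which uses only vector operations.

```python
import numpy as np

def memoryless_bfgs_direction(g_new, s, y):
    """Search direction d = -H g_new, with H the BFGS update of the identity.

    s = latest step, y = latest gradient change.  Forming the n x n matrix H
    is done here only for clarity; an efficient version expands the products
    and works with vectors alone.
    """
    n = g_new.size
    sy = s @ y                                  # s^T y, assumed positive
    I = np.eye(n)
    # BFGS inverse update with previous approximation equal to I:
    # H = (I - s y^T / s^T y)(I - y s^T / s^T y) + s s^T / s^T y
    H = (I - np.outer(s, y) / sy) @ (I - np.outer(y, s) / sy) + np.outer(s, s) / sy
    return -H @ g_new
```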


Journal of Financial and Quantitative Analysis | 1987

Option Pricing when the Variance Is Changing

Herb Johnson; David F. Shanno

The Monte Carlo method is used to solve for the price of a call when the variance is changing stochastically.
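
Since the abstract gives no details, here is a minimal Monte Carlo sketch of the general technique: simulate the asset price together with a stochastically changing variance, then discount the average call payoff. The square-root variance dynamics and all parameter values below are illustrative assumptions, not the specific process studied by Johnson and Shanno.

```python
import numpy as np

def mc_call_price(S0=100.0, K=100.0, r=0.05, T=1.0,
                  v0=0.04, kappa=2.0, theta=0.04, xi=0.3, rho=-0.5,
                  n_paths=100_000, n_steps=100, seed=0):
    """European call price by Monte Carlo when the variance is stochastic.

    Illustrative square-root (Heston-style) variance process; an assumption
    for this sketch, not necessarily the model of the paper.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                      # full-truncation scheme
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    payoff = np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()
```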


Linear Algebra and its Applications | 1991

Computational experience with a primal-dual interior point method for linear programming

Irvin J. Lustig; Roy E. Marsten; David F. Shanno

A new comprehensive implementation of a primal-dual algorithm for linear programming is described. It allows for easy handling of simple bounds on the primal variables and incorporates free variables, which have not previously been included in a primal-dual implementation. We discuss in detail a variety of computational issues concerning the primal-dual implementation and barrier methods for linear programming in general. We show that, in a certain way, Lustig's method for obtaining feasibility is equivalent to Newton's method. This demonstrates that the method is in some sense the natural way to reduce infeasibility. The role of the barrier parameter in computational practice is studied in detail. Numerical results are given for the entire expanded NETLIB test set for the basic algorithm and its variants, as well as version 5.3 of MINOS.
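
For context, the system such a primal-dual method solves can be stated in the standard textbook form (consistent with, but not copied from, the paper). For the pair min c^T x s.t. Ax = b, x >= 0 and max b^T y s.t. A^T y + z = c, z >= 0, the method applies Newton's method to the perturbed optimality conditions

$$
Ax = b, \qquad A^{T}y + z = c, \qquad XZe = \mu e,
$$

where $X = \operatorname{diag}(x)$, $Z = \operatorname{diag}(z)$, and $\mu > 0$ is the barrier parameter whose choice the paper studies. Each iteration solves the linearized system

$$
A\,\Delta x = b - Ax, \qquad
A^{T}\Delta y + \Delta z = c - A^{T}y - z, \qquad
Z\,\Delta x + X\,\Delta z = \mu e - XZe,
$$

and steps along $(\Delta x, \Delta y, \Delta z)$ while keeping $x$ and $z$ positive.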


ACM Transactions on Mathematical Software | 1980

Remark on “Algorithm 500: Minimization of Unconstrained Multivariate Functions [E4]”

David F. Shanno; K. H. Phua

The subroutine incorporates two nonlinear optimization methods, a conjugate gradient algorithm and a variable metric algorithm, with the choice of method left to the user. The conjugate gradient algorithm is the Beale restarted memoryless variable metric algorithm documented in Shanno [7]. This method requires approximately 7n double-precision words of working storage to be provided by the user. The variable metric method is the BFGS algorithm with initial scaling documented in Shanno and Phua [10], and requires approximately n²/2 + 11n/2 double-precision words of working storage. Whichever method is chosen, the same linear search technique is used for both methods, with two differences. The basic linear search uses Davidon's cubic interpolation to find a step length α which satisfies …
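
The abstract is truncated before the acceptance test itself, but line searches of this type typically require the step length to satisfy Wolfe-type conditions. The following check is a hedged illustration of such a test; the constants and the exact criterion used in Algorithm 500 may differ.

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Strong Wolfe test for a trial step length alpha along direction d.

    Illustrative only: Algorithm 500's actual acceptance test is not shown
    in the truncated abstract above.
    """
    f0 = f(x)
    slope0 = grad(x) @ d                 # directional derivative at alpha = 0
    x_new = x + alpha * d
    sufficient_decrease = f(x_new) <= f0 + c1 * alpha * slope0
    curvature = abs(grad(x_new) @ d) <= c2 * abs(slope0)
    return sufficient_decrease and curvature
```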


Siam Journal on Optimization | 1992

On Implementing Mehrotra's Predictor-Corrector Interior-Point Method for Linear Programming

Irvin J. Lustig; Roy E. Marsten; David F. Shanno

Mehrotra [Tech. Report 90-03, Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL, 1990] recently described a predictor–corrector variant of the primal–dual interior-point algorithm for linear programming. This paper describes a full implementation of this algorithm, with extensions for solving problems with free variables and problems with bounds on primal variables. Computational results on the NETLIB test set are given to show that this new method almost always improves the performance of the primal–dual algorithm and that the improvement increases dramatically as the size and complexity of the problem increases. A numerical instability in using Schur complements to remove dense columns is identified, and a numerical remedy is given.
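
In outline (the standard statement of Mehrotra's idea, not a transcript of this implementation), each iteration first solves the primal-dual Newton system with $\mu = 0$ to obtain an affine-scaling predictor $(\Delta x^{\mathrm{aff}}, \Delta y^{\mathrm{aff}}, \Delta z^{\mathrm{aff}})$, estimates the complementarity that step would achieve,

$$
\mu_{\mathrm{aff}} = \frac{(x + \alpha_P \Delta x^{\mathrm{aff}})^{T}(z + \alpha_D \Delta z^{\mathrm{aff}})}{n},
\qquad
\sigma = \left(\frac{\mu_{\mathrm{aff}}}{\mu}\right)^{3},
$$

and then re-solves the same system (reusing the factorization) with the complementarity right-hand side replaced by $\sigma \mu e - \Delta X^{\mathrm{aff}} \Delta Z^{\mathrm{aff}} e$, combining centering and a second-order correction in a single extra back-solve.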


Mathematical Programming | 1978

Matrix conditioning and nonlinear optimization

David F. Shanno; K. H. Phua

In a series of recent papers, Oren, Oren and Luenberger, Oren and Spedicato, and Spedicato have developed the self-scaling variable metric algorithms. These algorithms alter Broyden's single-parameter family of approximations to the inverse Hessian to a double-parameter family. Conditions are given on the new parameter to minimize a bound on the condition number of the approximated inverse Hessian while ensuring improved step-wise convergence. Davidon has devised an update which also minimizes the bound on the condition number while remaining in the Broyden single-parameter family. This paper derives initial scalings for the approximate inverse Hessian which make members of the Broyden class self-scaling. The Davidon, BFGS, and Oren-Spedicato updates are tested for computational efficiency and stability on numerous test functions, with the results indicating strong computational superiority for the Davidon and BFGS updates over the self-scaling update, except on a special class of functions, the homogeneous functions.
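
The best-known scaling from this line of work rescales the initial inverse-Hessian approximation once, before the first update; in the notation of the other abstracts on this page (s the first step, y the first gradient change), the commonly cited form is

$$
H_0 \;\leftarrow\; \frac{y^{T} s}{y^{T} y}\, I,
$$

after which ordinary Broyden-class (for example BFGS) updates are applied. This is the standard textbook statement usually attributed to Shanno and Phua; the paper's own derivation and the variants it tests are more general.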


Informs Journal on Computing | 1989

An Implementation of a Primal-Dual Interior Point Method for Linear Programming

Kevin A. McShane; Clyde L. Monma; David F. Shanno

The purpose of this paper is to describe in detail an implementation of a primal-dual interior point method for solving linear programming problems. Preliminary computational results indicate that this implementation compares favorably with a comparable implementation of a dual affine interior point method, and with MINOS 5.0, a state-of-the-art implementation of the simplex method.


Informs Journal on Computing | 1994

Feature Article—Interior Point Methods for Linear Programming: Computational State of the Art

Irvin J. Lustig; Roy E. Marsten; David F. Shanno

A survey of the significant developments in the field of interior point methods for linear programming is presented, beginning with Karmarkar's projective algorithm and concentrating on the many variants that can be derived from logarithmic barrier methods. Full implementation details of the primal-dual predictor-corrector code OB1 are given, including preprocessing, matrix orderings, and matrix factorization techniques. A computational comparison of OB1 with a state-of-the-art simplex code using eight large models is given. In addition, computational results are presented where OB1 is used to solve two very large models that have never been solved by any simplex code.

Collaboration


Dive into David F. Shanno's collaborations.

Top Co-Authors

K. H. Phua

Nanyang Technological University

David M. Rocke

University of California
