Robert B. Schnabel
University of Colorado Boulder
Publication
Featured research published by Robert B. Schnabel.
Mathematical Programming | 1994
Richard H. Byrd; Jorge Nocedal; Robert B. Schnabel
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations.
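The compact representation expresses the BFGS matrix after k updates as B_k = B0 - W M^{-1} W^T, where W = [B0 S, Y] collects the update pairs and M is a small 2k-by-2k matrix. A minimal sketch (function names are my own, not the paper's) that builds the matrix both ways, via the compact form and via k ordinary BFGS updates:

```python
import numpy as np

def bfgs_compact(B0, S, Y):
    """Compact form of the BFGS matrix after k updates:
        B_k = B0 - W M^{-1} W^T,  W = [B0 S, Y],
        M = [[S^T B0 S, L], [L^T, -D]],
    where L is the strictly lower triangle of S^T Y and D its diagonal.
    S and Y hold the step and gradient-difference vectors as columns."""
    StY = S.T @ Y
    L = np.tril(StY, k=-1)
    D = np.diag(np.diag(StY))
    W = np.hstack([B0 @ S, Y])
    M = np.block([[S.T @ B0 @ S, L], [L.T, -D]])
    return B0 - W @ np.linalg.solve(M, W.T)

def bfgs_recursive(B0, S, Y):
    """Reference: apply the standard BFGS update once per column pair."""
    B = B0.copy()
    for i in range(S.shape[1]):
        s, y = S[:, i], Y[:, i]
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    return B
```

The point of the compact form is that W and M are small when k is small, so products and projections with B_k cost O(nk) rather than O(n^2).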
Mathematical Programming | 1988
Richard H. Byrd; Robert B. Schnabel; Gerald A. Shultz
The trust region problem, minimization of a quadratic function subject to a spherical trust region constraint, occurs in many optimization algorithms. In a previous paper, the authors introduced an inexpensive approximate solution technique for this problem that involves the solution of a two-dimensional trust region problem. They showed that using this approximation in an unconstrained optimization algorithm leads to the same theoretical global and local convergence properties as are obtained using the exact solution to the trust region problem. This paper reports computational results showing that the two-dimensional minimization approach gives nearly optimal reductions in the n-dimensional quadratic model over a wide range of test cases. We also show that there is very little difference, in efficiency and reliability, between using the approximate or exact trust region step in solving standard test problems for unconstrained optimization. These results may encourage the application of similar approximate trust region techniques in other contexts.
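The two-dimensional idea can be sketched as follows: restrict the quadratic model to the plane spanned by the gradient and the Newton direction, then solve the tiny 2-D trust region problem there. This is an illustrative sketch under my own simplifications (grid search on the boundary), not the authors' exact algorithm:

```python
import numpy as np

def two_dim_trust_region_step(g, H, delta, n_grid=2000):
    """Minimize g@p + 0.5*p@H@p over ||p|| <= delta, restricted to
    span{g, H^{-1} g}.  Illustrative sketch: the 2-D boundary problem is
    handled by a dense angular grid rather than an exact solver."""
    newton = np.linalg.solve(H, -g)                 # assumes H nonsingular
    Q, _ = np.linalg.qr(np.column_stack([g, newton]))  # orthonormal basis
    g2, H2 = Q.T @ g, Q.T @ H @ Q                   # reduced 2-D model
    candidates = []
    # interior candidate: unconstrained minimizer of the reduced model
    if np.all(np.linalg.eigvalsh(H2) > 0):
        p2 = np.linalg.solve(H2, -g2)
        if np.linalg.norm(p2) <= delta:
            candidates.append(p2)
    # boundary candidates: sample the circle ||p2|| = delta
    theta = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    P = delta * np.vstack([np.cos(theta), np.sin(theta)])
    vals = g2 @ P + 0.5 * np.sum(P * (H2 @ P), axis=0)
    candidates.append(P[:, np.argmin(vals)])
    model = lambda p2: g2 @ p2 + 0.5 * p2 @ H2 @ p2
    return Q @ min(candidates, key=model)           # lift back to R^n
```

When the trust region is large enough, the step reduces to the full Newton step; when it is small, the step is never worse than the steepest-descent (Cauchy) step, which is what makes the convergence theory go through.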
SIAM Journal on Numerical Analysis | 1987
Richard H. Byrd; Robert B. Schnabel; Gerald A. Shultz
We present a trust region-based method for the general nonlinearly equality constrained optimization problem. The method works by iteratively minimizing a quadratic model of the Lagrangian subject ...
Technometrics | 1987
Janet R. Donaldson; Robert B. Schnabel
We present the results of a Monte Carlo study of the leading methods for constructing approximate confidence regions and confidence intervals for parameters estimated by nonlinear least squares. We examine three variants of the linearization method, the likelihood method, and the lack-of-fit method. The linearization method is computationally inexpensive, produces easily understandable results, and is widely used in practice. The likelihood and lack-of-fit methods are much more expensive and more difficult to report. In our tests, both the likelihood and lack-of-fit methods perform very reliably. All three variants of the linearization method, however, often grossly underestimate confidence regions and sometimes significantly underestimate confidence intervals. The linearization method variant based solely on the Jacobian matrix appears preferable to the two variants that use the full Hessian matrix because it is less expensive, more numerically stable, and at least as accurate. The Bates and Watts curvat...
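The Jacobian-only linearization variant favored above amounts to a Gauss-Newton fit followed by standard errors from cov ≈ s²(JᵀJ)⁻¹. A minimal sketch with a hypothetical model f(x; a, b) = a·exp(-b·x) (the model, data, and function names are my own, not those of the study):

```python
import numpy as np

def fit_and_linearization_se(x, y, theta0, n_iter=50):
    """Gauss-Newton fit of the hypothetical model a*exp(-b*x), then
    linearization standard errors from the Jacobian alone:
    cov ~= s^2 (J^T J)^{-1}."""
    def jac_and_resid(theta):
        a, b = theta
        e = np.exp(-b * x)
        J = np.column_stack([e, -a * x * e])   # d f / d a, d f / d b
        return J, y - a * e
    theta = np.asarray(theta0, float).copy()
    for _ in range(n_iter):
        J, r = jac_and_resid(theta)
        theta += np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton step
    J, r = jac_and_resid(theta)
    s2 = (r @ r) / (len(x) - len(theta))             # residual variance
    cov = s2 * np.linalg.inv(J.T @ J)                # linearization covariance
    return theta, np.sqrt(np.diag(cov))
```

An approximate confidence interval for each parameter is then theta_i ± t·se_i with a t quantile; per the study's findings, such intervals can be optimistic, so they should be read as a cheap first estimate rather than an exact region.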
SIAM Journal on Numerical Analysis | 1985
Gerald A. Shultz; Robert B. Schnabel; Richard H. Byrd
This paper has two aims: to exhibit very general conditions under which members of a broad class of unconstrained minimization algorithms are globally convergent in a strong sense, and to propose several new algorithms that use second derivative information and achieve such convergence. In the first part of the paper we present a general trust-region-based algorithm schema that includes an undefined step selection strategy. We give general conditions on this step selection strategy under which limit points of the algorithm will satisfy first and second order necessary conditions for unconstrained minimization. Our algorithm schema is sufficiently broad to include line search algorithms as well. Next, we show that a wide range of step selection strategies satisfy the requirements of our convergence theory. This leads us to propose several new algorithms that use second derivative information and achieve strong global convergence, including an indefinite line search algorithm, several indefinite dogleg algo...
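Conditions on the step selection strategy of the kind described are typically stated relative to the Cauchy point, the minimizer of the model along the steepest-descent direction within the trust region. A textbook sketch of that reference step and its guaranteed model decrease (this is the standard formulation, not the paper's general schema):

```python
import numpy as np

def cauchy_point(g, H, delta):
    """Minimizer of the quadratic model m(p) = g@p + 0.5*p@H@p along -g,
    subject to ||p|| <= delta (the classical Cauchy point)."""
    gnorm = np.linalg.norm(g)
    gHg = g @ H @ g
    # negative curvature along -g: go all the way to the boundary
    tau = 1.0 if gHg <= 0 else min(1.0, gnorm ** 3 / (delta * gHg))
    return -tau * (delta / gnorm) * g

def model_decrease(g, H, p):
    """Decrease of the quadratic model achieved by step p: m(0) - m(p)."""
    return -(g @ p + 0.5 * p @ H @ p)
```

A step selection strategy that achieves a fixed fraction of this decrease at every iteration satisfies the usual first-order global convergence requirement; the classical bound is m(0) - m(p_c) >= 0.5*||g||*min(delta, ||g||/||H||).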
ACM Transactions on Mathematical Software | 1985
Robert B. Schnabel; John E. Koontz; Barry Weiss
We describe a new package, UNCMIN, for finding a local minimizer of a real valued function of more than one variable. The novel feature of UNCMIN is that it is a modular system of algorithms, containing three different step selection strategies (line search, dogleg, and optimal step) that may be combined with either analytic or finite difference gradient evaluation and with either analytic, finite difference, or BFGS Hessian approximation. We present the results of a comparison of the three step selection strategies on the problems of Moré, Garbow, and Hillstrom in two separate cases: using finite difference gradients and Hessians, and using finite difference gradients with BFGS Hessian approximations. We also describe a second package, REVMIN, that uses optimization algorithms identical to UNCMIN but obtains values of user-supplied functions by reverse communication.
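The modular design described above, interchangeable step strategies combined with interchangeable derivative evaluation, can be illustrated with a toy driver. This is my own sketch of the design idea, not UNCMIN's code or interface:

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient: one of several pluggable gradient
    evaluators (an analytic gradient could be substituted)."""
    g = np.empty_like(x)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def backtracking_step(f, x, g):
    """One interchangeable step strategy: Armijo backtracking line search
    along -g (a dogleg strategy could be plugged in instead)."""
    t = 1.0
    while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):
        t *= 0.5
    return x - t * g

def minimize(f, x0, step=backtracking_step, tol=1e-5, max_iter=500):
    """Modular driver: the gradient evaluator and step strategy are
    independent components, mirroring the package's design."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        g = fd_gradient(f, x)
        if np.linalg.norm(g) < tol:
            break
        x = step(f, x, g)
    return x
```

The design choice being illustrated: because each strategy shares one calling convention, any step strategy can be benchmarked against the others on the same problems without touching the driver.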
ACM Transactions on Mathematical Software | 1989
Paul T. Boggs; Janet R. Donaldson; Richard H. Byrd; Robert B. Schnabel
In this paper, we describe ODRPACK, a software package for the weighted orthogonal distance regression problem. This software is an implementation of the algorithm described in [2] for finding the parameters that minimize the sum of the squared weighted orthogonal distances from a set of observations to a curve or surface determined by the parameters. It can also be used to solve the ordinary nonlinear least squares problem. The weighted orthogonal distance regression procedure has applications in curve and surface fitting and in measurement error models in statistics. The algorithm implemented is an efficient and stable trust region (Levenberg-Marquardt) procedure that exploits the structure of the problem so that the computational cost per iteration is equal to that for the same type of algorithm applied to the ordinary nonlinear least squares problem. The package allows a general weighting scheme, provides for finite difference derivatives, and contains extensive error checking and report generating facilities.
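The simplest instance of orthogonal distance regression is fitting a straight line by minimizing squared perpendicular (rather than vertical) distances, which has a closed form via the SVD. This toy case illustrates what ODRPACK minimizes but not its trust-region algorithm for general nonlinear models:

```python
import numpy as np

def tls_line(x, y):
    """Total-least-squares line fit: minimize the sum of squared
    *orthogonal* distances.  The optimal line passes through the centroid
    along the leading principal direction of the centered data."""
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    d = Vt[0]                       # unit direction of the fitted line
    slope = d[1] / d[0]             # assumes the line is not vertical
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

Unlike ordinary least squares, this fit treats errors in x and y symmetrically, which is why orthogonal distance regression is the natural tool for measurement error models.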
SIAM Review | 1978
John E. Dennis; Robert B. Schnabel
In many problems involving the solution of a system of nonlinear equations, it is necessary to keep an approximation to the Jacobian matrix which is updated at each iteration. Computational experience indicates that the best updates are those that minimize some reasonable measure of the change to the current Jacobian approximation subject to the new approximation obeying a secant condition and perhaps some other approximation properties such as symmetry. In this paper we extend the affine case of a theorem of Cheney and Goldstein on proximity maps of convex sets to show that a generalization of the symmetrization technique of Powell always generates least change updates. This generalization has such broad applicability that we obtain an easy unified derivation of all the most successful updates. Furthermore, our techniques apply to interesting new cases such as when the secant condition might be inconsistent with some essential approximation property like sparsity. We also offer advice on how to choose the properties which are to be incorporated into the approximations and how to choose the measure of changes to be minimized.
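The prototypical least change update is Broyden's: among all matrices satisfying the secant condition B_new s = y, it is the one closest to the current approximation in the Frobenius norm. A short sketch that states the update and numerically checks the least-change property (variable names are my own):

```python
import numpy as np

def broyden_update(B, s, y):
    """Broyden's update: the minimal Frobenius-norm change to B subject to
    the secant condition B_new @ s = y."""
    return B + np.outer(y - B @ s, s) / (s @ s)
```

Any other secant-satisfying matrix can be written as the Broyden result plus a perturbation vanishing on s, and that perturbation is Frobenius-orthogonal to the Broyden change, which is exactly the least-change geometry the paper generalizes.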
SIAM Journal on Scientific and Statistical Computing | 1990
Robert B. Schnabel; Elizabeth Eskow
The modified Cholesky factorization of Gill and Murray plays an important role in optimization algorithms. Given a symmetric but not necessarily positive-definite matrix A, it computes a Cholesky factorization of A + E ...
SIAM Journal on Numerical Analysis | 1984
Robert B. Schnabel; Paul D. Frank
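As a toy illustration of the A + E idea from the modified Cholesky abstract above: a deliberately simplified stand-in that adds a multiple of the identity until the ordinary Cholesky factorization succeeds. This is not the Gill-Murray or Schnabel-Eskow algorithm, which choose E far more carefully:

```python
import numpy as np

def modified_cholesky(A, beta=1e-3):
    """Simplified modified-Cholesky stand-in: find tau >= 0, doubling it
    until Cholesky succeeds, so that L @ L.T = A + E with E = tau*I.
    (Illustrative only; real modified Cholesky methods bound E tightly.)"""
    tau = 0.0 if np.min(np.diag(A)) > 0 else beta - np.min(np.diag(A))
    while True:
        try:
            L = np.linalg.cholesky(A + tau * np.eye(len(A)))
            return L, tau
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)
```

For a matrix that is already positive definite, tau stays zero and the factorization is the ordinary Cholesky factorization of A itself.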