Publication


Featured research published by Richard J. Hanson.


Mathematics of Computation | 1995

Solving least squares problems

Charles L. Lawson; Richard J. Hanson

Since the lm function provides a lot of features, it is rather complicated, so we are instead going to use the function lsfit as a model. It computes only the coefficient estimates and the residuals. Now would be a good time to read the help file for lsfit. Note that lsfit supports the fitting of multiple least squares models and weighted least squares. Our function will not, so we can omit the arguments wt, weights, and yname. Also, changing tolerances is a little advanced, so we will trust the default values and omit the argument tolerance as well.
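As a rough sketch of the kind of stripped-down fitter described above, here is a minimal example in Python rather than R, assuming only numpy is available; the function name ls_fit and its interface are hypothetical and are not part of lsfit or of the book.

import numpy as np

def ls_fit(x, y, intercept=True):
    """Minimal least squares fit: returns only coefficients and residuals,
    analogous to the stripped-down lsfit-style function described above."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    if intercept:
        x = np.column_stack([np.ones(len(x)), x])  # prepend a column of ones
    # Solve min ||x @ beta - y||_2 using the default rank tolerance
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    residuals = y - x @ beta
    return beta, residuals

# Example usage on a small synthetic data set
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.01 * rng.normal(size=20)
coef, resid = ls_fit(X, y)
print(coef)          # roughly [1.0, 2.0, -0.5]
print(resid.shape)   # (20,)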


ACM Transactions on Mathematical Software | 1979

Basic Linear Algebra Subprograms for Fortran Usage

Charles L. Lawson; Richard J. Hanson; David R. Kincaid; Fred T. Krogh

A package of 38 low level subprograms for many of the basic operations of numerical linear algebra is presented. The package is intended to be used with FORTRAN. The operations in the package are dot products, elementary vector operations, Givens transformations, vector copy and swap, vector norms, vector scaling, and the indices of components of largest magnitude. The subprograms and a test driver are available in portable FORTRAN. Versions of the subprograms are also provided in assembly language for the IBM 360/67, the CDC 6600 and CDC 7600, and the Univac 1108.
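For readers who want to experiment with these Level 1 operations outside of FORTRAN, SciPy exposes low-level wrappers around the same routines. The snippet below is a brief, hedged illustration assuming numpy and scipy are installed; it is not part of the original package.

import numpy as np
from scipy.linalg import blas

x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 2.0, 2.0])

dot = blas.ddot(x, y)                 # dot product of x and y
z = blas.daxpy(x, y.copy(), a=2.0)    # elementary vector operation 2*x + y
nrm = blas.dnrm2(x)                   # Euclidean vector norm of x
scaled = blas.dscal(0.5, x.copy())    # vector scaling, 0.5*x
imax = blas.idamax(x)                 # index of the component of largest magnitude
c, s = blas.drotg(3.0, 4.0)           # Givens rotation zeroing the second entry of (3, 4)

print(dot, nrm, imax)
print(z, scaled, c, s)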


ACM Transactions on Mathematical Software | 1988

An extended set of FORTRAN basic linear algebra subprograms

Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Richard J. Hanson

This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions are targeted at matrix-vector operations that should provide for efficient and portable implementations of algorithms for high-performance computers.
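The matrix-vector kernels targeted by this extension (the Level 2 BLAS) are likewise reachable from Python. Below is a small illustrative example of DGEMV, assuming scipy is available, with the result checked against a plain NumPy computation; it is not part of the paper.

import numpy as np
from scipy.linalg import blas

A = np.asfortranarray(np.arange(6.0).reshape(3, 2))   # 3x2 matrix in Fortran order
x = np.array([1.0, -1.0])
y = np.array([10.0, 10.0, 10.0])

expected = 2.0 * A @ x + 1.0 * y      # plain NumPy reference
# DGEMV computes y := alpha*A*x + beta*y, a core Level 2 operation
result = blas.dgemv(alpha=2.0, a=A, x=x, beta=1.0, y=y)
print(result)
print(np.allclose(result, expected))  # True: the BLAS call matches the reference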


ACM Transactions on Mathematical Software | 1988

Algorithm 656: an extended set of basic linear algebra subprograms: model implementation and test programs

Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Richard J. Hanson

This paper describes a model implementation and test software for the Level 2 Basic Linear Algebra Subprograms (Level 2 BLAS). Level 2 BLAS are targeted at matrix-vector operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers. The model implementation provides a portable set of FORTRAN 77 Level 2 BLAS for machines where specialized implementations do not exist or are not required. The test software aims to verify that specialized implementations meet the specification of Level 2 BLAS and that implementations are correctly installed.
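In the spirit of the test software described above, a tiny verification loop might compare an installed DGEMV against a straightforward reference computation over random inputs. This is only a toy analogue, assuming numpy and scipy are available; it is not the published test programs.

import numpy as np
from scipy.linalg import blas

# Toy check: compare DGEMV against a straightforward reference computation.
rng = np.random.default_rng(1)
for trial in range(100):
    m, n = rng.integers(1, 8, size=2)
    A = np.asfortranarray(rng.normal(size=(m, n)))
    x = rng.normal(size=n)
    y = rng.normal(size=m)
    alpha, beta = rng.normal(size=2)
    reference = alpha * A @ x + beta * y
    result = blas.dgemv(alpha=alpha, a=A, x=x, beta=beta, y=y)
    assert np.allclose(result, reference), f"mismatch in trial {trial}"
print("all DGEMV trials agree with the reference computation")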


Mathematical Programming | 1981

An algorithm for linear least squares problems with equality and nonnegativity constraints

Karen Haskell; Richard J. Hanson

We present a new algorithm for solving a linear least squares problem with linear constraints. These are equality constraint equations and nonnegativity constraints on selected variables. This problem, while appearing to be quite special, is the core problem arising in the solution of the general linearly constrained linear least squares problem. The reduction process of the general problem to the core problem can be done in many ways. We discuss three such techniques.

The method employed for solving the core problem is based on combining the equality constraints with differentially weighted least squares equations to form an augmented least squares system. This weighted least squares system, which is equivalent to a penalty function method, is solved with nonnegativity constraints on selected variables.

Three types of examples are presented that illustrate applications of the algorithm. The first is rank deficient, constrained least squares curve fitting. The second is concerned with solving linear systems of algebraic equations with Hilbert matrices and bounds on the variables. The third illustrates a constrained curve fitting problem with inconsistent inequality constraints.
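The weighting idea can be sketched in a few lines. The example below illustrates the penalty reformulation only and is not the authors' algorithm; it uses SciPy's general bounded least squares solver (assumed available) in place of the specialized core solver, and the small matrices are made up for the example.

import numpy as np
from scipy.optimize import lsq_linear

# Equality constraints E x = f and least squares equations A x ~= b,
# with the last variables required to be nonnegative.
E = np.array([[1.0, 1.0, 1.0]])
f = np.array([1.0])
A = np.eye(3)
b = np.array([0.8, 0.5, -0.3])

wt = 1e6  # heavy weight (penalty parameter) on the equality rows
A_aug = np.vstack([wt * E, A])
b_aug = np.concatenate([wt * f, b])

# Require the last two components to be nonnegative; the first is free.
lower = np.array([-np.inf, 0.0, 0.0])
upper = np.full(3, np.inf)
res = lsq_linear(A_aug, b_aug, bounds=(lower, upper))
print(res.x)              # approximate solution
print(E @ res.x - f)      # equality constraint nearly satisfied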


ACM Transactions on Mathematical Software | 1982

Algorithm 587: Two Algorithms for the Linearly Constrained Least Squares Problem

Richard J. Hanson; Karen Haskell

This subprogram solves a linearly constrained least squares problem. Suppose there are given matrices E and A of respective dimensions ME by N and MA by N, and vectors F and B of respective lengths ME and MA. The subroutine solves the problem

E x = F (equations to be exactly satisfied)
A x ≅ B (equations to be approximately satisfied, in the least squares sense)
subject to components L+1, ..., N being nonnegative.

Any values ME >= 0, MA >= 0, and 0 <= L <= N are permitted. The problem is reposed as problem WNNLS:

(WT*E) x ≅ (WT*F)
(  A  )     (  B  )   (least squares)
subject to components L+1, ..., N being nonnegative.

The subprogram chooses the heavy weight (or penalty parameter) WT.


ACM Signum Newsletter | 1985

A proposal for an extended set of Fortran Basic Linear Algebra Subprograms

Jack J. Dongarra; Jeremy Du Croz; Sven Hammarling; Richard J. Hanson

This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions proposed are targeted at matrix-vector operations, which should provide for more efficient and portable implementations of algorithms for high-performance computers.


conference on high performance computing (supercomputing) | 2003

Automatic Type-Driven Library Generation for Telescoping Languages

Arun Chauhan; Cheryl McCosh; Ken Kennedy; Richard J. Hanson

Telescoping languages is a strategy to automatically generate highly-optimized domain-specific libraries. The key idea is to create specialized variants of library procedures through extensive offline processing. This paper describes a telescoping system, called ARGen, which generates high-performance Fortran or C libraries from prototype Matlab code for the linear algebra library, ARPACK. ARGen uses variable types to guide procedure specializations on possible calling contexts. ARGen needs to infer Matlab types in order to speculate on the possible variants of library procedures, as well as to generate code. This paper shows that our type-inference system is powerful enough to generate all the variants needed for ARPACK automatically from the Matlab development code. The ideas demonstrated here provide a basis for building a more general telescoping system for Matlab.
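As a purely illustrative, much-simplified analogue of type-driven specialization, one can pre-generate a variant of a procedure for each anticipated argument type and dispatch on the runtime type. The toy below is written in Python and is not how ARGen works internally; all names in it are invented for the example.

import numpy as np

def make_scale_variant(elem_type):
    """Offline step: build a variant of 'scale' specialized to elem_type."""
    if elem_type is complex:
        def scale(alpha, v):
            # complex-specialized path
            return np.asarray(v, dtype=complex) * complex(alpha)
    else:
        def scale(alpha, v):
            # real-specialized path avoids complex arithmetic entirely
            return np.asarray(v, dtype=float) * float(alpha)
    return scale

# "Library generation": variants produced ahead of time for known calling contexts.
SPECIALIZED = {float: make_scale_variant(float),
               complex: make_scale_variant(complex)}

def scale(alpha, v):
    """Thin dispatcher choosing the pre-generated variant by argument type."""
    variant = SPECIALIZED.get(type(alpha), SPECIALIZED[float])
    return variant(alpha, v)

print(scale(2.0, [1.0, 2.0]))        # uses the real-specialized variant
print(scale(1 + 1j, [1.0, 2.0]))     # uses the complex-specialized variant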


ACM Transactions on Mathematical Software | 1992

A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints

Richard J. Hanson; Fred T. Krogh

A new algorithm is presented for solving nonlinear least-squares and nonlinear equation problems. The algorithm is based on approximating the nonlinear functions using the quadratic-tensor model proposed by Schnabel and Frank. The problem statement may include simple bounds or more general linear constraints on the unknowns. The algorithm uses a trust-region defined by a box containing the current values of the unknowns. The objective function (Euclidean length of the functions) is allowed to increase at intermediate steps. These increases are allowed as long as our predictor indicates that a new set of best values exists in the trust-region. There is logic provided to retreat to the current best values, should that be required. The computations for the model-problem require a constrained nonlinear least-squares solver. This is done using a simpler version of the algorithm. In its present form the algorithm is effective for problems with linear constraints and dense Jacobian matrices. Results on standard test problems are presented in the Appendix. The new algorithm appears to be efficient in terms of function and Jacobian evaluations.
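The problem class addressed here, nonlinear least squares with simple bounds handled by a trust-region strategy, can be posed with off-the-shelf tools. The sketch below uses SciPy's bounded trust-region solver purely for illustration; it is not the quadratic-tensor algorithm of the paper, and the data are synthetic.

import numpy as np
from scipy.optimize import least_squares

# Fit y ~= c0 * exp(c1 * t) with the simple bounds c0 >= 0 and c1 <= 0.
t = np.linspace(0.0, 4.0, 25)
y = 3.0 * np.exp(-1.2 * t) + 0.01 * np.sin(7 * t)   # synthetic data

def residuals(c):
    return c[0] * np.exp(c[1] * t) - y

# Trust-region reflective method with simple bounds on the unknowns.
result = least_squares(residuals, x0=[1.0, -0.1],
                       bounds=([0.0, -np.inf], [np.inf, 0.0]),
                       method="trf")
print(result.x)      # roughly [3.0, -1.2]
print(result.cost)   # final sum of squares / 2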


ACM Transactions on Mathematical Software | 1987

Algorithm 653: Translation of algorithm 539: PC-BLAS, basic linear algebra subprograms for FORTRAN usage with the INTEL 8087, 80287 numeric data processor

Richard J. Hanson; Fred T. Krogh

The Basic Linear Algebra Subprograms (BLAS) are described in [1]. The particular implementation documented here is intended for any of the FORTRAN compilers [2-4] that run on MS-DOS and PC-DOS operating systems. Source code is provided for an Assembly language implementation of these subprograms, which are designed so that the computation is independent of the interface with the calling program unit. In fact, each of the compilers has a different method of passing pointers to input argument lists and returning results for functions. The independence of the mathematical operations from the particulars of the compiler was achieved by the judicious use of Assembly language macros. We believe that it is now a relatively easy job to write these macros for a FORTRAN compiler that is not on our list. The Assembly language versions of the PC-BLAS are generally more efficient when used in applications than are the FORTRAN versions. (See Appendix B for a brief rationale based on efficiency.) Usage of this code requires that the machine have an 8087 or 80287 Numeric Data Processor. The Assembly code for this translation can be assembled using the product of [5]. That product must be acquired by the reader separately; it is not included here.

Collaboration


Dive into Richard J. Hanson's collaborations.

Top Co-Authors

Fred T. Krogh, Jet Propulsion Laboratory

Sven Hammarling, Numerical Algorithms Group

Jeremy Du Croz, Numerical Algorithms Group

David R. Kincaid, University of Texas at Austin

John A. Wisniewski, Sandia National Laboratories