Paul D. Hovland
Argonne National Laboratory
Publication
Featured research published by Paul D. Hovland.
Scientific Programming | 1992
Christian H. Bischof; Alan Carle; G. Corliss; Andreas Griewank; Paul D. Hovland
The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of a function f : R^n → R^m. Both the accuracy and the computational requirements of the derivative computation are usually of critical importance for the robustness and speed of the numerical solution. Automatic Differentiation of FORtran (ADIFOR) is a source transformation tool that accepts Fortran 77 code for the computation of a function and writes portable Fortran 77 code for the computation of the derivatives. In contrast to previous approaches, ADIFOR views automatic differentiation as a source transformation problem. ADIFOR employs the data analysis capabilities of the ParaScope Parallel Programming Environment, which enable us to handle arbitrary Fortran 77 codes and to exploit the computational context in the computation of derivatives. Experimental results show that ADIFOR can handle real-life codes and that ADIFOR-generated codes are competitive with divided-difference approximations of derivatives. In addition, studies suggest that the source transformation approach to automatic differentiation may improve the time to compute derivatives by orders of magnitude.
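The contrast between automatic differentiation and divided differences is easy to see in a small example. The following is a minimal sketch, assuming a dual-number (operator-overloading) formulation rather than ADIFOR's Fortran source transformation; the function f and step size h are invented for the demonstration.

```python
# Illustrative sketch (not ADIFOR itself): forward-mode automatic
# differentiation via dual numbers, compared against a divided-difference
# approximation. The function f and step size h are assumptions for this demo.
import math

class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def sin(x):
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
    return math.sin(x)

def f(x, y):
    # A small f : R^2 -> R standing in for a user's Fortran routine.
    return x * y + sin(x)

x0, y0 = 1.5, 2.0

# Forward mode: seed x with derivative 1 to obtain df/dx exactly.
ad = f(Dual(x0, 1.0), Dual(y0, 0.0)).dot

# Divided difference: accuracy depends on the step size h.
h = 1e-6
dd = (f(x0 + h, y0) - f(x0, y0)) / h

print(f"df/dx  AD: {ad:.12f}")                    # exact to machine precision
print(f"df/dx  DD: {dd:.12f}")                    # carries truncation error
print(f"analytic : {y0 + math.cos(x0):.12f}")
```

Forward mode recovers y + cos(x) to machine precision, while the divided-difference result depends on the choice of h, which is the trade-off the experimental comparison in the paper addresses.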
Archive | 2008
Christian H. Bischof; H. Martin Bücker; Paul D. Hovland; Uwe Naumann; Jean Utke
This collection covers advances in automatic differentiation theory and practice. Computer scientists and mathematicians will learn about recent developments in automatic differentiation theory as well as mechanisms for the construction of robust and powerful automatic differentiation tools. Computational scientists and engineers will benefit from the discussion of various applications, which provide insight into effective strategies for using automatic differentiation for inverse problems and design optimization.
international conference on parallel processing | 2006
Michelle Mills Strout; Barbara Kreaseck; Paul D. Hovland
Message passing via MPI is widely used in single-program, multiple-data (SPMD) parallel programs. Existing data-flow frameworks do not model the semantics of message-passing SPMD programs, which can result in less precise and even incorrect analysis results. We present a data-flow analysis framework for performing interprocedural analysis of message-passing SPMD programs. The framework is based on the MPI-ICFG representation, which is an interprocedural control-flow graph (ICFG) augmented with communication edges between possible send and receive pairs and partial context sensitivity. We show how to formulate nonseparable data-flow analyses within our framework using reaching constants as a canonical example. We also formulate and provide experimental results for the nonseparable analysis, activity analysis. Activity analysis is a domain-specific analysis used to reduce the computation and storage requirements for automatically differentiated MPI programs. Automatic differentiation is important for application domains such as climate modeling, electronic device simulation, oil reservoir simulation, medical treatment planning and computational economics, to name a few. Our experimental results show that using the MPI-ICFG data-flow analysis framework improves the precision of activity analysis and as a result significantly reduces memory requirements for the automatically differentiated versions of a set of parallel benchmarks, including some of the NAS parallel benchmarks.
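A toy version of the central idea, reaching constants propagated across communication edges, fits in a few lines. The sketch below assumes a hand-built four-node graph and a simplified constant lattice; it illustrates the style of analysis, not the authors' MPI-ICFG implementation.

```python
# Toy illustration of the idea behind the MPI-ICFG framework (not the
# authors' implementation): reaching constants propagated over a control-
# flow graph that also has a communication edge from a send to its
# matching receive. Node names and the tiny program are assumptions.
TOP, BOTTOM = "unknown", "not-a-constant"

def meet(a, b):
    if a == TOP: return b
    if b == TOP: return a
    return a if a == b else BOTTOM

# Each node: (transfer function, predecessors). The edge from p0_send to
# p1_recv models communication, so the constant can cross "processes".
nodes = {
    "p0_def":  (lambda env: {**env, "x": 42}, []),
    "p0_send": (lambda env: env, ["p0_def"]),
    "p1_recv": (lambda env: {**env, "y": env.get("x", TOP)}, ["p0_send"]),
    "p1_use":  (lambda env: env, ["p1_recv"]),
}

# Worklist iteration to a fixed point.
state = {n: {} for n in nodes}
work = list(nodes)
while work:
    n = work.pop()
    transfer, preds = nodes[n]
    inp = {}
    for p in preds:
        for var, val in state[p].items():
            inp[var] = meet(inp.get(var, TOP), val)
    out = transfer(inp)
    if out != state[n]:
        state[n] = out
        work.extend(s for s, (_, ps) in nodes.items() if n in ps)

print(state["p1_use"])  # {'x': 42, 'y': 42}: the constant crosses the edge
```

Without the communication edge, the analysis would have to treat y as unknown at p1_use; modeling send/receive pairs is exactly what recovers the constant, and the same mechanism sharpens activity analysis.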
Proceedings of the 2004 workshop on Memory system performance | 2004
Michelle Mills Strout; Paul D. Hovland
Irregular applications frequently exhibit poor performance on contemporary computer architectures, in large part because of their inefficient use of the memory hierarchy. Run-time data- and iteration-reordering transformations have been shown to improve the locality and therefore the performance of irregular benchmarks. This paper describes models for determining which combination of run-time data- and iteration-reordering heuristics will result in the best performance for a given dataset. We propose that the data- and iteration-reordering transformations be viewed as approximating minimal linear arrangements on two separate hypergraphs: a spatial locality hypergraph and a temporal locality hypergraph. Our results measure the efficacy of locality metrics based on these hypergraphs in guiding the selection of data- and iteration-reordering heuristics. We also introduce new iteration- and data-reordering heuristics based on the hypergraph models that result in better performance than do previous heuristics.
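A drastically simplified version of the modeling idea: score a data layout by a linear-arrangement-style cost over the access stream, then compare layouts produced by different reordering heuristics. Everything below (the index stream, the cost function, the first-touch heuristic) is invented for illustration and is far cruder than the hypergraph metrics in the paper.

```python
# Simplified sketch (assumed details, not the paper's models): score a
# layout by the sum of distances between successive accesses, then apply
# a first-touch data reordering and compare costs.
def arrangement_cost(stream, position):
    # Crude stand-in for a spatial-locality metric: total distance between
    # the memory positions of successive accesses.
    return sum(abs(position[a] - position[b]) for a, b in zip(stream, stream[1:]))

def first_touch_order(stream, n):
    # Pack data in the order it is first accessed; unreferenced items last.
    order, seen = [], set()
    for a in stream:
        if a not in seen:
            seen.add(a)
            order.append(a)
    order += [i for i in range(n) if i not in seen]
    return {item: pos for pos, item in enumerate(order)}

n = 8
stream = [7, 2, 7, 5, 2, 0, 5, 7, 3, 0]      # irregular index array
identity = {i: i for i in range(n)}

print("original layout cost :", arrangement_cost(stream, identity))       # 31
print("first-touch layout   :", arrangement_cost(stream, first_touch_order(stream, n)))  # 15
```

On this stream the first-touch packing roughly halves the arrangement cost; a signal of this kind is what a model can use to choose among heuristics for a given dataset.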
Archive | 1995
C. Bischof; P. Khademi; A. Mauer; Paul D. Hovland; Alan Carle
Automatic differentiation is a technique for computing the derivatives of functions described by computer programs. ADIFOR implements automatic differentiation by transforming a collection of FORTRAN 77 subroutines that compute a function f into new FORTRAN 77 subroutines that compute the derivatives of the outputs of f with respect to a specified set of inputs of f. This guide describes step by step how to use version 2.0 of ADIFOR to generate derivative code. Familiarity with UNIX and FORTRAN 77 is assumed.
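The flavor of the transformation is easy to sketch. The fragment below is written in Python rather than FORTRAN 77 and is not ADIFOR's actual output; the g_ naming and the seeding convention are assumptions made to show how generated derivative code shadows each assignment with a chain-rule statement.

```python
# Schematic of source-transformation AD, in Python for readability.
# This is NOT ADIFOR's generated code; g_f and the seed convention
# are assumptions for illustration only.
def f(x1, x2):
    # Original user routine: y = x1*x2 + x1*x1
    t = x1 * x2
    return t + x1 * x1

def g_f(x1, g_x1, x2, g_x2):
    # "Generated" routine: each assignment is preceded by a statement
    # that propagates derivatives via the chain rule.
    g_t = g_x1 * x2 + x1 * g_x2          # d(x1*x2)
    t = x1 * x2
    g_y = g_t + 2.0 * x1 * g_x1          # d(t + x1*x1)
    y = t + x1 * x1
    return y, g_y

# Seeding (g_x1, g_x2) = (1, 0) selects d/dx1; dy/dx1 = x2 + 2*x1.
y, dydx1 = g_f(3.0, 1.0, 4.0, 0.0)
print(y, dydx1)   # 21.0 10.0
```

Seeding the inputs with each unit vector in turn yields the full Jacobian one column at a time, which is how a specified set of independent and dependent variables determines the generated code's interface.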
parallel computing | 2002
Boyana Norris; Satish Balay; Steven J. Benson; Lori A. Freitag; Paul D. Hovland; Lois Curfman McInnes; Barry F. Smith
High-performance simulations in computational science often involve the combined software contributions of multidisciplinary teams of scientists, engineers, mathematicians, and computer scientists. One goal of component-based software engineering in large-scale scientific simulations is to help manage such complexity by enabling better interoperability among codes developed by different groups. This paper discusses recent work on building component interfaces and implementations in parallel numerical toolkits for mesh manipulations, discretization, linear algebra, and optimization. We consider several motivating applications involving partial differential equations and unconstrained minimization to demonstrate this approach and evaluate performance.
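The interoperability claim is easiest to see as code written against a shared interface. The sketch below is a generic Python illustration with an invented LinearSolver interface, far simpler than the toolkit interfaces the paper describes; the point is only that application code depending on the interface cannot tell the implementations apart.

```python
# Hedged sketch of the component idea: the interface and both
# implementations are invented for illustration.
from abc import ABC, abstractmethod

class LinearSolver(ABC):
    """A minimal solver interface; real toolkit interfaces are richer."""
    @abstractmethod
    def solve(self, a, b):
        """Solve a*x = b for scalar a and b."""

class DirectSolver(LinearSolver):
    def solve(self, a, b):
        return b / a

class IterativeSolver(LinearSolver):
    def solve(self, a, b, iters=50):
        x = 0.0
        for _ in range(iters):       # Richardson iteration on a*x = b
            x = x + 0.5 * (b - a * x)
        return x

def application(solver: LinearSolver):
    # Application code depends only on the interface, so components
    # contributed by different groups are interchangeable.
    return solver.solve(2.0, 10.0)

print(application(DirectSolver()), application(IterativeSolver()))  # 5.0 5.0
```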
workshop on program analysis for software tools and engineering | 2005
Michelle Mills Strout; John M. Mellor-Crummey; Paul D. Hovland
international conference on supercomputing | 2010
Jaewook Shin; Mary W. Hall; Jacqueline Chame; Chun Chen; Paul F. Fischer; Paul D. Hovland
parallel computing | 2001
Paul D. Hovland; Lois Curfman McInnes
Archive | 2012
Shaun A. Forth; Paul D. Hovland; Eric Phipps; Jean Utke; Andrea Walther