Andreas Griewank
Humboldt University of Berlin
Publication
Featured research published by Andreas Griewank.
ACM Transactions on Mathematical Software | 1996
Andreas Griewank; David W. Juedes; Jean Utke
The C++ package ADOL-C described here facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++. The resulting derivative evaluation routines may be called from C/C++, Fortran, or any other language that can be linked with C. The numerical values of derivative vectors are obtained free of truncation errors at a small multiple of the run-time and randomly accessed memory of the given function evaluation program. Derivative matrices are obtained by columns or rows. For solution curves defined by ordinary differential equations, special routines are provided that evaluate the Taylor coefficient vectors and their Jacobians with respect to the current state vector. The derivative calculations involve a possibly substantial (but always predictable) amount of data that are accessed strictly sequentially and are therefore automatically paged out to external files.
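As a rough illustration of the tracing-and-driver workflow described above, the following minimal sketch records a scalar function with ADOL-C and queries its gradient through the reverse-mode driver. The header path and driver signature are assumptions that may differ between ADOL-C versions.

```cpp
// Minimal ADOL-C-style sketch: tape a function, then call a derivative driver.
// Assumes a standard ADOL-C installation; details may vary by version.
#include <adolc/adolc.h>
#include <cstdio>

int main() {
    const short tag = 1;          // identifies the recorded tape
    double x[2] = {1.5, 0.7};     // point at which derivatives are wanted
    double y;

    // Record (tape) the function evaluation y = x0^2 * sin(x1).
    trace_on(tag);
    adouble ax0, ax1, ay;
    ax0 <<= x[0];                 // mark independent variables
    ax1 <<= x[1];
    ay = ax0 * ax0 * sin(ax1);
    ay >>= y;                     // mark the dependent variable
    trace_off();

    // Reverse-mode driver: gradient of the taped function at x.
    double g[2];
    gradient(tag, 2, x, g);
    std::printf("y = %f, dy/dx0 = %f, dy/dx1 = %f\n", y, g[0], g[1]);
    return 0;
}
```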
Archive | 2008
Andreas Griewank; Andrea Walther
Algorithmic, or automatic, differentiation (AD) is a growing area of theoretical research and software development concerned with the accurate and efficient evaluation of derivatives for function evaluations given as computer programs. The resulting derivative values are useful for all scientific computations that are based on linear, quadratic, or higher order approximations to nonlinear scalar or vector functions. AD has been applied in particular to optimization, parameter identification, nonlinear equation solving, the numerical integration of differential equations, and combinations of these. Apart from quantifying sensitivities numerically, AD also yields structural dependence information, such as the sparsity pattern and generic rank of Jacobian matrices. The field opens up an exciting opportunity to develop new algorithms that reflect the true cost of accurate derivatives and to use them for improvements in speed and reliability. This second edition has been updated and expanded to cover recent developments in applications and theory, including an elegant NP-completeness argument by Uwe Naumann and a brief introduction to scarcity, a generalization of sparsity. There is also added material on checkpointing and iterative differentiation. To improve readability, the more detailed analysis of memory and complexity bounds has been relegated to separate, optional chapters. The book consists of three parts: a stand-alone introduction to the fundamentals of AD and its software; a thorough treatment of methods for sparse problems; and final chapters on program-reversal schedules, higher derivatives, nonsmooth problems, and iterative processes. Each of the 15 chapters concludes with examples and exercises. Audience: This volume will be valuable to designers of algorithms and software for nonlinear computational problems. Current numerical software users should gain the insight necessary to choose and deploy existing AD software tools to the best advantage. Contents: Rules; Preface; Prologue; Mathematical Symbols; Chapter 1: Introduction; Chapter 2: A Framework for Evaluating Functions; Chapter 3: Fundamentals of Forward and Reverse; Chapter 4: Memory Issues and Complexity Bounds; Chapter 5: Repeating and Extending Reverse; Chapter 6: Implementation and Software; Chapter 7: Sparse Forward and Reverse; Chapter 8: Exploiting Sparsity by Compression; Chapter 9: Going beyond Forward and Reverse; Chapter 10: Jacobian and Hessian Accumulation; Chapter 11: Observations on Efficiency; Chapter 12: Reversal Schedules and Checkpointing; Chapter 13: Taylor and Tensor Coefficients; Chapter 14: Differentiation without Differentiability; Chapter 15: Implicit and Iterative Differentiation; Epilogue; List of Figures; List of Tables; Assumptions and Definitions; Propositions, Corollaries, and Lemmas; Bibliography; Index
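To make the forward-mode propagation rules covered in the book's introductory chapters concrete, here is a toy dual-number type in C++. It is an illustrative sketch under simple assumptions, not code from the book or from any AD tool.

```cpp
// Toy forward-mode AD: each Dual carries a value and its derivative with
// respect to one chosen independent variable, propagated by the chain rule.
#include <cmath>
#include <cstdio>

struct Dual {
    double v;   // value
    double d;   // derivative with respect to the seeded variable
};

Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

int main() {
    Dual x{1.5, 1.0};                 // seed dx/dx = 1
    Dual y{0.7, 0.0};                 // y is treated as a constant here
    Dual f = x * x * sin(y) + x;      // f(x, y) = x^2 sin(y) + x
    std::printf("f = %f, df/dx = %f\n", f.v, f.d);   // df/dx = 2x sin(y) + 1
    return 0;
}
```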
Scientific Programming | 1992
Christian H. Bischof; Alan Carle; G. Corliss; Andreas Griewank; Paul D. Hovland
The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of a function f: R^n → R^m. Both the accuracy and the computational requirements of the derivative computation are usually of critical importance for the robustness and speed of the numerical solution. Automatic Differentiation of FORtran (ADIFOR) is a source transformation tool that accepts Fortran 77 code for the computation of a function and writes portable Fortran 77 code for the computation of the derivatives. In contrast to previous approaches, ADIFOR views automatic differentiation as a source transformation problem. ADIFOR employs the data analysis capabilities of the ParaScope Parallel Programming Environment, which enable us to handle arbitrary Fortran 77 codes and to exploit the computational context in the computation of derivatives. Experimental results show that ADIFOR can handle real-life codes and that ADIFOR-generated codes are competitive with divided-difference approximations of derivatives. In addition, studies suggest that the source transformation approach to automatic differentiation may improve the time to compute derivatives by orders of magnitude.
Journal of Optimization Theory and Applications | 1981
Andreas Griewank
This paper introduces a new method for the global unconstrained minimization of a differentiable objective function. The method is based on search trajectories, which are defined by a differential equation and exhibit certain similarities to the trajectories of steepest descent. The trajectories depend explicitly on the value of the objective function and aim at attaining a given target level, while rejecting all larger local minima. Convergence to the global minimum can be proven for a certain class of functions and an appropriate setting of two parameters.
ACM Transactions on Mathematical Software | 2000
Andreas Griewank; Andrea Walther
In its basic form, the reverse mode of computational differentiation yields the gradient of a scalar-valued function at a cost that is a small multiple of the computational work needed to evaluate the function itself. However, the corresponding memory requirement is proportional to the run-time of the evaluation program. Therefore, the practical applicability of the reverse mode in its original formulation is limited despite the availability of ever larger memory systems. This observation leads to the development of checkpointing schedules to reduce the storage requirements. This article presents the function revolve, which generates checkpointing schedules that are provably optimal with regard to a primary and a secondary criterion. This routine is intended to be used as an explicit “controller” for running a time-dependent applications program.
Archive | 2002
George F. Corliss; Christèle Faure; Andreas Griewank; Laurent Hascoët; Uwe Naumann
Automatic Differentiation (AD) is a maturing computational technology. It has become a mainstream tool used by practicing scientists and computer engineers. The rapid advance of hardware computing power and AD tools has enabled practitioners to generate derivative-enhanced versions of their code for a broad range of applications in applied research and development. Automatic Differentiation of Algorithms provides a comprehensive and authoritative survey of all recent developments, new techniques, and tools for AD use.
Numerische Mathematik | 1982
Andreas Griewank; Ph. L. Toint
Numerische Mathematik | 1982
Andreas Griewank; Ph. L. Toint
SIAM Review | 1985
Andreas Griewank
Optimization Methods & Software | 2002
Andreas Griewank; Andrea Walther
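To illustrate why the truncation-error-free derivatives provided by tools such as ADOL-C and ADIFOR (see the entries above) matter, the following small, self-contained program compares a one-sided divided-difference approximation against an analytic derivative. The test function and step sizes are arbitrary choices for this sketch.

```cpp
// Divided differences vs. an exact derivative: the error first shrinks with h
// (truncation error) and then grows again (floating-point cancellation),
// whereas AD-produced derivatives carry no truncation error at all.
#include <cmath>
#include <cstdio>

double f(double x)        { return std::exp(x) * std::sin(x); }
double df_exact(double x) { return std::exp(x) * (std::sin(x) + std::cos(x)); }

int main() {
    const double x = 1.0;
    for (double h = 1e-1; h >= 1e-13; h *= 1e-3) {
        double fd = (f(x + h) - f(x)) / h;   // one-sided divided difference
        std::printf("h = %8.1e   error = %10.3e\n", h, std::fabs(fd - df_exact(x)));
    }
    return 0;
}
```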
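The checkpointing idea behind revolve (see the ACM Transactions on Mathematical Software 2000 entry above) can be sketched with a simple uniform-interval scheme: store every k-th intermediate state during the forward sweep and recompute each segment's states during the reverse sweep. The toy adjoint below assumes a hypothetical one-dimensional time-stepping model; it is not revolve's provably optimal binomial schedule, only a sketch of the memory/recomputation trade-off.

```cpp
// Uniform checkpointing for a reverse (adjoint) sweep over a toy time-stepping
// model: memory drops from O(steps) to O(steps/interval + interval) at the
// price of one extra forward recomputation per segment.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical forward time step: x_{i+1} = f(x_i).
double forward_step(double x) { return std::sin(x) + 0.5 * x; }

// Adjoint of one step: given x_i and the adjoint of x_{i+1}, return the adjoint of x_i.
double adjoint_step(double x, double xbar_next) {
    return (std::cos(x) + 0.5) * xbar_next;   // f'(x_i) * xbar_{i+1}
}

int main() {
    const int steps = 1000;       // length of the time integration
    const int interval = 100;     // store a checkpoint every 'interval' steps
    const double x0 = 0.3;

    // Forward sweep: keep only every 'interval'-th state instead of all states.
    std::vector<double> checkpoints;
    double x = x0;
    for (int i = 0; i < steps; ++i) {
        if (i % interval == 0) checkpoints.push_back(x);
        x = forward_step(x);
    }

    // Reverse sweep: for each segment (last to first), recompute its states
    // from the stored checkpoint, then run the adjoint steps backwards.
    double xbar = 1.0;            // seed: d(final state)/d(final state)
    for (int seg = static_cast<int>(checkpoints.size()) - 1; seg >= 0; --seg) {
        const int begin = seg * interval;
        const int end = std::min(begin + interval, steps);
        std::vector<double> states;
        double xs = checkpoints[seg];
        for (int i = begin; i < end; ++i) { states.push_back(xs); xs = forward_step(xs); }
        for (int i = end - 1; i >= begin; --i) xbar = adjoint_step(states[i - begin], xbar);
    }
    std::printf("d x_final / d x_0 ~= %.12f\n", xbar);
    return 0;
}
```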