Keith R. Dalbey
Sandia National Laboratories
Publications
Featured research published by Keith R. Dalbey.
Archive | 2011
Michael S. Eldred; Dena M. Vigil; Keith R. Dalbey; William J. Bohnhoff; Brian M. Adams; Laura Painton Swiler; Sophia Lefantzi; Patricia Diane Hough; John P. Eddy
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for
Archive | 2014
Brian M. Adams; Mohamed S. Ebeida; Michael S. Eldred; John Davis Jakeman; Laura Painton Swiler; John Adam Stephens; Dena M. Vigil; Timothy Michael Wildey; William J. Bohnhoff; John P. Eddy; Kenneth T. Hu; Keith R. Dalbey; Lara E Bauman; Patricia Diane Hough
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota’s iterative analysis capabilities. Dakota Version 6.1 Theory Manual generated on November 7, 2014
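To make the sampling-based uncertainty quantification workflow described above concrete, here is a minimal Python sketch of the generic pattern a toolkit like Dakota automates: draw samples of uncertain inputs, evaluate a black-box model at each sample, and summarize the response statistics. This is an assumed illustration of the workflow only; the model function and input distributions are placeholders, and none of this is Dakota's actual input syntax or API.

```python
# Hypothetical sketch of a sampling-based UQ loop (not Dakota code).
import numpy as np

def black_box_model(x1, x2):
    # Stand-in for an external simulation code driven through an interface.
    return x1 ** 2 + np.sin(3.0 * x2)

rng = np.random.default_rng(seed=0)
n_samples = 1000

# Assumed uncertain inputs: x1 ~ Normal(1, 0.1), x2 ~ Uniform(0, 1).
x1 = rng.normal(loc=1.0, scale=0.1, size=n_samples)
x2 = rng.uniform(low=0.0, high=1.0, size=n_samples)

# Evaluate the model at every sample and collect the responses.
responses = black_box_model(x1, x2)

print(f"mean response      = {responses.mean():.4f}")
print(f"std. dev. response = {responses.std(ddof=1):.4f}")
```

In Dakota the same pattern is expressed declaratively (method, variables, interface, and responses blocks), with the loop over simulation runs managed by the toolkit rather than written by hand.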
ACM Transactions on Graphics | 2014
Mohamed S. Ebeida; Anjul Patney; Scott A. Mitchell; Keith R. Dalbey; Andrew A. Davidson; John D. Owens
We formalize sampling a function using k-d darts. A k-d dart is a set of independent, mutually orthogonal, k-dimensional hyperplanes called k-d flats. A dart has d choose k flats, aligned with the coordinate axes for efficiency. We show k-d darts are useful for exploring a function's properties, such as estimating its integral or finding an exemplar above a threshold. We describe a recipe for converting some algorithms from point sampling to k-d dart sampling, provided the function can be evaluated along a k-d flat. We demonstrate that k-d darts are more efficient than point-wise samples in high dimensions, depending on the characteristics of the domain: for example, when the subregion of interest has small volume and evaluating the function along a flat is not too expensive. We present three concrete applications using line darts (1-d darts): relaxed maximal Poisson-disk sampling, high-quality rasterization of depth-of-field blur, and estimation of the probability of failure from a response surface for uncertainty quantification. Line darts achieve the same output fidelity as point sampling in less time. For Poisson-disk sampling, we use less memory, enabling the generation of larger point distributions in higher dimensions. Higher-dimensional darts provide greater accuracy for a particular volume estimation problem.
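As a rough illustration of the line-dart (1-d dart) idea, the Python sketch below estimates the volume of a d-dimensional ball inside the unit cube by integrating exactly along axis-aligned lines instead of testing isolated points. The geometry, parameters, and function name are assumptions for this sketch, not code from the paper.

```python
# Hedged sketch of 1-d dart sampling for volume estimation.
import numpy as np

def line_dart_ball_volume(d=4, radius=0.4, center=0.5, n_darts=5000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_darts):
        axis = rng.integers(d)                 # orientation of the 1-d flat
        point = rng.uniform(0.0, 1.0, size=d)  # anchor point of the line
        # Squared distance from the line to the ball center in the fixed coordinates.
        others = np.delete(point, axis)
        s2 = np.sum((others - center) ** 2)
        if s2 < radius ** 2:
            half = np.sqrt(radius ** 2 - s2)
            # Chord of the ball along this axis, clipped to the unit cube [0, 1].
            lo = max(0.0, center - half)
            hi = min(1.0, center + half)
            total += hi - lo
    # Each dart contributes the exact 1-D measure along its line, so the mean
    # chord length estimates the ball's volume fraction of the unit cube.
    return total / n_darts

if __name__ == "__main__":
    print(f"line-dart volume estimate: {line_dart_ball_volume():.4f}")
```

Because every dart returns the exact measure along its line rather than a single in/out test, each sample carries more information, which is the source of the efficiency gain the abstract describes for small-volume regions in high dimensions.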
13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference | 2010
Keith R. Dalbey; George N. Karystinos
Latin Hypercube Sampling (LHS) and Jittered Sampling (JS) both achieve better convergence than standard Monte Carlo Sampling (MCS) by using stratification to obtain a more uniform selection of samples, although LHS and JS use different stratification strategies. The "Koksma-Hlawka-like inequality" bounds the error in a computed mean in terms of the sample design's discrepancy, which is a common metric of uniformity. However, even the "fast" formulas available for certain useful L2 norm discrepancies require O(NM)
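For reference, the following Python sketch shows the stratification idea behind LHS: each of the N samples falls in a distinct 1/N stratum of every coordinate, which typically yields lower variance than plain Monte Carlo for smooth integrands. The construction is the standard textbook one; the test integrand and parameters are assumptions for illustration, not taken from the paper.

```python
# Minimal Latin Hypercube Sampling sketch (standard construction, assumed example).
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        # One point per stratum [i/N, (i+1)/N), visited in a random order per dimension.
        strata = rng.permutation(n_samples)
        jitter = rng.uniform(0.0, 1.0, size=n_samples)
        samples[:, j] = (strata + jitter) / n_samples
    return samples

# Compare mean estimates of a simple test integrand under LHS and plain MCS.
rng = np.random.default_rng(1)
f = lambda x: np.exp(-np.sum(x ** 2, axis=1))  # assumed test integrand
lhs_mean = f(latin_hypercube(100, 3, rng)).mean()
mcs_mean = f(rng.uniform(size=(100, 3))).mean()
print(f"LHS estimate: {lhs_mean:.4f}   MCS estimate: {mcs_mean:.4f}")
```

Jittered Sampling stratifies differently, placing one point in each cell of a full d-dimensional grid, which is why the two designs trade off uniformity and sample-count flexibility in different ways.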
Archive | 2010
Michael S. Eldred; Keith R. Dalbey; William J. Bohnhoff; Brian M. Adams; Laura Painton Swiler; Patricia Diane Hough; John P. Eddy; Karen Haskell
Archive | 2013
Keith R. Dalbey
International Journal for Uncertainty Quantification | 2014
Keith R. Dalbey; Laura Painton Swiler
Computer Methods in Applied Mechanics and Engineering | 2016
Hossein Aghakhani; Keith R. Dalbey; David Salac; Abani K. Patra
Archive | 2015
Keith R. Dalbey; Brian M. Adams; John Adam Stephens; Laura Painton Swiler
International Journal for Uncertainty Quantification | 2011
Keith R. Dalbey; George N. Karystinos