Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brian M. Adams is active.

Publication


Featured research published by Brian M. Adams.


Archive | 2011

DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

Michael S. Eldred; Dena M. Vigil; Keith R. Dalbey; William J. Bohnhoff; Brian M. Adams; Laura Painton Swiler; Sophia Lefantzi; Patricia Diane Hough; John P. Eddy

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.


11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2006

Reliability-Based Design Optimization for Shape Design of Compliant Micro-Electro-Mechanical Systems.

Brian M. Adams; Michael S. Eldred; Jonathan W. Wittwer

Reliability methods are probabilistic algorithms for quantifying the effect of uncertainties on response metrics of interest. In particular, they compute approximate response function distribution statistics (probability, reliability, and response levels) based on specified probability distributions for input random variables. In conjunction with simulation software, these reliability analysis methods may be employed within reliability-based design optimization (RBDO) algorithms for designing systems subject to probabilistic performance criteria. In this paper, RBDO methods are compared and their effectiveness demonstrated by application to design optimization of microelectromechanical systems (MEMS), devices for which uncertainties in material properties and geometry affect performance and reliability. A new tapered beam topology for a fully compliant bistable mechanism is presented and its geometry optimized with RBDO to reliably achieve a specified actuation force, while simultaneously reducing predicted force variability due to material properties and manufacturing. The optimal designs specified by these optimization processes are predicted to be reliable, but also more robust to manufacturing process variations. Software-based MEMS design illustrates challenges faced when applying RBDO methods in engineering contexts.
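The RBDO pattern the abstract describes can be sketched in a few lines of Python with SciPy: a deterministic objective is minimized subject to a reliability constraint evaluated over fixed Monte Carlo samples of the uncertain material property. The toy beam model, all numbers, and the mean-minus-beta-sigma reliability approximation below are illustrative assumptions, not the MEMS model or the specific reliability methods used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Fixed samples of a lognormal material-variability factor (common random
# numbers keep the constraint smooth across optimizer iterations).
e = rng.lognormal(mean=0.0, sigma=0.1, size=20_000)

def force(w):
    # Toy response: actuation force grows with beam width cubed,
    # scaled by the uncertain material factor.
    return 10.0 * w**3 * e

def reliability_margin(w, beta=2.33):
    # Mean-value reliability approximation: require mu_F - beta*sigma_F
    # to exceed the required actuation force (5.0 here, illustrative).
    f = force(w)
    return f.mean() - beta * f.std() - 5.0

# RBDO: minimize beam width (a proxy for device area/cost) subject to
# the probabilistic performance constraint.
res = minimize(
    lambda x: x[0],
    x0=[1.0],
    bounds=[(0.1, 3.0)],
    constraints=[{"type": "ineq", "fun": lambda x: reliability_margin(x[0])}],
    method="SLSQP",
)
w_opt = res.x[0]
```

The optimizer drives the design to the boundary of the reliability constraint, which is the characteristic behavior of RBDO: the cheapest design that still meets the probabilistic performance target.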


Archive | 2014

Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

Brian M. Adams; Mohamed S. Ebeida; Michael S. Eldred; John Davis Jakeman; Laura Painton Swiler; John Adam Stephens; Dena M. Vigil; Timothy Michael Wildey; William J. Bohnhoff; John P. Eddy; Kenneth T. Hu; Keith R. Dalbey; Lara E Bauman; Patricia Diane Hough

The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota’s iterative analysis capabilities. Dakota Version 6.1 Theory Manual generated on November 7, 2014
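The sampling-based uncertainty quantification workflow mentioned in the abstract can be illustrated outside Dakota itself. Below is a minimal Python sketch using SciPy's Latin hypercube sampler against a stand-in response function; Dakota drives external simulation codes through its interface layer, whereas the function, bounds, and sample count here are purely illustrative.

```python
import numpy as np
from scipy.stats import qmc

# Stand-in for an expensive simulation response (illustrative only).
def response(x):
    return np.sin(x[:, 0]) + x[:, 1] ** 2

# Latin hypercube design in the unit square, then scaled to the
# input variable bounds.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=200)
x = qmc.scale(unit, l_bounds=[-1.0, 0.0], u_bounds=[1.0, 2.0])

# Propagate the samples and summarize the output distribution.
y = response(x)
mean, std = y.mean(), y.std(ddof=1)
```

Stratified designs like Latin hypercube sampling give lower-variance moment estimates than plain Monte Carlo at the same sample count, which is why sampling methods of this kind appear among Dakota's uncertainty quantification capabilities.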


12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2008

Model Calibration under Uncertainty: Matching Distribution Information

Laura Painton Swiler; Brian M. Adams; Michael S. Eldred

We develop an approach for estimating model parameters which result in the “best distribution fit” between experimental and simulation data. Best distribution fit means matching moments of experimental data to those of a simulation (and possibly matching a full probability distribution). This approach extends typical nonlinear least squares methods, which identify parameters maximizing agreement between experimental points and computational simulation results. Several analytic formulations for the distribution matching problem are provided, along with results for solving test problems and comparisons of this parameter estimation technique with a deterministic least squares approach.

I. Introduction

Nonlinear models are frequently used to model physical phenomena, including engineering applications. In this paper, we refer to a nonlinear model very broadly: the output of the model is a nonlinear function of the parameters [Draper98]. Thus, nonlinear models can include systems of partial differential equations (PDEs). Some examples include CFD (computational fluid dynamics), groundwater flow, and heat transport. Nonlinear models also include functional approximations of uncertain data via regression or response surface models. In most cases, we have some type of simulation model which is a nonlinear model, so we use the terms nonlinear model and simulation model interchangeably in this paper. In addition to a simulation model, we assume that we have experimental data which may be used to calibrate the model. The calibration often requires the solution of an optimization problem to determine the optimal parameter settings for the simulation model. In this paper, we are concerned with identifying model parameters which result in a “best fit” between experimental data and simulation results in a nondeterministic context. That is, instead of matching point estimates, we are concerned with matching moments (e.g., mean or variance) between experimental and simulation data, where the variability in the model output is due to parametric uncertainty. We develop an approach extending the typical nonlinear least squares formulation to allow for this distribution matching. Note that “parameters” may be parameters in an approximation model such as a regression model, or physics modeling parameters used in physical simulation models such as PDEs. We distinguish data from parameters: data are physical data which are input either to a regression or a physical simulation. For example, in groundwater flow modeling, hydraulic conductivity is a parameter, and data may include measured flow rates from well tests. In this paper, we denote parameters that will be calibrated as θ, and the independent input data (e.g., state variables, configuration data, boundary conditions) as x. We also assume that there are uncertain variables, denoted by u, that represent inherent variability or lack of exact knowledge influencing the simulation, but which we cannot observe. The effect of the uncertain variables is reflected in both the output variability of the nonlinear model and the experimental data, but we can only explicitly account for these uncertain variables in the simulation model. Thus, our simulation model, f,
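The moment-matching idea described above can be sketched in Python: residuals are differences between simulation and experimental moments rather than pointwise data misfits, and a nonlinear least squares solver drives them to zero. The toy model, the true parameter value, and the sample sizes below are all illustrative assumptions, not the formulations or test problems of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Toy nonlinear model: output depends on the calibration parameter theta
# and an unobserved uncertain variable u (names and form are illustrative).
def simulate(theta, u):
    return theta * np.exp(0.1 * u) + u

# Samples of the uncertain variable used to propagate the simulation.
u = rng.normal(size=5000)

# Synthetic "experimental" data generated with a hidden true theta = 2.5
# and an independent realization of the uncertain variable.
y_exp = simulate(2.5, rng.normal(size=5000))

def moment_residuals(theta):
    y = simulate(theta[0], u)
    # Match the first two moments of the simulation output distribution
    # to those of the experimental data.
    return [y.mean() - y_exp.mean(), y.std() - y_exp.std()]

fit = least_squares(moment_residuals, x0=[1.0])
theta_hat = fit.x[0]
```

Replacing pointwise residuals with moment residuals is the essential change relative to deterministic nonlinear least squares; adding higher moments or quantiles extends the same pattern toward matching a full distribution.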


Archive | 2006

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

Joshua D. Griffin; Michael S. Eldred; Monica L. Martinez-Canales; Jean-Paul Watson; Tamara G. Kolda; Anthony A. Giunta; Brian M. Adams; Laura Painton Swiler; Pamela J. Williams; Patricia Diane Hough; Daniel M. Dunlavy; John P. Eddy; William Eugene Hart; Shannon L. Brown

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user’s manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies. Dakota Version 6.11 User’s Manual generated on November 7, 2019


Archive | 2016

User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota.

Brian M. Adams; Kayla D. Coleman; Russell Hooper; Bassam A. Khuwaileh; Allison Lewis; Ralph C. Smith; Laura Painton Swiler; Paul J. Turinsky; Brian W. Williams

Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers, enabling them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. After reading it, a CASL analyst should understand why and how to apply Dakota to a simulation problem. This SAND report constitutes the product of CASL milestone L3:VUQ.V&V.P8.01 and is also being released as a CASL unlimited release report with number CASL-U-2014-0038-000.


48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference | 2007

Solution-Verified Reliability Analysis and Design of Compliant Micro-Electro-Mechanical Systems

Michael S. Eldred; Brian M. Adams; Kevin D. Copps; Brian Carnes; Patrick K. Notz; Matthew M. Hopkins; Jonathan W. Wittwer

An important component of verification and validation of computational models is solution verification, which focuses on the convergence of the desired solution quantities as one refines the spatial and temporal discretizations and iterative controls. Uncertainty analyses often treat solution verification as a separate issue, hopefully through the use of a priori grid convergence studies and selection of models with acceptable discretization errors. In this paper, a tighter connection between solution verification and uncertainty quantification is investigated. In particular, error estimation techniques, using global norm and quantity of interest error estimators, are applied to the nonlinear structural analysis of microelectromechanical systems (MEMS). Two primary approaches for uncertainty quantification are then developed: an error-corrected approach, in which simulation results are directly corrected for discretization errors, and an error-controlled approach, in which estimators are used to drive adaptive h-refinement of mesh discretizations. The former requires quantity of interest error estimates that are quantitatively accurate, whereas the latter can employ any estimator that is qualitatively accurate. Combinations of these error-corrected and error-controlled approaches are also explored. Each of these techniques treats solution verification and uncertainty analysis as a coupled problem, recognizing that the simulation errors may be influenced by, for example, conditions present in the tails of input probability distributions. The most effective and affordable of these approaches are carried forward in probabilistic design studies for robust and reliable operation of a bistable MEMS device. Computational results show that on-line and parameter-adaptive solution verification can lead to uncertainty quantification and design under uncertainty studies that are more accurate, efficient, reliable, and convenient.
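The error-corrected approach described above can be sketched briefly: each sampled simulation output is adjusted by a quantity-of-interest discretization-error estimate before output statistics are computed. Everything below (the coarse solver, the h-squared error form, the estimator) is a toy illustration under stated assumptions, not the MEMS analysis or error estimators of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

h = 0.1  # mesh size of the coarse discretization (illustrative)

def coarse_qoi(p):
    # Toy "coarse-grid" solver: the exact quantity of interest sin(p)
    # polluted by a second-order discretization error.
    exact = np.sin(p)
    return exact + 0.5 * h**2 * p

def error_estimate(p):
    # Quantity-of-interest error estimator, assumed here to be
    # quantitatively accurate (the requirement the paper states for
    # the error-corrected approach).
    return 0.5 * h**2 * p

# Propagate uncertain input samples through the coarse model, then
# correct each sample before computing statistics.
p = rng.normal(loc=1.0, scale=0.2, size=10_000)
raw = coarse_qoi(p)
corrected = raw - error_estimate(p)

bias_raw = abs(raw.mean() - np.sin(p).mean())
bias_corrected = abs(corrected.mean() - np.sin(p).mean())
```

In this idealized setting the correction removes the discretization bias from the output statistics entirely; with an imperfect estimator the residual bias scales with the estimator's own error, which is why the paper's error-controlled alternative only needs a qualitatively accurate estimator to drive adaptive refinement.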


Archive | 2006

Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

Michael S. Eldred; Samuel R. Subia; David Neckels; Matthew M. Hopkins; Patrick K. Notz; Brian M. Adams; Brian Carnes; Jonathan W. Wittwer; Barron J. Bichon; Kevin D. Copps

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.


53rd AIAA Aerospace Sciences Meeting | 2015

Overview of Selected DOE/NNSA Predictive Science Initiatives: the Predictive Science Academic Alliance Program and the DAKOTA Project (Invited)

Michael S. Eldred; Laura Painton Swiler; Brian M. Adams

This paper supports a special session on “Frontiers of Uncertainty Management for Complex Aerospace Systems” with the intent of summarizing two aspects of the DOE/NNSA Accelerated Strategic Computing (ASC) program, each of which is focused on predictive science using complex simulation models. The first aspect is academic outreach, as enabled by the Predictive Science Academic Alliance Program (PSAAP). The second aspect is the Dakota project at Sandia National Laboratories, which develops and deploys uncertainty quantification capabilities focused on high fidelity modeling and simulation on large-scale parallel computers.


Archive | 2013

DAKOTA JAGUAR 3.0 user's manual.

Brian M. Adams; Lara E. Bauman; Ethan Chan; Sophia Lefantzi; Joseph Ruthruff

JAGUAR (JAva GUi for Applied Research) is a Java software tool providing an advanced text editor and graphical user interface (GUI) to manipulate DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) input specifications. This document focuses on the features necessary to use JAGUAR.

Collaboration


Dive into Brian M. Adams's collaborations.

Top Co-Authors

Laura Painton Swiler, Sandia National Laboratories
Michael S. Eldred, Sandia National Laboratories
Keith R. Dalbey, Sandia National Laboratories
V. Gregory Weirs, Sandia National Laboratories
William J. Rider, Los Alamos National Laboratory
James R. Kamm, Los Alamos National Laboratory
Jonathan W. Wittwer, Sandia National Laboratories
Sophia Lefantzi, Sandia National Laboratories
Anthony A. Giunta, Sandia National Laboratories
Barron J. Bichon, Sandia National Laboratories