
Publications

Featured research published by Bradley T Rearden.


Nuclear Science and Engineering | 2004

Perturbation Theory Eigenvalue Sensitivity Analysis with Monte Carlo Techniques

Bradley T Rearden

Abstract Methodologies to calculate adjoint-based first-order-linear perturbation theory sensitivity coefficients with multigroup Monte Carlo methods are developed, implemented, and tested in this paper. These techniques can quickly produce sensitivity coefficients for all nuclides and reaction types for each region of a system model. Monte Carlo techniques have been developed to calculate the neutron flux moments and/or angular fluxes necessary for the generation of the scattering terms of the sensitivity coefficients. The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in three dimensions (TSUNAMI-3D) control module has been written for the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system implementing this methodology. TSUNAMI-3D performs automated multigroup cross-section processing and then generates the forward and adjoint neutron fluxes with an enhanced version of the KENO V.a Monte Carlo code that implements the flux moment and angular flux calculational techniques. Sensitivity coefficients are generated with the newly developed Sensitivity Analysis Module for SCALE (SAMS). Results generated with TSUNAMI-3D compare favorably with results generated with direct perturbation techniques.
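The direct-perturbation check mentioned at the end of the abstract can be illustrated generically: perturb one input, rerun the calculation, and form a central-difference estimate of S = (x/k)(dk/dx). The Python sketch below uses a hypothetical analytic stand-in for the transport calculation; it is not SCALE/TSUNAMI code.

```python
def direct_perturbation_sensitivity(k_model, x0, rel_step=0.01):
    """Central-difference estimate of S = (x0 / k(x0)) * dk/dx."""
    dx = rel_step * x0
    dk_dx = (k_model(x0 + dx) - k_model(x0 - dx)) / (2.0 * dx)
    return (x0 / k_model(x0)) * dk_dx

# Hypothetical stand-in for k-eff as a function of one cross section:
# k = 1.2 * x**0.3, whose exact sensitivity coefficient is 0.3 for any x.
k_toy = lambda x: 1.2 * x ** 0.3

S = direct_perturbation_sensitivity(k_toy, x0=2.0)
print(round(S, 3))  # prints 0.3
```

For a power-law model the coefficient is the exponent itself, which makes the toy case easy to verify. In practice each data perturbation requires a separate Monte Carlo run, which is why the adjoint-based approach of the paper is far cheaper when coefficients are needed for all nuclides and reactions.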


Nuclear Technology | 2011

Sensitivity and Uncertainty Analysis Capabilities and Data in SCALE

Bradley T Rearden; Mark L Williams; Matthew Anderson Jessee; Don Mueller; Dorothea Wiarda

Abstract In SCALE 6, the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) modules calculate the sensitivity of keff or reactivity differences to the neutron cross-section data on an energy-dependent, nuclide-reaction-specific basis. These sensitivity data are useful for uncertainty quantification, using the comprehensive neutron cross-section-covariance data in SCALE 6. Additional modules in SCALE 6 use the sensitivity and uncertainty data to produce correlation coefficients and other relational parameters that quantify the similarity of benchmark experiments to application systems for code validation purposes. Bias and bias uncertainties are quantified using parametric trending analysis or data adjustment techniques, providing detailed assessments of sources of biases and their uncertainties and quantifying gaps in experimental data available for validation. An example application of these methods is presented for a generic burnup credit cask model.
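The propagation step described here is, to first order, the "sandwich rule" var(k)/k^2 = S^T C S, with S the sensitivity vector and C the relative covariance matrix. A minimal sketch with invented two-parameter numbers (not SCALE covariance data):

```python
def keff_relative_variance(S, C):
    """Sandwich rule: S^T C S, with S a sensitivity vector and C a
    relative covariance matrix for the underlying nuclear data."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# Illustrative sensitivities of keff to two data parameters and an
# invented 2x2 relative covariance matrix (variances on the diagonal).
S = [0.4, 0.1]
C = [[4.0e-4, 1.0e-4],
     [1.0e-4, 9.0e-4]]

rel_var = keff_relative_variance(S, C)
rel_std_pct = 100.0 * rel_var ** 0.5  # about 0.9% relative standard deviation
```

The off-diagonal terms of C matter: correlated data uncertainties can add to or cancel against one another depending on the signs of the sensitivities.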


Nuclear Technology | 2011

Monte Carlo Criticality Methods and Analysis Capabilities in SCALE

Sedat Goluoglu; Lester M. Petrie; Michael E Dunn; Daniel F Hollenbach; Bradley T Rearden

Abstract This paper describes the Monte Carlo codes KENO V.a and KENO-VI in SCALE that are primarily used to calculate multiplication factors and flux distributions of fissile systems. Both codes allow explicit geometric representation of the target systems and are used internationally for safety analyses involving fissile materials. KENO V.a has restrictive geometry rules, such as no intersections and no rotations, that allow it to execute very efficiently; KENO-VI, on the other hand, allows very complex geometric modeling. Both KENO codes can utilize either continuous-energy or multigroup cross-section data and have been thoroughly verified and validated with ENDF libraries through ENDF/B-VII.0, which was first distributed with SCALE 6. Development of the Monte Carlo solution technique and solution methodology as applied in both KENO codes is explained in this paper. Available options and their proper application are also discussed. Finally, the performance of the codes is demonstrated using published benchmark problems.


Nuclear Technology | 2013

A Statistical Sampling Method for Uncertainty Analysis with SCALE and XSUSA

Mark L Williams; Germina Ilas; Matthew Anderson Jessee; Bradley T Rearden; Dorothea Wiarda; W. Zwermann; L. Gallner; M. Klein; B. Krzykacz-Hausmann; A. Pautz

A new statistical sampling sequence called Sampler has been developed for the SCALE code system. Random values for the input multigroup cross sections are determined by using the XSUSA program to sample uncertainty data provided in the SCALE covariance library. Using these samples, Sampler computes perturbed self-shielded cross sections and propagates the perturbed nuclear data through any specified SCALE analysis sequence, including those for criticality safety, lattice physics with depletion, and shielding calculations. Statistical analysis of the output distributions provides uncertainties and correlations in the desired responses, due to nuclear data uncertainties. The Sampler/XSUSA methodology is described, and example applications are shown for criticality safety and spent-fuel analysis.
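The sampling workflow can be sketched generically: draw random perturbations consistent with the uncertainty data, rerun the analysis for each sample, and take statistics over the outputs. The Python sketch below is a stand-in that assumes a simple uncorrelated normal model for the inputs, not the actual XSUSA sampling of the SCALE covariance library:

```python
import random

def sample_and_propagate(model, nominal, rel_std, n_samples=500):
    """Draw random relative perturbations of each input, rerun the model
    for every sample, and return the mean and standard deviation of the
    output distribution."""
    outputs = []
    for _ in range(n_samples):
        perturbed = [x * (1.0 + random.gauss(0.0, s))
                     for x, s in zip(nominal, rel_std)]
        outputs.append(model(perturbed))
    mean = sum(outputs) / len(outputs)
    var = sum((y - mean) ** 2 for y in outputs) / (len(outputs) - 1)
    return mean, var ** 0.5

random.seed(42)  # reproducible toy run

# Hypothetical "analysis sequence": a linear response to two inputs.
model = lambda xs: 0.9 + 0.4 * xs[0] + 0.1 * xs[1]
mean, std = sample_and_propagate(model, nominal=[0.5, 0.3],
                                 rel_std=[0.02, 0.03])
# mean is close to 1.13; std reflects the propagated input uncertainty.
```

Unlike the perturbation-theory approach, this brute-force scheme makes no linearity assumption, at the cost of one full analysis run per sample.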


Nuclear Science and Engineering | 2003

Use of sensitivity and uncertainty analysis to select benchmark experiments for the validation of computer codes and data

K. R. Elam; Bradley T Rearden

Abstract Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO2 and mixed-oxide (MOX) powder systems. The study examined three PuO2 powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO2 powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or TSUNAMI three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. These sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO2 and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected.
The TSUNAMI analysis also emphasized some areas where more benchmark data are needed, indicating the need for further evaluation of existing experiments, or possibly the completion of new experiments to fill these gaps. This lack of evaluated data is particularly important for very dry and dense MOX powder systems.
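The correlation coefficient used to judge similarity, commonly written c_k, is the correlation between the data-induced uncertainties of two systems: c_k = (S_a^T C S_e) / (sigma_a * sigma_e). A hedged numerical sketch with invented sensitivities and a diagonal covariance matrix (not evaluated nuclear data):

```python
def uncertainty_correlation(S_app, S_exp, C):
    """c_k = (S_app^T C S_exp) / (sigma_app * sigma_exp): correlation
    between the data-induced uncertainties of two systems."""
    n = len(S_app)
    quad = lambda a, b: sum(a[i] * C[i][j] * b[j]
                            for i in range(n) for j in range(n))
    return quad(S_app, S_exp) / (quad(S_app, S_app) ** 0.5
                                 * quad(S_exp, S_exp) ** 0.5)

# Invented sensitivities for an application and a benchmark, with a
# diagonal (uncorrelated) relative covariance matrix.
C = [[4.0e-4, 0.0],
     [0.0, 9.0e-4]]
ck = uncertainty_correlation([0.4, 0.1], [0.35, 0.12], C)
# ck near 1.0 means the two systems share the same dominant sources of
# data uncertainty, supporting use of the benchmark for validation.
```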


Nuclear Science and Engineering | 2016

SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

Christopher M. Perfetti; Bradley T Rearden; William R. Martin

Abstract The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using CE Monte Carlo methods. This paper provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work also explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through use of CE sensitivity methods and compares several sensitivity methods in terms of computational efficiency and memory requirements. The IFP and CLUTCH methods produced sensitivity coefficient estimates that matched, and in some cases exceeded, the accuracy of those produced using the multigroup TSUNAMI-3D approach. The CLUTCH method was found to calculate sensitivity coefficients with the highest degree of efficiency and the lowest computational memory footprint for the problems examined.


Archive | 2009

TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

Bradley T Rearden; Don Mueller; Stephen M. Bowman; Robert D. Busch; Scott Emerson

This primer presents examples of the application of the SCALE/TSUNAMI tools to generate keff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed keff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and the need to confirm the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system, and of TSUNAMI tools for assessing system similarity using sensitivity and uncertainty criteria, are demonstrated. Uses of these criteria in trending analyses to assess computational biases and bias uncertainties and in gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.


Nuclear Science and Engineering | 2016

Development of a Generalized Perturbation Theory Method for Sensitivity Analysis Using Continuous-Energy Monte Carlo Methods

Christopher M. Perfetti; Bradley T Rearden

Abstract The sensitivity and uncertainty analysis tools of the Oak Ridge National Laboratory SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems with realistic three-dimensional Monte Carlo simulations but currently can only quantify the uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with one- or two-dimensional models. A more complete understanding of the sources of uncertainty in these design-limiting parameters using high-fidelity models could lead to improvements in process optimization and reactor safety and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH (Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization) method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes an extension of the CLUTCH method, known as the GEneralized Adjoint Responses in Monte Carlo (GEAR-MC) method, that enables the calculation of sensitivity coefficients and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC produced response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.


Nuclear Technology | 2005

Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

Bradley T Rearden; W. J. Anderson; G. A. Harms

Abstract Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO2 with 235U enrichments ≥5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given constraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.


Monte Carlo 2000, Lisbon, Portugal, October 23–26, 2000 | 2001

Sensitivity and uncertainty analysis for nuclear criticality safety using KENO in the SCALE code system

Bradley T Rearden

Sensitivity and uncertainty methods have been developed to aid in the establishment of areas of applicability and validation of computer codes and nuclear data for nuclear criticality safety studies. A key component of this work is the generation of sensitivity and uncertainty parameters for the several hundred benchmark experiments typically used in validation exercises. Previously, only one-dimensional sensitivity tools were available for this task, which necessitated the remodeling of multidimensional inputs in order for such an analysis to be performed. This paper describes the development of the SEN3 Monte Carlo-based sensitivity analysis sequence for SCALE. Two options in the SEN3 package for the reconstruction of angular-dependent forward and adjoint fluxes are described and contrasted. These options are the direct calculation of flux moments versus the calculation of angular fluxes, with subsequent conversion to flux moments prior to sensitivity coefficient generation. The latter technique is found to be significantly more efficient.
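The distinction between the two SEN3 options comes down to how the flux moments phi_l = ∫ P_l(mu) psi(mu) dmu are obtained. The toy Python sketch below converts a tabulated angular flux to its first Legendre moments by simple midpoint quadrature; it illustrates the conversion step only and is not the SEN3 implementation.

```python
def legendre_moment(psi, l, n=2000):
    """Midpoint-rule estimate of phi_l = integral over mu in [-1, 1] of
    P_l(mu) * psi(mu), for the first three Legendre polynomials."""
    P = {0: lambda mu: 1.0,
         1: lambda mu: mu,
         2: lambda mu: 0.5 * (3.0 * mu * mu - 1.0)}[l]
    h = 2.0 / n
    return h * sum(P(-1.0 + (i + 0.5) * h) * psi(-1.0 + (i + 0.5) * h)
                   for i in range(n))

# Mildly anisotropic toy angular flux: psi(mu) = 0.5 + 0.3*mu.
psi = lambda mu: 0.5 + 0.3 * mu
phi0 = legendre_moment(psi, 0)  # zeroth moment (scalar flux), ~1.0 here
phi1 = legendre_moment(psi, 1)  # first moment (current), ~0.2 here
```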

Collaboration

Dive into Bradley T Rearden's collaboration.

Top co-authors, all at Oak Ridge National Laboratory:

Mark L Williams
William B. J. Marshall
Dorothea Wiarda
Don Mueller
Lester M. Petrie
Michael E Dunn
Robert A Lefebvre
Cihangir Celik
Douglas E. Peplow