
Publication


Featured research published by Christopher J. Gesh.


Archive | 2011

Compendium of Material Composition Data for Radiation Transport Modeling

Ronald J. McConn; Christopher J. Gesh; Richard T. Pagh; Robert A. Rucker; Robert Williams III

Computational modeling of radiation transport problems, including applications in homeland security, radiation shielding and protection, and criticality safety, depends on material definitions. This document has been created to serve two purposes: 1) to provide a quick reference of material compositions for analysts and 2) to serve as a standardized reference that reduces the differences between results from independent analysts. Analysts are always encountering a variety of materials for which elemental definitions are not readily available or densities are not defined. This document provides a single place to record unique or hard-to-define materials, reducing duplicated research for modeling purposes. Additionally, having a common set of material definitions helps to standardize modeling across PNNL and allows separate researchers to compare modeling results from a common materials basis.
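The compendium is a reference document rather than software, but the kind of material record it standardizes can be captured in a small data structure: a density plus elemental weight fractions that must sum to one. The sketch below is a minimal illustration of such a record, assuming a hypothetical Material class; the composition values are placeholders, not data from the compendium.

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    """A minimal material record for radiation transport input decks.

    Weight fractions below are hypothetical placeholders, not values from
    the PNNL compendium.
    """
    name: str
    density_g_cm3: float
    weight_fractions: dict = field(default_factory=dict)  # element symbol -> weight fraction

    def validate(self, tol: float = 1e-6) -> None:
        total = sum(self.weight_fractions.values())
        if abs(total - 1.0) > tol:
            raise ValueError(f"{self.name}: weight fractions sum to {total:.6f}, not 1")

# Illustrative only: fractions are placeholders, not compendium data.
concrete = Material(
    name="generic concrete (placeholder composition)",
    density_g_cm3=2.3,
    weight_fractions={"O": 0.53, "Si": 0.34, "Ca": 0.08, "H": 0.01, "Al": 0.04},
)
concrete.validate()
```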


IEEE Transactions on Nuclear Science | 2008

Coupling Deterministic and Monte Carlo Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

Leon E. Smith; Christopher J. Gesh; Richard T. Pagh; Erin A. Miller; Mark W. Shaver; Eric D. Ashbaker; Michael T. Batdorf; J. E. Ellis; William R. Kaye; Ronald J. McConn; George H. Meriwether; Jennifer Jo Ressler; Andrei B. Valsan; Todd A. Wareing

Simulation is often used to predict the response of gamma-ray spectrometers in technology viability and comparative studies for homeland and national security scenarios. Candidate radiation transport methods generally fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are the most heavily used in the detection community and are particularly effective for calculating pulse-height spectra in instruments. However, computational times for scattering- and attenuation-dominated problems can be extremely long, often many hours or more on a typical desktop computer. Deterministic codes that discretize the transport equation in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but pulse-height calculations are not readily accessible. This paper investigates a method for coupling angular flux data produced by a three-dimensional deterministic code to a Monte Carlo model of a gamma-ray spectrometer. Techniques used to mitigate ray effects, a potential source of inaccuracy in deterministic field calculations, are described. Strengths and limitations of the coupled methods, as compared to purely Monte Carlo simulations, are highlighted using example gamma-ray detection problems and two metrics: (1) accuracy when compared to empirical data and (2) computational time on a typical desktop computer.
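The abstract does not detail the coupling interface, but one common way to hand a deterministic angular flux to a Monte Carlo detector model is to sample Monte Carlo source particles from the flux binned in energy and angle on a coupling surface. The sketch below is a generic illustration of that idea with a made-up flux array; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical angular flux on a coupling surface, binned in energy group
# and discrete angle bin (arbitrary units).  Shape: (n_groups, n_angles).
flux = np.array([[0.2, 0.1, 0.05],
                 [0.4, 0.3, 0.15],
                 [0.1, 0.05, 0.02]])

group_edges_mev = np.array([0.1, 0.5, 1.0, 2.0])   # placeholder group boundaries
angle_cosines = np.array([0.95, 0.70, 0.30])        # placeholder bin-center cosines

# Normalize the binned flux into a discrete source probability table.
p = flux / flux.sum()

def sample_source_particles(n):
    """Sample (energy, direction cosine) pairs proportional to the binned flux."""
    flat = rng.choice(p.size, size=n, p=p.ravel())
    g, a = np.unravel_index(flat, p.shape)
    # Sample energies uniformly within each group as a simple approximation.
    energies = rng.uniform(group_edges_mev[g], group_edges_mev[g + 1])
    return energies, angle_cosines[a]

energies, mus = sample_source_particles(5)
print(energies, mus)
```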


IEEE Nuclear Science Symposium | 2006

Deterministic Transport Methods for the Simulation of Gamma-Ray Spectroscopy Scenarios

L. Eric Smith; Christopher J. Gesh; Richard T. Pagh; Ronald J. McConn; J. Edward Ellis; William R. Kaye; George H. Meriwether; Erin A. Miller; Mark W. Shaver; Jason R. Starner; Andrei B. Valsan; Todd A. Wareing

Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of deterministic algorithms for simulating gamma-ray spectroscopy scenarios. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. In this paper, the software framework aimed at addressing these challenges is described and results from test problems that compare deterministic and Monte Carlo approaches are provided.
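One of the listed challenges, computing a spectrometer response from a deterministic flux solution, amounts to folding the multigroup flux at the detector with a response matrix that maps incident energy groups to pulse-height channels. The sketch below shows that folding step in generic terms with made-up array shapes and values; it is not the algorithm developed in the paper.

```python
import numpy as np

# Hypothetical multigroup scalar flux at the detector location
# (particles / cm^2 / s per group); placeholder values.
flux = np.array([1.0e3, 5.0e2, 2.0e2, 8.0e1])          # 4 energy groups

# Hypothetical detector response matrix: counts per unit incident flux,
# mapping each incident energy group to pulse-height channels.
# Shape: (n_channels, n_groups).  Values are placeholders.
response = np.array([
    [0.02, 0.05, 0.08, 0.10],   # low channels: continuum contributions
    [0.01, 0.03, 0.05, 0.07],
    [0.10, 0.00, 0.00, 0.00],   # full-energy peak of group 0
    [0.00, 0.08, 0.00, 0.00],   # full-energy peak of group 1
])

# Predicted pulse-height spectrum is the response matrix folded with the flux.
pulse_height_spectrum = response @ flux
print(pulse_height_spectrum)
```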


Nuclear Technology | 2011

Bayesian Radiation Source Localization

Kenneth D. Jarman; Erin A. Miller; Richard S. Wittman; Christopher J. Gesh

Locating illicit radiological sources using gamma-ray or neutron detection is a key challenge for both homeland security and nuclear nonproliferation. Localization methods using an array of detectors or a sequence of observations in time and space must provide rapid results while accounting for a dynamic attenuating environment. In the presence of significant attenuation and scatter, more extensive numerical transport calculations in place of the standard analytical approximations may be required to achieve accurate results. Numerical adjoints based on deterministic transport codes provide relatively efficient detector response calculations needed to determine the most likely location of a true source given a set of observed count rates. Probabilistic representations account for uncertainty in the source location resulting from uncertainties in detector responses and the potential for nonunique solutions. A Bayesian approach improves on previous likelihood methods for source localization by allowing the incorporation of all available information to help constrain solutions. We present an approach to localizing radiological sources that uses numerical adjoints and a Bayesian formulation and demonstrate the approach on two simple example scenarios. Results indicate accurate estimates of source locations. We briefly study the effect of neglecting the contribution of all scattered radiation in the adjoints, as analytical transport approximations do, for a case with moderately attenuating material between detectors and sources. The source location accuracy of the uncollided-only solutions appears to be significantly worse at the source strength considered here, suggesting that the higher physical fidelity that is provided by full numerical adjoint-based solutions may provide an advantage in operational settings.
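The abstract describes the formulation only at a high level. As a generic illustration of Bayesian source localization, the sketch below computes a posterior over candidate source cells from Poisson-distributed detector counts, using a precomputed response matrix standing in for the adjoint-based detector responses. All arrays and values are hypothetical; this is not the authors' code.

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical detector responses: expected counts in each detector per unit
# source strength, for each candidate source cell (e.g., from adjoint solves).
# Shape: (n_detectors, n_cells).  Placeholder values.
R = np.array([[5.0, 1.0, 0.2],
              [1.5, 4.0, 0.8],
              [0.3, 1.2, 3.5]])

source_strength = 10.0               # assumed known, for simplicity
observed = np.array([48, 16, 5])     # hypothetical observed counts

# Uniform prior over candidate source cells.
prior = np.full(R.shape[1], 1.0 / R.shape[1])

# Likelihood of the observed counts for each candidate cell (independent Poisson).
expected = R * source_strength                      # (n_detectors, n_cells)
log_like = poisson.logpmf(observed[:, None], expected).sum(axis=0)

# Posterior by Bayes' rule, normalized over the candidate cells.
log_post = np.log(prior) + log_like
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior over cells:", post)
```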


Reliability Engineering & System Safety | 2006

Estimation procedures and error analysis for inferring the total plutonium (Pu) produced by a graphite-moderated reactor

Patrick G. Heasler; Tom Burr; Bruce D. Reid; Christopher J. Gesh; Charles K. Bayne

The graphite isotope ratio method (GIRM) is a technique that uses measurements and computer models to estimate total plutonium (Pu) production in a graphite-moderated reactor. First, isotopic ratios of trace elements in extracted graphite samples from the target reactor are measured. Then, computer models of the reactor relate those ratios to Pu production. Because Pu is controlled under non-proliferation agreements, an estimate of total Pu production is often required, and a declaration of total Pu might need to be verified through GIRM. In some cases, reactor information (such as core dimensions, coolant details, and operating history) is so well documented that computer models can predict total Pu production without the need for measurements. However, in most cases, reactor information is imperfectly known, so a measurement- and model-based method such as GIRM is essential. Here, we focus on GIRM's estimation procedure and its associated uncertainty. We illustrate a simulation strategy for a specific reactor that estimates GIRM's uncertainty and determines which inputs contribute most to that uncertainty, including inputs to the computer models. These models include a “local” code that relates isotopic ratios to local Pu production and a “global” code that predicts the Pu production shape over the entire reactor. This predicted shape is combined with other 3D basis functions to provide a “hybrid basis set” that is used to fit the local Pu production estimates. The fitted shape can then be integrated over the entire reactor to estimate total Pu production. This GIRM evaluation provides a good example of several techniques of uncertainty analysis and introduces new reasons to fit a function using basis functions when evaluating the impact of uncertainty in the true 3D shape.
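The fitting step the abstract describes, estimating a smooth production shape from sparse local estimates and then integrating it over the core, can be illustrated with an ordinary least-squares fit onto a basis set. The sketch below uses a made-up one-dimensional problem and arbitrary polynomial basis functions in place of the paper's hybrid basis; it is only meant to show the fit-then-integrate pattern, not GIRM itself.

```python
import numpy as np

# Hypothetical sample positions (normalized axial coordinate) and local
# Pu production estimates derived from measured isotope ratios (arbitrary units).
z = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
local_pu = np.array([0.8, 1.4, 1.6, 1.3, 0.7])

# Simple polynomial basis standing in for the paper's hybrid basis set.
def basis(z):
    return np.column_stack([np.ones_like(z), z, z**2])

# Fit basis coefficients to the local estimates by least squares.
coeffs, *_ = np.linalg.lstsq(basis(z), local_pu, rcond=None)

# Integrate the fitted shape over the whole core (here, z in [0, 1])
# to obtain a total-production estimate (trapezoidal rule).
z_fine = np.linspace(0.0, 1.0, 501)
fitted = basis(z_fine) @ coeffs
total_pu = np.sum(0.5 * (fitted[1:] + fitted[:-1]) * np.diff(z_fine))
print("fitted coefficients:", coeffs)
print("integrated total (arbitrary units):", total_pu)
```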


Archive | 2011

Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

Jonathan A. Kulisek; Kevin K. Anderson; Sonya M. Bowyer; Andrew M. Casella; Christopher J. Gesh; Glen A. Warren

Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing-Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report on PNNL's FY2011 analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used-fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used-fuel parameter space was demonstrated. Also in FY2011, PNNL continued to develop an analytical model. These efforts included adding six more non-fissile absorbers to the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (the sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as those of the pure empirical approach, and the hybrid approach determined total Pu with much better accuracy than the pure analytical approach. In FY2012, PNNL will continue efforts to optimize its empirical model and minimize its reliance on calibration data. In addition, PNNL will continue to develop the analytical model, considering effects such as neutron scattering in the fuel and cladding, as well as neutrons streaming through gaps between fuel pins in the fuel assembly.
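The report does not give the algorithm in detail, but the SVD step it describes, building a low-rank empirical basis for self-shielding functions from calibration assemblies, can be sketched generically. All arrays below are made-up placeholders; this is not PNNL's code.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical self-shielding functions for calibration assemblies.
# Each column is one assembly's self-shielding function tabulated over
# slowing-down-time (or energy) bins.  Placeholder values.
n_bins, n_assemblies = 200, 6
t = np.linspace(0.0, 1.0, n_bins)[:, None]
S = np.exp(-t * rng.uniform(1.0, 5.0, n_assemblies)) \
    + 0.02 * rng.standard_normal((n_bins, n_assemblies))

# SVD of the calibration matrix; the leading left singular vectors serve as
# empirical basis vectors for representing self-shielding.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
k = 3                              # number of basis vectors retained (assumed)
basis = U[:, :k]

# Represent a new assembly's self-shielding function in the empirical basis
# by projection (columns of U are orthonormal).
new_shielding = np.exp(-3.2 * t[:, 0]) + 0.02 * rng.standard_normal(n_bins)
coeffs = basis.T @ new_shielding
reconstruction = basis @ coeffs
print("rms reconstruction error:", np.sqrt(np.mean((reconstruction - new_shielding) ** 2)))
```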


IEEE Transactions on Nuclear Science | 2013

Assaying Used Nuclear Fuel Assemblies Using Lead Slowing-Down Spectroscopy and Singular Value Decomposition

Jonathan A. Kulisek; Kevin K. Anderson; Andrew M. Casella; Christopher J. Gesh; Glen A. Warren

This study investigates the use of a Lead Slowing-Down Spectrometer (LSDS) for the direct and independent measurement of fissile isotopes in light-water nuclear reactor fuel assemblies. The current study applies MCNPX, a Monte Carlo radiation transport code, to simulate the assay of used nuclear fuel assemblies in the LSDS. An empirical model has been developed based on the calibration of the LSDS to responses generated from the simulated assay of six well-characterized fuel assemblies. The effects of self-shielding are taken into account by using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the self-shielding functions from the assay of assemblies in the calibration set. The performance of the empirical algorithm was tested on version 1 of the Next-Generation Safeguards Initiative (NGSI) used fuel library consisting of 64 assemblies, as well as a set of 27 diversion assemblies, both of which were developed by Los Alamos National Laboratory. The potential for direct and independent assay of the sum of the masses of Pu-239 and Pu-241 to within 2%, on average, has been demonstrated.
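The mass estimate itself comes from calibrating the simulated LSDS response against the known fissile content of the calibration assemblies. As a generic illustration of that calibrate-then-predict step, the sketch below fits a linear model from response features (for example, coefficients on the empirical SVD basis) to fissile mass and applies it to a new assembly. The features and masses are invented placeholders, not values from the NGSI library or the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical response features for the calibration assemblies, e.g.
# coefficients of the assay response on the empirical SVD basis.
# Shape: (n_calibration_assemblies, n_features).  Placeholder values.
X_cal = rng.uniform(0.5, 2.0, size=(6, 3))

# Hypothetical known Pu-239 + Pu-241 masses (kg) of the calibration assemblies.
true_cal_mass = X_cal @ np.array([2.0, 0.5, 1.0]) + 0.05 * rng.standard_normal(6)

# Fit a linear calibration (with intercept) by least squares.
A = np.column_stack([np.ones(len(X_cal)), X_cal])
coeffs, *_ = np.linalg.lstsq(A, true_cal_mass, rcond=None)

# Predict the fissile mass of a new assembly from its response features.
x_new = np.array([1.2, 0.9, 1.6])                      # placeholder features
predicted_mass = np.concatenate([[1.0], x_new]) @ coeffs
print("predicted Pu-239 + Pu-241 mass (kg, illustrative):", predicted_mass)
```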


Archive | 2007

Determination of Light Water Reactor Fuel Burnup with the Isotope Ratio Method

David C. Gerlach; Mark R. Mitchell; Bruce D. Reid; Christopher J. Gesh; David E. Hurley

For the current project to demonstrate that isotope ratio measurements can be extended to zirconium alloys used in LWR fuel assemblies, we report new analyses of irradiated samples obtained from a reactor. Zirconium alloys are used for structural elements of fuel assemblies and for the fuel element cladding. This report covers new measurements made on irradiated and unirradiated zirconium alloys. Unirradiated zircaloy samples serve as reference samples and indicate starting (natural) values for the Ti isotope ratio measured. New measurements of irradiated samples include results for three samples provided by AREVA. The new results indicate: 1. Titanium isotope ratios were measured again in unirradiated samples to obtain reference or starting values at the same time the irradiated samples were analyzed; in particular, 49Ti/48Ti ratios were indistinguishably close to values determined several months earlier and to expected natural values. 2. 49Ti/48Ti ratios were measured in three irradiated samples thus far; they show marked departures from natural or initial ratios, well beyond analytical uncertainty, and the ratios vary with reported fluence values. The irradiated samples appear to have significant surface contamination or radiation damage, which required more time for SIMS analyses. 3. Other activated impurity elements still limit the sample size for SIMS analysis of irradiated samples. The sub-samples chosen for SIMS analysis, although smaller than optimal, were still analyzed successfully without violating the conditions of the applicable Radiological Work Permit.
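The connection between the measured 49Ti/48Ti ratio and fluence follows from neutron capture on 48Ti producing 49Ti, so the ratio grows with exposure. The sketch below evolves the ratio under a simple constant-flux two-isotope capture chain; the cross sections and abundances are rough placeholder values for illustration only and are not taken from the report.

```python
import numpy as np

# Rough placeholder values (check nuclear data libraries before using):
sigma48 = 7.9e-24    # cm^2, approximate thermal capture cross section of 48Ti
sigma49 = 2.0e-24    # cm^2, approximate thermal capture cross section of 49Ti
abund48, abund49 = 0.7372, 0.0541   # approximate natural abundances

def ti_ratio(fluence):
    """49Ti/48Ti ratio after a given thermal-neutron fluence (n/cm^2).

    Two-isotope capture chain 48Ti(n,g)49Ti(n,g)50Ti at constant flux,
    solved analytically (Bateman form with lambda_i = sigma_i * flux).
    """
    n48 = abund48 * np.exp(-sigma48 * fluence)
    n49 = (abund49 * np.exp(-sigma49 * fluence)
           + abund48 * sigma48 / (sigma49 - sigma48)
           * (np.exp(-sigma48 * fluence) - np.exp(-sigma49 * fluence)))
    return n49 / n48

for fluence in (0.0, 1e21, 5e21, 1e22):
    print(f"fluence {fluence:.1e} n/cm^2 -> 49Ti/48Ti = {ti_ratio(fluence):.4f}")
```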


Archive | 2014

Estimation of 240Pu Mass in a Waste Tank Using Ultra-Sensitive Detection of Radioactive Xenon Isotopes from Spontaneous Fission

Ted W. Bowyer; Christopher J. Gesh; Derek A. Haas; James C. Hayes; Lenna A. Mahoney; Joseph E. Meacham; Donaldo P. Mendoza; Khris B. Olsen; Amanda M. Prinke; Bruce D. Reid; Vincent T. Woods

We report on a technique to detect and quantify the amount of 240Pu in a large tank used to store nuclear waste from plutonium production at the Hanford nuclear site. While the contents of this waste tank are known from previous grab sample measurements, our technique could allow for determination of the amount of 240Pu in the tank without costly sample retrieval and analysis of this highly radioactive material. This technique makes an assumption, which was confirmed, that 240Pu dominates the spontaneous fissions occurring in the tank.
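The abstract does not give the quantitative link, but the basic inference is straightforward: at secular equilibrium, the activity of a short-lived xenon fission product equals the 240Pu spontaneous-fission rate multiplied by that isotope's cumulative fission yield. The sketch below does this arithmetic with approximate, clearly labeled placeholder constants; real values should come from nuclear data libraries and the report, not from this example.

```python
# Back-of-envelope link between 240Pu mass and radioxenon activity.
# All constants below are approximate placeholders.

SF_RATE_PER_GRAM = 480.0      # ~spontaneous fissions per second per gram of 240Pu (approx.)
XE133_CUM_YIELD = 0.05        # assumed cumulative fission yield of 133Xe (placeholder)

def xe133_equilibrium_activity(pu240_grams: float) -> float:
    """Equilibrium 133Xe decay rate (Bq) for a given 240Pu mass.

    At secular equilibrium the decay rate of a short-lived fission product
    equals its production rate: fission rate * cumulative yield.
    Ignores holdup, transport losses, and fissions from other isotopes.
    """
    fission_rate = SF_RATE_PER_GRAM * pu240_grams
    return fission_rate * XE133_CUM_YIELD

# Example: a hypothetical 100 g of 240Pu in the tank.
print(f"{xe133_equilibrium_activity(100.0):.1f} Bq of 133Xe at equilibrium (illustrative)")
```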


Archive | 2011

RADSAT Benchmarks for Prompt Gamma Neutron Activation Analysis Measurements

Kimberly A. Burns; Christopher J. Gesh

The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for the nondestructive determination of the elemental composition of unknown samples. High-resolution gamma-ray spectrometers are used in these applications to measure the spectrum of the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used simulation tool for this type of problem, but computational times can be prohibitively long. This work explores the use of multi-group deterministic methods for the simulation of coupled neutron-photon problems. The main purpose of this work is to benchmark several problems modeled with RADSAT and MCNP against experimental data. Additionally, the cross section libraries for RADSAT are updated to include ENDF/B-VII cross sections. Preliminary findings show promising results when compared to MCNP and experimental data, but also reveal areas where additional inquiry and testing are needed. The potential benefits and shortcomings of the multi-group-based approach are discussed in terms of accuracy and computational efficiency.
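The multi-group deterministic approach rests on collapsing continuous-energy cross sections into group constants by flux weighting, i.e. the group cross section is the integral of sigma(E)*phi(E) over the group divided by the integral of phi(E). The sketch below performs that collapse numerically on made-up data; the group structure and spectrum are placeholders, not the ENDF/B-VII libraries the report describes.

```python
import numpy as np

# Made-up continuous-energy data on a fine energy grid (placeholders).
energy = np.logspace(-2, 1, 500)                 # MeV
sigma = 2.0 / np.sqrt(energy) + 0.5              # cross-section shape (barns, illustrative)
phi = np.exp(-energy) * energy                   # weighting spectrum (arbitrary units)

# Placeholder group boundaries (MeV), low to high.
group_edges = np.array([0.01, 0.1, 1.0, 10.0])

def trapz(y, x):
    """Simple trapezoidal rule helper."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def collapse(energy, sigma, phi, edges):
    """Flux-weighted group cross sections: sigma_g = int(sigma*phi)dE / int(phi)dE."""
    sigma_g = np.empty(len(edges) - 1)
    for g, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        m = (energy >= lo) & (energy <= hi)
        sigma_g[g] = trapz(sigma[m] * phi[m], energy[m]) / trapz(phi[m], energy[m])
    return sigma_g

print("group cross sections (barns, illustrative):", collapse(energy, sigma, phi, group_edges))
```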

Collaboration


Dive into Christopher J. Gesh's collaborations.

Top Co-Authors

Bruce D. Reid, Pacific Northwest National Laboratory
David C. Gerlach, Pacific Northwest National Laboratory
David E. Hurley, Pacific Northwest National Laboratory
George H. Meriwether, Pacific Northwest National Laboratory
Richard T. Pagh, Pacific Northwest National Laboratory
Andrew M. Casella, Pacific Northwest National Laboratory
Kevin K. Anderson, Pacific Northwest National Laboratory
Leon E. Smith, Pacific Northwest National Laboratory
Erin A. Miller, Pacific Northwest National Laboratory