Richard Denning
Ohio State University
Publications
Featured research published by Richard Denning.
Reliability Engineering & System Safety | 2010
Benjamin Rutt; Kyle Metzroth; Aram Hakobyan; Tunc Aldemir; Richard Denning; Sean Dunagan; David Kunsman
Analysis of dynamic accident progression trees (ADAPT) is a mechanized procedure for the generation of accident progression event trees. Use of ADAPT substantially reduces the manual and computational effort for Level 2 probabilistic risk assessment (PRA) of nuclear power plants; reduces the likelihood of input errors; determines the order of events dynamically; and treats accidents in a phenomenologically consistent manner. ADAPT is based on the concept of dynamic event trees, which use explicit modeling of the deterministic dynamic processes that take place within the plant (through system simulation codes such as MELCOR or RELAP) for the modeling of stochastic system evolution. The computational infrastructure of ADAPT is presented, along with a prototype implementation of ADAPT using MELCOR for the PRA modeling of a station blackout in a pressurized water reactor. The computational infrastructure allows for flexibility in linking with different simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), and user-friendly graphical capabilities.
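The dynamic-event-tree idea above (branch points triggered by the simulated plant state rather than a prespecified event order) can be illustrated with a small sketch. The heat-up model, branching condition, and branch probabilities below are hypothetical stand-ins, not the ADAPT/MELCOR implementation:

```python
def simulate_step(state, heatup):
    """Toy stand-in for a system code (e.g., MELCOR): advance one time step."""
    return {"time": state["time"] + 1,
            "temp": state["temp"] + (50 if heatup else -20)}

def expand_det(state, prob, heatup=True, branched=False, results=None):
    """Expand one scenario; split into child scenarios at the branch point."""
    if results is None:
        results = []
    while state["time"] < 10:
        state = simulate_step(state, heatup)
        # The branching condition is determined dynamically by the simulated
        # state, not prespecified in a static event-tree ordering.
        if not branched and state["temp"] > 600:
            # Hypothetical branch: relief valve sticks open (p = 0.1)
            # versus recloses so that cooling resumes (p = 0.9).
            expand_det(dict(state), prob * 0.1, True, True, results)
            expand_det(dict(state), prob * 0.9, False, True, results)
            return results
    results.append((prob, state["temp"]))
    return results

scenarios = expand_det({"time": 0, "temp": 300}, 1.0)
# scenarios: [(0.1, 800), (0.9, 590)] -- probability and end-state temperature
```

Each leaf carries the product of the branch probabilities along its path, so the scenario probabilities sum to one over the tree.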
Reliability Engineering & System Safety | 2013
Diego Mandelli; Alper Yilmaz; Tunc Aldemir; Kyle Metzroth; Richard Denning
A challenging aspect of dynamic methodologies for probabilistic risk assessment (PRA), such as the Dynamic Event Tree (DET) methodology, is the large number of scenarios generated for a single initiating event. Organizing such a large amount of data and extracting useful information from it can be difficult. Furthermore, it is often not sufficient to merely calculate a quantitative value for the risk and its associated uncertainties. The development of risk insights that can increase system safety and improve system performance requires the interpretation of scenario evolutions and the principal characteristics of the events that contribute to the risk. For a given scenario dataset, it can be useful to identify the scenarios that have similar behaviors (i.e., identify the most evident classes) and to decide, for each event sequence, to which class it belongs (i.e., classification). It is shown how these two objectives can be accomplished using the Mean-Shift Methodology (MSM). The MSM is a kernel-based, non-parametric density estimation technique that is used to find the modes of an unknown data distribution. The algorithm developed finds the modes of the data distribution in the state space, corresponding to the regions with the highest data density, and groups the scenarios generated into clusters based on scenario temporal similarities. The MSM is illustrated using the data generated by a DET algorithm for the analysis of a simple level/temperature controller and a reactor vessel auxiliary cooling system.
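A minimal, pure-Python sketch of the mean-shift idea (Gaussian kernel, one-dimensional illustrative "scenario feature" data; not the paper's algorithm or dataset):

```python
import math

def mean_shift(points, bandwidth, iters=50):
    """Kernel-based mean shift: iteratively move each point toward the
    mode of the estimated density (Gaussian kernel)."""
    modes = list(points)
    for _ in range(iters):
        shifted = []
        for x in modes:
            weights = [math.exp(-((x - p) ** 2) / (2 * bandwidth ** 2))
                       for p in points]
            shifted.append(sum(w * p for w, p in zip(weights, points))
                           / sum(weights))
        modes = shifted
    return modes

# Two toy clusters of a one-dimensional scenario feature
# (e.g., a peak temperature extracted from each DET scenario).
data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9, 5.1]
modes = mean_shift(data, bandwidth=0.5)
labels = [0 if m < 3.0 else 1 for m in modes]   # cluster assignment
```

Points drawn toward the same density mode converge to (nearly) the same value, which yields both the modes and the classification in one pass.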
Challenges of Large Applications in Distributed Environments | 2006
Benjamin Rutt; Aram Hakobyan; Kyle Metzroth; Tunc Aldemir; Richard Denning; Sean Dunagan; David Kunsman
Level 2 probabilistic risk assessments of nuclear plants (analysis of radionuclide release from containment) may require hundreds of runs of severe accident analysis codes such as MELCOR or RELAP/SCDAP to analyze possible sequences of events (scenarios) that may follow given initiating events. With the advances in computer architectures and ubiquitous networking, it is now possible to utilize multiple computing and storage resources for such computational experiments. This paper presents a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. The infrastructure allows for (1) the testing of event tree completeness and (2) the assessment and propagation of uncertainty in the plant state in the quantification of event trees.
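The distributed execution pattern described above can be sketched as farming out independent scenario simulations to parallel workers. Here `run_scenario` is a toy stand-in for launching a severe accident code run, and the outcome rule is purely hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(scenario_id):
    """Pretend to run one severe-accident simulation and classify its outcome."""
    # A real worker would launch the simulator (e.g., MELCOR), monitor it,
    # and parse its output files; here we fake a containment-failure flag.
    containment_failed = scenario_id % 3 == 0   # hypothetical outcome rule
    return scenario_id, containment_failed

# Dispatch a dozen independent scenarios to a small worker pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_scenario, range(12)))

failed = [sid for sid, bad in results if bad]
```

`Executor.map` preserves input order, so results can be matched back to their event-tree branches regardless of which worker finished first.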
Reliability Engineering & System Safety | 2016
Dave Grabaskas; Marvin K. Nakayama; Richard Denning; Tunc Aldemir
Over the past two decades, U.S. nuclear power plant regulation has increasingly depended on best-estimate plus uncertainty safety analyses. As a result of the shift to best-estimate analyses, the distribution of the output metric must be compared against a regulatory goal, rather than a single, conservative value. This comparison has historically been conducted using a 95% one-sided confidence interval for the 0.95-quantile of the output distribution, which is usually found following the technique of simple random sampling using order statistics (SRS-OS). While SRS-OS has certain statistical advantages, there are drawbacks related to the available sampling schemes and the accuracy and precision of the resulting value. Recent work has shown that it is possible to establish asymptotically valid confidence intervals for a quantile of the output of a model simulated using variance reduction techniques (VRTs). These VRTs can provide more informative results than SRS-OS. This work compares SRS-OS and the VRTs of antithetic variates and Latin hypercube sampling through several experiments, designed to replicate conditions found in nuclear safety analyses. This work is designed as an initial investigation into the use of VRTs as a tool to satisfy nuclear regulatory requirements, with hope of expanded analyses of VRTs in the future.
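The SRS-OS approach referenced above rests on order-statistic (Wilks-type) tolerance limits. For instance, the smallest sample size for which the sample maximum serves as a 95%-confidence upper bound on the 0.95-quantile follows from requiring 1 − p^n ≥ γ:

```python
import math

def min_samples_first_order(p=0.95, conf=0.95):
    """Smallest n such that the largest of n random samples is an upper
    confidence bound for the p-quantile at level conf: 1 - p**n >= conf."""
    return math.ceil(math.log(1.0 - conf) / math.log(p))

n = min_samples_first_order()   # the classic 95/95 result: n = 59
```

The variance reduction techniques compared in the paper aim to give tighter, more informative bounds than this simple-random-sampling baseline.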
Reliability Engineering & System Safety | 2018
Yi Xie; Jinsuo Zhang; Tunc Aldemir; Richard Denning
Although stainless steels (SSs) have excellent general corrosion resistance, they are nevertheless susceptible to pitting corrosion. The variation of pit depth and density is significant for predicting the likelihood of corrosion damage occurring in service. Among the available pitting corrosion models, it is difficult to identify a specific model capable of characterizing all the observed pit formation processes and of estimating the evolution of the pit density distribution with time. A physics-based multi-state Markov model giving a full description of pitting corrosion states is presented. The transition rates used in the model are determined by fitting the model to experimental data. The variation of pit depth and density is simulated, and the simulation is verified against experimental scenarios of SS exposed to chloride-containing environments.
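The multi-state structure can be sketched with a toy three-state version (passive site, metastable pit, stable pit); the transition rates below are illustrative placeholders, not the fitted values from the paper:

```python
# States: 0 = passive surface site, 1 = metastable pit, 2 = stable (growing) pit.
# Illustrative transition rates per unit time (NOT the paper's fitted values).
L_NUC = 0.10    # passive -> metastable (pit nucleation)
L_REP = 0.30    # metastable -> passive (repassivation)
L_STA = 0.05    # metastable -> stable  (pit stabilization)

def evolve(p, dt=0.01, t_end=100.0):
    """Integrate the Markov master equations with forward Euler steps."""
    for _ in range(int(t_end / dt)):
        p0, p1, p2 = p
        d0 = -L_NUC * p0 + L_REP * p1
        d1 = L_NUC * p0 - (L_REP + L_STA) * p1
        d2 = L_STA * p1
        p = [p0 + dt * d0, p1 + dt * d1, p2 + dt * d2]
    return p

probs = evolve([1.0, 0.0, 0.0])   # all sites start passive
# probs[2] is the expected fraction of sites with a stable pit at t_end.
```

Because the stable-pit state is absorbing, the stable-pit fraction grows monotonically, which is how such a model tracks the evolution of pit density with time.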
Archive | 2012
Hans Ludewig; Dana Auburn Powers; John C. Hewson; Jeffrey L. LaChance; Art Wright; Jesse Phillips; R. Zeyen; B. Clement; Frank Garner; Leon Walters; Steve Wright; Larry J. Ott; Ahti Jorma Suo-Anttila; Richard Denning; Hiroyuki Ohshima; Shuji Ohno; S. Miyhara; Abdellatif M. Yacout; M. T. Farmer; D. Wade; C. Grandy; R. Schmidt; J. Cahalen; Tara Jean Olivier; Robert J. Budnitz; Yoshiharu Tobita; Frederic Serre; Ken Natesan; Juan J. Carbajo; Hae-Yong Jeong
Expert panels of subject matter experts from the U.S. National Laboratories (SNL, ANL, INL, ORNL, LBL, and BNL), universities (University of Wisconsin and Ohio State University), international agencies (IRSN, CEA, JAEA, KAERI, and JRC-IE), and private consulting companies (Radiation Effects Consulting) were assembled to perform a gap analysis for sodium fast reactor licensing. Expert-opinion elicitation was performed to qualitatively assess the current state of sodium fast reactor technologies. Five independent gap analyses were performed, resulting in the following topical reports: (1) Accident Initiators and Sequences (i.e., Initiators/Sequences Technology Gap Analysis), (2) Sodium Technology Phenomena (i.e., Advanced Burner Reactor Sodium Technology Gap Analysis), (3) Fuels and Materials (i.e., Sodium Fast Reactor Fuels and Materials: Research Needs), (4) Source Term Characterization (i.e., Advanced Sodium Fast Reactor Accident Source Terms: Research Needs), and (5) Computer Codes and Models (i.e., Sodium Fast Reactor Gaps Analysis of Computer Codes and Models for Accident Analysis and Reactor Safety). Volume II of the Sodium Research Plan consolidates the five gap analysis reports produced by each expert panel, wherein the importance of the identified phenomena and the necessity of further experimental research and code development were addressed. The findings from these five reports formed the basis for the analysis in Sodium Fast Reactor Research Plan Volume I.
Archive | 2008
David Kunsman; Tunc Aldemir; Benjamin Rutt; Kyle Metzroth; Richard Denning; Aram Hakobyan; Sean Dunagan
This LDRD project has produced a tool that makes probabilistic risk assessments (PRAs) of nuclear reactors - analyses which are very resource intensive - more efficient. PRAs of nuclear reactors are being increasingly relied on by the United States Nuclear Regulatory Commission (U.S.N.R.C.) for licensing decisions for current and advanced reactors. Yet, PRAs are produced much as they were 20 years ago. The work here applied a modern systems analysis technique to the accident progression analysis portion of the PRA; the technique was a system-independent multi-task computer driver routine. Initially, the objective of the work was to fuse the accident progression event tree (APET) portion of a PRA to the dynamic system doctor (DSD) created by Ohio State University. Instead, during the initial efforts, it was found that the DSD could be linked directly to a detailed accident progression phenomenological simulation code - the type on which APET construction and analysis relies, albeit indirectly - and thereby directly create and analyze the APET. The expanded DSD computational architecture and infrastructure that was created during this effort is called ADAPT (Analysis of Dynamic Accident Progression Trees). ADAPT is a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. A simulator abstraction layer was developed, and a generic driver was implemented for executing simulators on a distributed environment. As a demonstration of the use of the methodological tool, ADAPT was applied to quantify the likelihood of competing accident progression pathways occurring for a particular accident scenario in a particular reactor type using MELCOR, an integrated severe accident analysis code developed at Sandia. (ADAPT was intentionally created with flexibility, however, and is not limited to interacting with only one code. 
With minor coding changes to input files, ADAPT can be linked to other such codes.) The results of this demonstration indicate that the approach can significantly reduce the resources required for Level 2 PRAs. From the phenomenological viewpoint, ADAPT can also treat the associated epistemic and aleatory uncertainties. This methodology can also be used for analyses of other complex systems. Any complex system can be analyzed using ADAPT if the workings of that system can be displayed as an event tree, there is a computer code that simulates how those events could progress, and that simulator code has switches to turn system events, phenomena, etc. on and off. Applying ADAPT to particular problems is not fully automated: while the human resources required for the creation and analysis of the accident progression are significantly decreased, knowledgeable analysts are still necessary to apply ADAPT successfully to a given project. This research and development effort has met its original goals and then exceeded them.
Volume 2: Fuel Cycle and High Level Waste Management; Computational Fluid Dynamics, Neutronics Methods and Coupled Codes; Student Paper Competition | 2008
Margaret Mkhosi; Richard Denning; Audeen W. Fentiman
The computational fluid dynamics code FLUENT has been used to analyze turbulent fluid flow over pebbles in a pebble bed modular reactor. The objective of the analysis is to evaluate the capability of various RANS turbulence models to predict mean velocities, turbulent kinetic energy, and turbulence intensity inside the bed. The code was run using three RANS turbulence models: standard k-ε, standard k-ω, and the Reynolds stress model (RSM), at turbulent Reynolds numbers corresponding to normal operation of the reactor. For the k-ε turbulence model, the analyses were performed at a range of Reynolds numbers between 1300 and 22,000 based on the approach velocity and the sphere diameter of 6 cm. Predictions of the mean velocities, turbulent kinetic energy, and turbulence intensity for the three models are compared at a Reynolds number of 5500. A unit-cell approach is used, and the fluid flow domain consists of three unit cells. The packing of the pebbles is an orthorhombic arrangement consisting of seven layers of pebbles with the mean flow parallel to the z-axis. For each Reynolds number analyzed, the velocity is observed to accelerate to twice the inlet velocity within the pebble bed. From the velocity contours, it can be seen that the flow reaches an asymptotic behavior by the end of the first unit cell. The velocity vectors for the standard k-ε and the Reynolds stress model show similar patterns for the Reynolds number analyzed; for the standard k-ω, the vectors differ from the other two. Secondary flow structures are observed for the standard k-ω after the flow passes through the gap between spheres; this feature is not observable for either the standard k-ε or the RSM. Analysis of the turbulent kinetic energy contours shows that there is higher turbulence kinetic energy near the inlet than inside the bed. As the Reynolds number increases, kinetic energy inside the bed increases.
The turbulent kinetic energy values obtained for the standard k-ε and the RSM are similar, showing a maximum turbulence kinetic energy of 7.5 m²·s⁻², whereas the standard k-ω shows a maximum of about 20 m²·s⁻². Another observation is that the turbulence intensity is spread throughout the flow domain for the k-ε and RSM, whereas for the k-ω, the intensity is concentrated at the front of the second sphere. Preliminary analysis of the pressure drop using the standard k-ε model at various velocities shows that the pressure drop varies with velocity as V^1.76.
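The reported pressure-velocity dependence (ΔP ∝ V^1.76) is the slope of a log-log fit. A sketch with synthetic, noise-free data (the coefficient is an arbitrary illustrative value, not a result from the study) shows how such an exponent is recovered:

```python
import math

# Synthetic (velocity, pressure-drop) pairs generated from dP = C * V**1.76,
# with C = 10.0 chosen arbitrarily for illustration.
velocities = [0.5, 1.0, 2.0, 4.0, 8.0]
pressure_drops = [10.0 * v ** 1.76 for v in velocities]

# Least-squares slope in log-log space gives the power-law exponent.
xs = [math.log(v) for v in velocities]
ys = [math.log(dp) for dp in pressure_drops]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
exponent = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))
# exponent recovers 1.76 for this noise-free data
```

With experimental data the same fit would carry scatter, and the exponent would come with a confidence interval rather than an exact value.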
Archive | 2015
Tunc Aldemir; Richard Denning; Stephen Unwin
Reduction in safety margin can be expected as passive structures and components that cannot be readily replaced undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the effectiveness of aging management strategies in maintaining safety margins. The objective of this project is to develop a methodology to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as that being developed under the New Generation System Code program (also known as R-7). A methodology will be developed for the identification of risk-significant passive component failure modes. Using surrogates, this methodology will need to take account of the extent to which the affected structures and components are currently modeled in the PRA, the importance of components and systems (using Risk Achievement Worth, as well as other importance measures) for the current plant state, anticipated rates of degradation, and the effectiveness of surveillance in identifying degradation. Where the component failure has no identifiable surrogate, it will be added to the model for the purposes of the importance analysis using dynamic PRA techniques, such as the ADAPT methodology, which uses DETs coupled to a system code (e.g., R-7) to model possible differences in system evolution due to uncertainties. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework. The DETs are similar to conventional event trees except that the order of the events is not prespecified by the user but rather determined through the system code. Maintenance can be accounted for in a coherent fashion.
Hybrid component reliability models that have physics-based prior structures but allow the use of service data (flaw identification, leaks, ruptures) will be developed for the estimation of the model parameters in a Bayesian framework. Such a hybrid would be able to take advantage of both physics models/data and operational service data. The framework developed will accommodate the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. A methodology will be developed for the incorporation of component aging models into the R-7 environment. This integration will be structured to conform to the epistemic/aleatory sampling structure of R-7 which would partly include the ADAPT approach to the quantification of some of the modeling uncertainties.
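The Bayesian estimation step described above can be sketched with the simplest conjugate case: a gamma prior on a constant failure rate (standing in for a physics-based prior) updated with Poisson-distributed service data. All numbers below are illustrative, not values from the project:

```python
# Gamma prior on the failure rate lambda (events per component-year); the
# prior parameters stand in for a physics-informed prior, and the service
# data (flaw/leak counts over accumulated exposure) are invented.
alpha_prior, beta_prior = 2.0, 1000.0   # ~2 events per 1000 component-years

failures = 3        # flaws/leaks observed in service
exposure = 500.0    # accumulated component-years of operation

# Conjugate gamma-Poisson update: counts add to alpha, exposure adds to beta.
alpha_post = alpha_prior + failures
beta_post = beta_prior + exposure
posterior_mean = alpha_post / beta_post   # posterior mean failure rate
```

The posterior blends the physics-informed prior with operating experience exactly as the hybrid framework intends; richer models (flaw growth, leak-before-rupture) would replace the conjugate update with numerical Bayesian inference.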
Nuclear Technology | 2014
Acacia Brunett; Richard Denning; Tunc Aldemir
The risk-dominant containment failure modes of a pressurized water reactor are reassessed using the current state of knowledge for the phenomena that contribute to these failure modes. Our review concludes that some mechanisms that were considered as having the potential to result in containment failure at the time of NUREG-1150, such as in-vessel steam explosions and vessel launch (i.e., the alpha-mode containment failure), have subsequently undergone sufficient review and can be excluded from further consideration. For other phenomena, such as high-pressure melt ejection (HPME) and combustible gas explosions, our review concludes that substantial uncertainties still exist with regard to modeling in system level codes; for combustion events, careful consideration is still required when making severe accident management decisions. With regard to HPME, sensitivity studies have been performed with the MELCOR computer code to address the effects of modeling uncertainties on containment loading. Sensitivity studies using MELCOR have also been performed with regard to combustion events to examine gas generation, the effect of containment cooling on the potential for deflagrations, and the combustion load on containment. Combustion loads are compared to the NUREG-1150 containment fragility curve to assess the likelihood of containment failure. Our MELCOR analyses agree with the NUREG-1150 assumption that insufficient hydrogen is generated in-vessel to result in containment failure. Sensitivity studies regarding the rate and timing of reflooding a degraded core do not indicate a significant effect on hydrogen production in-vessel or a significant challenge to containment integrity regarding HPME. However, it is observed that recovery actions resulting in cooling of the containment atmosphere could result in deinerting the containment and lead to a sufficiently energetic combustion event that can fail the containment.