Dana Kelly
Idaho National Laboratory
Publications
Featured research published by Dana Kelly.
Reliability Engineering & System Safety | 1998
Nathan Siu; Dana Kelly
Bayesian statistical methods are widely used in probabilistic risk assessment (PRA) because of their ability to provide useful estimates of model parameters when data are sparse and because the subjective probability framework, from which these methods are derived, is a natural framework to address the decision problems motivating PRA. This paper presents a tutorial on Bayesian parameter estimation especially relevant to PRA. It summarizes the philosophy behind these methods, approaches for constructing likelihood functions and prior distributions, some simple but realistic examples, and a variety of cautions and lessons regarding practical applications. References are also provided for more in-depth coverage of various topics.
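As a minimal illustration of the kind of Bayesian parameter estimation the tutorial covers (not code from the paper), the sketch below updates a conjugate beta prior for a binomial failure probability p with hypothetical demand and failure counts; the prior parameters and the data are assumptions made for illustration only.

```python
# Minimal sketch of conjugate Bayesian updating for a binomial failure
# probability p. The Jeffreys prior and the data (3 failures in 150 demands)
# are hypothetical, not taken from the paper.
from scipy import stats

a0, b0 = 0.5, 0.5           # Jeffreys beta prior for p
failures, demands = 3, 150  # hypothetical demand/failure counts

a_post, b_post = a0 + failures, b0 + demands - failures
posterior = stats.beta(a_post, b_post)

print("posterior mean of p:", posterior.mean())
print("90% credible interval:", posterior.interval(0.90))
```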
Reliability Engineering & System Safety | 2009
Dana Kelly; Curtis Smith
Markov chain Monte Carlo (MCMC) approaches to sampling directly from the joint posterior distribution of aleatory model parameters have led to tremendous advances in Bayesian inference capability in a wide variety of fields, including probabilistic risk analysis. The advent of freely available software coupled with inexpensive computing power has catalyzed this advance. This paper examines where the risk assessment community is with respect to implementing modern computational-based Bayesian approaches to inference. Through a series of examples in different topical areas, it introduces salient concepts and illustrates the practical application of Bayesian inference via MCMC sampling to a variety of important problems.
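To make the idea of sampling directly from a joint posterior concrete, here is a small, self-contained random-walk Metropolis sampler for a Poisson rate with a gamma prior, chosen so the exact conjugate posterior is available as a check. The data and prior are hypothetical, and the paper's own examples use WinBUGS rather than a hand-coded sampler.

```python
# Hand-rolled random-walk Metropolis sampler for a Poisson rate lambda with a
# gamma(alpha0, beta0) prior; data and prior are hypothetical. Illustrates MCMC
# sampling from a posterior; the paper's examples use WinBUGS instead.
import numpy as np

rng = np.random.default_rng(1)
events, exposure_time = 2, 1000.0   # hypothetical: 2 events in 1000 reactor-years
alpha0, beta0 = 0.5, 100.0          # hypothetical gamma prior

def log_post(lam):
    if lam <= 0:
        return -np.inf
    log_prior = (alpha0 - 1) * np.log(lam) - beta0 * lam
    log_like = events * np.log(lam * exposure_time) - lam * exposure_time
    return log_prior + log_like

samples, lam = [], 1e-3
for _ in range(20000):
    prop = lam + rng.normal(scale=1e-3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop
    samples.append(lam)

print("MCMC posterior mean of lambda:", np.mean(samples[5000:]))
# Conjugate check: exact posterior is gamma(alpha0 + events, beta0 + exposure_time)
print("exact posterior mean:        ", (alpha0 + events) / (beta0 + exposure_time))
```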
Reliability Engineering & System Safety | 2014
Matthias C. M. Troffaes; Gm Gero Walter; Dana Kelly
In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.
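The posterior lower and upper expectations in an imprecise Dirichlet model have a simple closed form, so a short sketch can illustrate the role of the learning parameter s. The event counts and prior bounds below are hypothetical, not values from the paper, and the formula used is the standard imprecise-Dirichlet updating rule rather than a reproduction of the paper's full analysis.

```python
# Sketch of posterior lower/upper expectations for alpha-factors under an
# imprecise Dirichlet model with learning parameter s, using the usual rule
# E[alpha_k | n] in [(n_k + s*l_k)/(n + s), (n_k + s*u_k)/(n + s)].
# Counts and prior bounds are hypothetical.
import numpy as np

n = np.array([40, 3, 1])               # hypothetical counts of 1-, 2-, 3-component events
lower = np.array([0.80, 0.01, 0.005])  # hypothetical prior lower expectations
upper = np.array([0.98, 0.15, 0.05])   # hypothetical prior upper expectations

for s in (1, 5, 10):                   # learning parameter values in the range the paper discusses
    post_lower = (n + s * lower) / (n.sum() + s)
    post_upper = (n + s * upper) / (n.sum() + s)
    print(f"s={s}:")
    for k, (lo, hi) in enumerate(zip(post_lower, post_upper), start=1):
        print(f"  alpha_{k}: [{lo:.3f}, {hi:.3f}]")
```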
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability | 2008
Dm Rasmuson; Dana Kelly
This paper reviews the basic concepts of modelling common-cause failures (CCFs) in reliability and risk studies and then applies these concepts to the treatment of CCF in event assessment. The cases of a failed component (with and without shared CCF potential) and a component being unavailable due to preventive maintenance or testing are addressed. The treatment of two related failure modes (e.g. failure to start and failure to run) is a new feature of this paper, as is the treatment of asymmetry within a common-cause component group.
Reliability Engineering & System Safety | 2011
Dana Kelly; Corwin L. Atwood
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
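The paper's least-squares construction of the prior is not reproduced here, but the Dirichlet-multinomial update that such a prior feeds into is easy to sketch. In the sketch below, the prior mean alpha-factors, the small total concentration (an assumption standing in for the paper's construction, chosen only to keep the prior diffuse), and the event counts are all hypothetical.

```python
# Conjugate Dirichlet-multinomial updating for alpha-factors. The prior means,
# the small total concentration (a stand-in for the paper's least-squares
# construction), and the event counts are hypothetical.
import numpy as np

prior_means = np.array([0.95, 0.04, 0.01])  # hypothetical mean alpha-factors
concentration = 2.0                         # assumed small concentration -> diffuse, responsive prior
prior = concentration * prior_means

counts = np.array([20, 1, 0])               # hypothetical 1-, 2-, 3-component event counts
posterior = prior + counts                  # conjugate Dirichlet update

print("posterior mean alpha-factors:", np.round(posterior / posterior.sum(), 4))
```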
Reliability Engineering & System Safety | 2009
Corwin L. Atwood; Dana Kelly
The binomial failure rate (BFR) common-cause model was introduced in the 1970s, but has not been used much recently. It turns out to be very easy to use with WinBUGS, a free, widely used Markov chain Monte Carlo (MCMC) program for Bayesian estimation. This fact recommends it in situations when failure data are available, especially when few failures have been observed. This article explains how to use it both for standby equipment that may fail to operate when demanded and for running equipment that may fail at random times. Example analyses are given and discussed.
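A small forward simulation shows the structure of the binomial failure rate model for running equipment: each component fails independently at some rate, and common-cause shocks arrive at a separate rate, with each shock failing each component independently with probability p. The parameter values below are hypothetical, and the article's actual analyses are Bayesian updates carried out in WinBUGS rather than simulation.

```python
# Forward simulation of the binomial failure rate (BFR) model for a group of m
# running components: independent failures at rate lam per component, plus
# common-cause shocks at rate mu, each shock failing each component with
# probability p. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
m, lam, mu, p, T = 3, 1e-4, 1e-5, 0.5, 1e6  # hypothetical parameters; T = observation time in hours

n_indep = rng.poisson(lam * T, size=m).sum()         # independent failures across the group
n_shocks = rng.poisson(mu * T)                       # number of common-cause shocks
shock_failures = rng.binomial(m, p, size=n_shocks)   # components failed by each shock

print("independent failures:", n_indep)
print("shocks and failures per shock:", n_shocks, shock_failures)
print("multi-component (>=2) shock events:", (shock_failures >= 2).sum())
```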
2007 ASME International Mechanical Engineering Congress and Exposition, Seattle, Washington, November 11-15, 2007 | 2007
Dana Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
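A minimal sketch of the copula idea described above: draw correlated uniforms through a Gaussian copula and map them to exponential failure times, so that the marginal failure behaviour and the dependence structure are specified separately. The correlation, failure rates, and sample size are hypothetical, and the sketch uses Python rather than the R and WinBUGS tools used in the paper.

```python
# Gaussian-copula sampling of dependent failure times for two redundant
# components with exponential marginals; correlation and rates are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho = 0.6                       # hypothetical copula correlation between the two components
rates = np.array([1e-4, 1e-4])  # hypothetical failure rates (per hour)

cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
u = stats.norm.cdf(z)             # correlated uniforms (the Gaussian copula)
t = -np.log(1.0 - u) / rates      # inverse-CDF map to exponential failure times

rho_s, _ = stats.spearmanr(t[:, 0], t[:, 1])
print("rank correlation of failure times:", rho_s)
print("P(both fail within 5000 h):", np.mean((t < 5000).all(axis=1)))
```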
Archive | 2011
Dana Kelly; Curtis Smith
Sometimes a parameter in an aleatory model, such as p in the binomial distribution or λ in the Poisson distribution, can be affected by observable quantities such as pressure, mass, or temperature. For example, in the case of a pressure vessel, very high pressure and high temperature may be leading indicators of failures. In such cases, information about the explanatory variables can be used in the Bayesian inference paradigm to inform the estimates of p or λ. We have already seen examples of this in Chap. 5, where we modeled the influence of time on p and λ via logistic and loglinear regression models, respectively. In this chapter, we extend this concept to more complex situations, such as a Bayesian regression approach that estimates the probability of O-ring failure in the solid-rocket booster motors of the space shuttle.
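A compact sketch of the logistic-regression idea described in this chapter, fitted here by a simple maximum a posteriori estimate rather than full MCMC for brevity. The temperature and failure data below are hypothetical stand-ins, not the shuttle O-ring data analyzed in the chapter, and the weak normal priors on the coefficients are assumptions.

```python
# Bayesian logistic regression sketch: failure probability on demand as a
# function of an explanatory variable (temperature). MAP fit with weak normal
# priors; the data are hypothetical, not the O-ring data from the chapter.
import numpy as np
from scipy.optimize import minimize

temp = np.array([53., 57., 63., 66., 70., 75., 78., 81.])  # hypothetical temperatures (F)
fail = np.array([1,   1,   1,   0,   1,   0,   0,   0])    # hypothetical failure indicators

def neg_log_post(beta):
    b0, b1 = beta
    logit = b0 + b1 * temp
    log_like = np.sum(fail * logit - np.logaddexp(0.0, logit))  # Bernoulli log-likelihood
    log_prior = -0.5 * (b0**2 + b1**2) / 100.0                  # weak normal(0, 10^2) priors
    return -(log_like + log_prior)

fit = minimize(neg_log_post, x0=[0.0, 0.0])
b0, b1 = fit.x
p_cold = 1.0 / (1.0 + np.exp(-(b0 + b1 * 31.0)))  # extrapolation to a cold demand at 31 F
print("MAP coefficients:", b0, b1)
print("estimated failure probability at 31 F:", p_cold)
```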
Reliability Engineering & System Safety | 2010
Curtis Smith; Dana Kelly; Homayoon Dezfuli
Bayesian inference techniques play a central role in modern risk and reliability evaluations of complex engineering systems. These techniques allow the system performance data and any relevant associated information to be used collectively to calculate the probabilities of various types of hypotheses that are formulated as part of reliability assurance activities. This paper proposes a methodology based on Bayesian hypothesis testing to determine the number of tests that would be required to demonstrate that a system-level reliability target is met with a specified probability level. Recognizing that full-scale testing of a complex system is often not practical, testing schemes are developed at the subsystem level to achieve the overall system reliability target. The approach uses network modeling techniques to transform the topology of the system into logic structures consisting of series and parallel subsystems. The paper addresses the consideration of cost in devising subsystem level test schemes. The developed techniques are demonstrated using several examples. All analyses are carried out using the Bayesian analysis tool WinBUGS, which uses Markov chain Monte Carlo simulation methods to carry out inference over the network.
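The core calculation behind such test planning is easy to sketch for a single subsystem: with a beta prior on reliability, find the smallest number of failure-free tests for which the posterior probability that reliability exceeds the target reaches the required assurance level. The prior, target, and assurance level below are hypothetical, and the paper itself works with full system logic models in WinBUGS rather than this single-subsystem shortcut.

```python
# Bayesian demonstration-test sizing for one subsystem: smallest number n of
# failure-free tests such that P(R > target | n successes) >= assurance, with a
# beta prior on reliability R. Prior, target, and assurance are hypothetical.
from scipy import stats

a0, b0 = 1.0, 1.0           # hypothetical uniform prior on reliability
target, assurance = 0.95, 0.90

n = 0
while True:
    posterior = stats.beta(a0 + n, b0)    # posterior after n successes, 0 failures
    if posterior.sf(target) >= assurance:  # P(R > target)
        break
    n += 1

print("failure-free tests required:", n)
```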
Reliability Engineering & System Safety | 1992
Dana Kelly
This note presents a calculation and discussion of the probability of operator failure to inject boron with the standby liquid control (SLC) system during an anticipated transient without scram (ATWS) at a boiling water reactor (BWR). Calculated results are compared to the value used in some past risk assessments, as well as with the estimated SLC hardware unavailability, to determine if the assertion that the SLC hardware unavailability dominates the human error probability is reasonable. The sensitivity of this calculation to the value used for the suppression pool heat capacity temperature limit is also examined.