Richard R. Picard
Los Alamos National Laboratory
Publications
Featured research published by Richard R. Picard.
Journal of the American Statistical Association | 1984
Richard R. Picard; R. Dennis Cook
A methodology for assessing the predictive ability of regression models is presented. Attention is given to models obtained via subset selection procedures, which are extremely difficult to evaluate by standard techniques. Cross-validatory assessments of predictive ability are obtained and their use is illustrated in examples.
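To make the idea concrete, here is a minimal sketch of cross-validatory assessment for a subset-selected regression model: selection is done on one half of the data, and predictive ability is judged on the held-out half. The data, the candidate subsets, and the 50/50 split are illustrative assumptions, not details from the paper.

```python
# Cross-validatory assessment of a subset-selected regression model.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)

train, hold = np.arange(n) < n // 2, np.arange(n) >= n // 2

def fit_mse(cols, rows_fit, rows_eval):
    """OLS fit on rows_fit restricted to `cols`; MSE on rows_eval."""
    A = np.column_stack([np.ones(rows_fit.sum()), X[rows_fit][:, cols]])
    beta, *_ = np.linalg.lstsq(A, y[rows_fit], rcond=None)
    B = np.column_stack([np.ones(rows_eval.sum()), X[rows_eval][:, cols]])
    return np.mean((y[rows_eval] - B @ beta) ** 2)

# Subset selection on the training half only: best 2-variable model by fit.
subsets = list(combinations(range(p), 2))
best = min(subsets, key=lambda c: fit_mse(list(c), train, train))

# The apparent (resubstitution) error is optimistic; the holdout half
# gives an honest estimate of predictive ability.
print("selected:", best)
print("apparent MSE:", fit_mse(list(best), train, train))
print("holdout  MSE:", fit_mse(list(best), train, hold))
```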
Journal of Quality Technology | 1991
Kenneth N. Berk; Richard R. Picard
Experimental designs used in industry often allow no degrees of freedom for estimating error. Nevertheless, analysis of variance results, if used properly, can determine which factors are significant. We give a back-of-the-envelope calculation…
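The abstract is truncated, so as a stand-in, here is one standard back-of-the-envelope calculation for unreplicated two-level designs: Lenth's pseudo standard error. It is shown for illustration only and is not necessarily the calculation the paper proposes.

```python
# Lenth's pseudo standard error (PSE): judges effect significance in an
# unreplicated design without any degrees of freedom for error.
import numpy as np

def lenth_pse(effects):
    """Pseudo standard error from effect estimates (no error df needed)."""
    effects = np.asarray(effects, dtype=float)
    s0 = 1.5 * np.median(np.abs(effects))
    trimmed = effects[np.abs(effects) < 2.5 * s0]   # drop likely-active effects
    return 1.5 * np.median(np.abs(trimmed))

# Hypothetical effect estimates from a 2^4 design (15 effects).
effects = np.array([11.2, -0.4, 0.7, 9.8, 0.3, -0.9, 0.2,
                    -5.1, 0.5, -0.2, 0.8, 0.1, -0.6, 0.4, 0.3])
pse = lenth_pse(effects)
# As a rough rule, effects exceeding about 2 * PSE in magnitude are flagged.
print([i for i, e in enumerate(effects) if abs(e) > 2.0 * pse])
```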
IEEE Engineering in Medicine and Biology Magazine | 2004
D. W. Forslund; Edward L. Joyce; Tom Burr; Richard R. Picard; Doug Wokoun; Edith Umland; Judith Brillman; Philip Froman; Fred Koster
Clearly, there is a need for data at multiple levels and many locations. Investigations of possible cases and outbreaks must occur locally, and local data must be available as generated to healthcare providers and emergency responders. At the same time, regional, national, and international authorities need aggregated data to understand the scope of an outbreak and to assist in the response. Thus, in comparison to a system in which data are sent to a central facility for aggregation and then redistributed to local areas, we argue that a distributed system is much more appropriate and resilient to a bioterrorism event. A distributed data system can provide information to local responders for immediate action, reduce demand on a central system, and lessen data unavailability over wide-area networks, all while providing raw data immediately to centralized reviewers.
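A toy sketch of the distributed pattern argued for here, with raw records staying at each local node while aggregates flow upward on demand; all class and method names are illustrative, not taken from the actual system.

```python
# Raw data stays local and is immediately usable there; only aggregated
# counts are served to regional reviewers on request.
from collections import Counter

class LocalNode:
    def __init__(self, name):
        self.name = name
        self.records = []               # raw data stays local

    def ingest(self, syndrome):
        self.records.append(syndrome)   # immediately visible locally

    def aggregate(self):
        return Counter(self.records)    # only counts leave the node

def regional_view(nodes):
    """Regional authorities see aggregated counts, not raw records."""
    total = Counter()
    for node in nodes:
        total.update(node.aggregate())
    return total

a, b = LocalNode("clinic_A"), LocalNode("clinic_B")
for s in ["fever", "rash", "fever"]:
    a.ingest(s)
b.ingest("fever")
print(regional_view([a, b]))   # Counter({'fever': 3, 'rash': 1})
```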
Reliability Engineering & System Safety | 1998
Harry F. Martz; Richard R. Picard
A procedure is presented for quantifying the consistency between probabilistic risk assessment (PRA) results and corresponding plant-specific operating data not considered in the PRA. The method, which is easily implemented in practice, is based on the use of Bayes p-values for the predictive probability that the observed data would have been produced from the PRA results in conjunction with an assumed binomial or Poisson sampling distribution. Uncertainties in both the PRA results and the operating data are considered. The method is used to quantify the consistency between PRA results and operating data for high-pressure coolant injection system unreliabilities at 11 US commercial boiling water reactors.
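A minimal sketch of the Bayes predictive p-value idea for the Poisson case: propagate the PRA's uncertainty about the failure rate through the sampling model and ask how extreme the plant's operating record is. The gamma form for the PRA uncertainty and all numbers are assumptions for illustration.

```python
# Predictive p-value: the probability, under the PRA's uncertainty
# distribution for the failure rate, of observing data at least as
# extreme as the plant's operating record.
import numpy as np

rng = np.random.default_rng(1)

t_observed = 300.0    # plant exposure time (assumed, in demand-years)
x_observed = 6        # observed failures (assumed)

# PRA result for the rate, expressed as a gamma distribution (assumed).
lam = rng.gamma(shape=2.0, scale=0.005, size=100_000)

# Predictive distribution of the count: integrate the Poisson sampling
# model over the rate uncertainty, here by Monte Carlo.
x_pred = rng.poisson(lam * t_observed)

# Two-sided predictive p-value; small values flag inconsistency
# between the PRA and the operating data.
p_upper = np.mean(x_pred >= x_observed)
p_lower = np.mean(x_pred <= x_observed)
print("predictive p-value:", 2 * min(p_upper, p_lower))
```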
Reliability Engineering & System Safety | 1995
Harry F. Martz; Richard R. Picard
In quantifying a plant-specific Poisson event occurrence rate λ in probabilistic risk assessment, it is sometimes the case that either the corresponding Poisson exposure time t or the observed number of events x (or both) are uncertain. We present several methods that account for uncertainties in both x and t when using Bayesian methods to estimate λ. A gamma prior distribution on λ is considered. While the methods formally require numerical integration, a computationally convenient approximation is provided to implement them in practice. A numerical example concerning the rate of failure to operate of the high-pressure coolant injection system of commercial boiling water reactors is used to illustrate the methods.
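As a sketch of one of the cases described (gamma prior on λ, count x known, exposure time t uncertain), the conditional posterior can be averaged over t by numerical integration on a grid; the uniform distribution assumed for t below is purely illustrative.

```python
# Gamma prior on λ, exact count x, uncertain exposure time t:
# mix the conditional gamma posterior over a grid of t values.
import numpy as np
from scipy import stats

a0, b0 = 1.0, 50.0             # gamma prior on λ: shape, rate (assumed)
x = 4                          # observed event count (known exactly)

# Uncertain exposure time: uniform on [80, 120] hours (assumed).
t_grid = np.linspace(80.0, 120.0, 401)
prior_t = np.ones_like(t_grid)              # uniform, up to a constant

# Weight each t by the marginal likelihood of x given t (negative
# binomial form from integrating Poisson(λt) against the gamma prior).
marg = stats.nbinom.pmf(x, a0, b0 / (b0 + t_grid))
w = prior_t * marg
w /= w.sum()                                # grid-based normalization

# Conditional posterior given t is Gamma(a0 + x, rate b0 + t); mix over t.
post_mean = np.sum(w * (a0 + x) / (b0 + t_grid))
print("posterior mean of λ:", post_mean)
```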
Journal of the American Statistical Association | 1992
Richard R. Picard; Maurice C. Bryson
To improve verification of the Threshold Test Ban Treaty, the United States and Russia have embarked on an effort to make on-site yield measurements of each other's nuclear tests. Beyond their direct use in verification, these measurements may also prove useful in calibrating a monitoring system based on seismic magnitudes. The relative merits of seismic monitoring vis-à-vis on-site measurement have been at the core of a long-standing controversy. Many seismic verification problems hinge on statistical issues, including linear calibration based on a small data set and the formal use of expert opinion.
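The linear-calibration problem mentioned at the end can be sketched as classical inverse prediction: fit seismic magnitude against log yield from a handful of on-site measurements, then invert the fit to estimate yield from a new magnitude. All numbers below are fabricated for illustration.

```python
# Classical linear calibration (inverse prediction) on a small data set.
import numpy as np

# Small calibration set: on-site log10 yields and seismic magnitudes
# (fabricated values for illustration).
log_yield = np.array([1.70, 1.85, 1.95, 2.05, 2.10])
magnitude = np.array([5.55, 5.68, 5.74, 5.83, 5.88])

# Fit magnitude = alpha + beta * log_yield by least squares.
beta, alpha = np.polyfit(log_yield, magnitude, 1)

# Inverse prediction: estimate log yield for a newly observed magnitude.
m_new = 5.80
log_yield_hat = (m_new - alpha) / beta
print("estimated yield (kt):", 10 ** log_yield_hat)
```

With so few calibration points, the uncertainty in the inverted estimate is substantial, which is precisely why the small-data-set calibration issue is statistically delicate.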
Technometrics | 2005
Richard R. Picard
Transient Markov chains are sometimes simulated to estimate rare event probabilities. For illustration, chains defined by an airborne particle dispersion model are used to estimate the probabilities that released particles reach various locations. Such estimated probabilities are needed for many purposes, including exposure calculations for affected populations and optimization of detector placement. By using experimental designs for simulation runs and embedding fitted regression models of output data in importance sampling transition kernels, convergence is improved by factors of tens to hundreds.
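A minimal sketch of importance sampling for a transient Markov chain, showing the likelihood-ratio bookkeeping on a ±1 random walk; the paper's refinement of fitting regression models to build the biased kernel is not reproduced here, and the fixed tilt used below is an assumption.

```python
# Importance sampling for a transient chain: a ±1 random walk started
# at 0, estimating the rare probability of hitting +15 before -5.
import numpy as np

rng = np.random.default_rng(2)
p_true, p_bias = 0.5, 0.7      # true and biased step-up probabilities
upper, lower = 15, -5

def one_run():
    x, logw = 0, 0.0
    while lower < x < upper:
        up = rng.random() < p_bias
        # Accumulate the log likelihood ratio (true / biased) per step.
        if up:
            x += 1
            logw += np.log(p_true / p_bias)
        else:
            x -= 1
            logw += np.log((1 - p_true) / (1 - p_bias))
    return (x >= upper), logw

n = 20_000
hits, logws = zip(*(one_run() for _ in range(n)))
weights = np.exp(np.array(logws))
est = np.mean(np.array(hits) * weights)
print("IS estimate of P(hit +15 first):", est)   # exact value is 0.25
```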
Technometrics | 2013
Richard R. Picard; Tom Burr; Michael S. Hamada
Importance sampling aids in establishing alarm thresholds for instrumentation that is used worldwide to deter/detect nuclear threats. We review the statistical aspects of threshold determination, discuss the intuition behind the methodology, and show when simple techniques work well and when they do not. Computational efficiencies relative to ordinary simulation are improved by factors of tens to hundreds in many cases, and the approach is easily implemented by nonexperts. Supplementary materials (R codes) are available online.
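A sketch of the threshold-setting idea: estimate far-tail false-alarm probabilities of a background statistic by sampling from a shifted distribution and reweighting, then scan candidate thresholds against a target rate. The Gaussian background model and the amount of shift are assumptions.

```python
# Importance sampling for alarm thresholds: reweight draws from a
# shifted distribution to estimate tiny false-alarm probabilities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
target = 1e-6                  # desired false-alarm probability (assumed)
shift = 4.5                    # sample near the region of interest

z = rng.normal(loc=shift, size=200_000)
# Likelihood ratio of N(0,1) to N(shift,1) at each draw.
w = np.exp(stats.norm.logpdf(z) - stats.norm.logpdf(z, loc=shift))

def tail_prob(c):
    return np.mean(w * (z > c))

# Scan candidate thresholds; ordinary simulation with 2e5 draws could
# not resolve probabilities this small at all.
for c in np.arange(4.0, 5.5, 0.25):
    print(f"threshold {c:4.2f}: est. false-alarm rate {tail_prob(c):.2e}")
```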
Archive | 2018
Richard R. Picard; Anthony J. Zukaitis; R.A. Forster
We address several basic issues, primarily whether two sets of MCNP tallies are stochastically equivalent (i.e., whether the sets of particle history scores follow the same statistical distribution) and, if they differ, how to quantify the significance level of the difference. The underlying topics here are of longstanding interest to the statistical community, and no "new" theoretical work needs to be carried out to address them. Related comparisons are useful to code developers in assessing code modifications and to code users in assessing other simulations and related physical measurements. However, these statistical methods must be applied with care to avoid obtaining misleading results, as we show in several examples. The text is written for MCNP practitioners.
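One standard test of whether two sets of history scores follow the same distribution is the two-sample Kolmogorov-Smirnov test, sketched below on synthetic tally-like data; it is an illustrative choice, and the report's cautions (e.g., many zero scores, heavy tails) apply to any such test.

```python
# Two-sample KS test on synthetic history-score data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Stand-ins for history scores from two code versions: many zeros plus
# a heavy-tailed positive component, loosely mimicking tally data.
def fake_scores(n, sigma):
    hit = rng.random(n) < 0.3
    return hit * rng.lognormal(mean=0.0, sigma=sigma, size=n)

a = fake_scores(50_000, 1.0)
b = fake_scores(50_000, 1.05)   # slightly different tail

stat, p = stats.ks_2samp(a, b)
print(f"KS statistic {stat:.4f}, p-value {p:.3g}")
# A small p-value quantifies the significance of the difference. Ties at
# zero make the KS p-value approximate here, one of the cautions above.
```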
Journal of the American Statistical Association | 2005
Richard R. Picard
Statisticians often encounter rare event problems, ranging from simple ones (e.g., estimation of tail area probabilities for sample means) to those more complex (e.g., estimation of failure rates for highly reliable complex systems). Typically, such problems defy analytical solution, and computer simulation must be used. The brute force approach to rare event simulation, which generates millions of ordinary events to produce an adequate number of "rare" ones, is obviously inefficient, and other methods must be implemented. Bucklew addresses such methods from the perspective of large deviation theory, with emphasis on theoretical foundations.

As for the book itself, let me start with the good news. I really enjoyed reading it. It has something for everyone, including a concise treatment of various introductory subjects such as random-number generation, stochastic models, and large deviation theory. Many of the basic importance sampling tricks, such as mean translation, variance scaling, and exponential twists, are nicely illustrated (a toy illustration of mean translation follows this review). As part of a balanced presentation, shortcomings of blindly applying a large deviations approach (e.g., Glasserman and Wang 1997; Sadowski and Bucklew 1990) are noted to aid in providing intuition. Pseudocode for programming random-number generation algorithms is even presented.

After some obligatory preliminaries, the presentation begins not with a discussion of rare event simulation, but rather with an overview of large deviation theory. Cramér's theorem for sums S_n of n independent and identically distributed random variables having mean m, P(|S_n/n − m| ≥ r) = f(n) exp(−n I(r)), is reviewed, and the nature of the exponential rate constant I(r) is explored. Other results of more recent origin follow and lead smoothly into the book's notion of an efficient importance distribution in Chapter 5 and to later examples.

Now for the not-so-good news. The Preface laments the current state of rare event simulation: "Unfortunately, this area has a reputation among simulation practitioners of requiring a great deal of technical and probabilistic expertise." I fear that this book will only perpetuate that reputation. Many of the practitioners I know are scientists/engineers who, despite being quite knowledgeable in their fields, have at most a master's level understanding of statistics (and usually less). They would certainly find the book's measure-theoretic flavor imposing. For example, on page 53, "a good rate function is a lower semi-continuous mapping I: E → [0, ∞) such that for all α ∈ [0, ∞) the level set {x ∈ E : I(x) ≤ α} is compact." And although the book has something for everyone, I wonder if it has enough for anyone. Toward that end, I note the following:
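As a coda, the mean-translation trick the review mentions can be shown in miniature: estimate P(S_n/n ≥ r) for iid N(0,1) terms by sampling each term from N(r,1) and reweighting, which coincides with the exponential twist in the Gaussian case. All parameters are illustrative.

```python
# Mean-translation importance sampling for a sample-mean tail probability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, r, reps = 25, 0.8, 100_000

x = rng.normal(loc=r, size=(reps, n))        # translated sampling
# Per-replicate log likelihood ratio of N(0,1)^n to N(r,1)^n.
logw = stats.norm.logpdf(x).sum(1) - stats.norm.logpdf(x, loc=r).sum(1)
est = np.mean(np.exp(logw) * (x.mean(1) >= r))

exact = stats.norm.sf(r * np.sqrt(n))        # since S_n/n ~ N(0, 1/n)
print(f"IS estimate {est:.3e}, exact {exact:.3e}")
# The Cramér rate here is I(r) = r^2/2, matching the exp(-n r^2/2) decay.
```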