Harry F. Martz
Los Alamos National Laboratory
Publications
Featured research published by Harry F. Martz.
The American Statistician | 2001
Michael S. Hamada; Harry F. Martz; C. S. Reese; Alyson G. Wilson
This article shows how a genetic algorithm can be used to find near-optimal Bayesian experimental designs for regression models. The design criterion considered is the expected Shannon information gain of the posterior distribution obtained from performing a given experiment compared with the prior distribution. Genetic algorithms are described and then applied to experimental design. The methodology is then illustrated with a wide range of examples: linear and nonlinear regression, single and multiple factors, and normal and Bernoulli distributed experimental data.
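As a rough illustration of the genetic-algorithm machinery (selection, crossover, mutation) applied to experimental design, the sketch below searches for a near-D-optimal design for a straight-line regression on [0, 1]. The D-criterion here is a simple stand-in for the expected Shannon information gain used in the article, and all population sizes and rates are illustrative:

```python
import random
import math

random.seed(0)

def d_criterion(design):
    """log det(X'X) for a straight-line model y = b0 + b1*x
    (a simple proxy for expected information gain)."""
    n = len(design)
    sx = sum(design)
    sxx = sum(x * x for x in design)
    det = n * sxx - sx * sx          # determinant of the 2x2 moment matrix
    return math.log(det) if det > 0 else float("-inf")

def ga_design(n_points=4, pop=30, gens=60, pm=0.2):
    """Tiny genetic algorithm: a design is a list of x-values in [0, 1]."""
    popn = [[random.random() for _ in range(n_points)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=d_criterion, reverse=True)
        elite = popn[: pop // 2]                          # keep the fitter half
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_points)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:                      # mutate one design point
                i = random.randrange(n_points)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        popn = elite + children
    return max(popn, key=d_criterion)

best = ga_design()
# For a straight-line model, a D-optimal design pushes points toward the
# endpoints of [0, 1]; the GA should find designs close to that.
```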
Technometrics | 1988
Harry F. Martz; R. A. Waller; E. T. Fickas
A Bayesian procedure is presented for estimating the reliability of a series system of independent binomial subsystems and components. The method considers either test or prior data (perhaps both or neither) at the system, subsystem, and component level. Beta prior distributions are assumed throughout. Inconsistent prior judgments are averaged within the simple-to-use procedure. The method is motivated by the following practical problem. It is required to estimate the overall reliability of a certain air-to-air heat-seeking missile system containing five major subsystems with up to nine components per subsystem. The posterior distribution of the overall missile-system reliability from which the required estimates are obtained is computed.
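The heart of the procedure, conjugate Beta updating at the component level followed by propagation to the series-system level, can be sketched with a simple Monte Carlo approximation (this is not the paper's closed-form averaging procedure, and the test data and Beta(1, 1) priors below are hypothetical):

```python
import random

random.seed(1)

# Hypothetical component test data: (successes, failures); Beta(1, 1) priors.
components = [(48, 2), (50, 0), (45, 5)]

def draw_system_reliability():
    """One draw of series-system reliability:
    the product of independent Beta posterior draws."""
    r = 1.0
    for s, f in components:
        r *= random.betavariate(1 + s, 1 + f)   # conjugate Beta(1+s, 1+f) posterior
    return r

draws = sorted(draw_system_reliability() for _ in range(20000))
post_mean = sum(draws) / len(draws)
lower_5 = draws[int(0.05 * len(draws))]         # 5th-percentile credible bound
```

Because the subsystems are independent, the posterior mean of the system reliability is the product of the component posterior means, which the Monte Carlo estimate should reproduce closely.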
Reliability Engineering & System Safety | 2004
Michael S. Hamada; Harry F. Martz; C.S. Reese; Todd L. Graves; V. Johnson; Alyson G. Wilson
This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.
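The quantification step itself, computing a top-event probability from basic-event probabilities, can be sketched by Monte Carlo; the tiny tree and probabilities below are hypothetical, and the paper's machinery for combining multi-level data and allocating resources is not shown:

```python
import random

random.seed(3)

# Toy fault tree: TOP = OR(AND(A, B), C), with basic-event failure probabilities:
p = {"A": 0.1, "B": 0.2, "C": 0.05}

def top_event(draw):
    """Gate logic for the toy tree."""
    return (draw["A"] and draw["B"]) or draw["C"]

def mc_top_probability(n=100000):
    """Monte Carlo estimate of the top-event probability."""
    hits = 0
    for _ in range(n):
        draw = {e: random.random() < q for e, q in p.items()}
        hits += top_event(draw)
    return hits / n

estimate = mc_top_probability()
# Exact value for independent events: 1 - (1 - 0.1*0.2) * (1 - 0.05) = 0.069
exact = 1 - (1 - 0.1 * 0.2) * (1 - 0.05)
```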
Technometrics | 2004
C. Shane Reese; Alyson G. Wilson; Michael S. Hamada; Harry F. Martz; Kenneth J. Ryan
Scientific investigations frequently involve data from computer experiment(s) as well as related physical experimental data on the same factors and related response variable(s). There may also be one or more expert opinions regarding the response of interest. Traditional statistical approaches consider each of these datasets separately with corresponding separate analyses and fitted statistical models. A compelling argument can be made that better, more precise statistical models can be obtained if the combined data are analyzed simultaneously using a hierarchical Bayesian integrated modeling approach. However, such an integrated approach must recognize important differences, such as possible biases, in these experiments and expert opinions. We illustrate our proposed integrated methodology by using it to model the thermodynamic operation point of a top-spray fluidized bed microencapsulation processing unit. Such units are used in the food industry to tune the effect of functional ingredients and additives. An important thermodynamic response variable of interest, Y, is the steady-state outlet air temperature. In addition to a set of physical experimental observations involving six factors used to predict Y, similar results from three different computer models are also available. The integrated data from the physical experiment and the three computer models are used to fit an appropriate response surface (regression) model for predicting Y.
Technometrics | 1990
Harry F. Martz; R. A. Waller
A Bayesian procedure is presented for estimating the reliability (or availability) of a complex system of independent binomial series or parallel subsystems and components. Repeated identical components or subsystems are also permitted. The method uses either test or prior data (perhaps both or neither) at the system, subsystem, and component levels. Beta prior distributions are assumed throughout. The method is motivated and illustrated by the following problem. It is required to estimate the unavailability on demand of the low-pressure coolant injection system in a certain U.S. commercial nuclear-power boiling-water reactor. Three data sources are used to calculate the posterior distribution of the overall system demand unavailability from which the required estimates are obtained. The sensitivity of the results to the three data sources is examined. A FORTRAN computer program for implementing the procedure is available.
Health Physics | 2002
Guthrie Miller; Harry F. Martz; Tom T. Little; Ray Guilmette
A technique for computing the exact marginalized (integrated) Poisson likelihood function for counting measurement processes involving a background subtraction is described. An empirical Bayesian method for determining the prior probability distribution of background count rates from population data is recommended and would seem to have important practical advantages. The exact marginalized Poisson likelihood function may be used instead of the commonly used Gaussian approximation; differences between the two, which arise in some cases with small numbers of measured counts, are discussed. Optional use of exact likelihood functions in our Bayesian internal dosimetry codes has been implemented using an interpolation-table approach, which means that there is no computation time penalty except for the initial setup of the interpolation tables.
IEEE Transactions on Reliability | 1994
Vladimir P. Savchuk; Harry F. Martz
The authors develop Bayes estimators for the true binomial survival probability when there exist multiple sources of prior information. For each source of prior information, incomplete (partial) prior information is assumed to exist in the form of either a stated prior mean of p or a stated prior credibility interval on p; p is the parameter about which there is a degree of belief regarding its unknown value, i.e., p is treated as though it were the unknown value of a random variable. Both maximum entropy and maximum posterior risk criteria are used to determine a beta prior for each source. A mixture of these beta priors is then taken as the combined prior, after which Bayes theorem is used to obtain the final mixed beta posterior distribution from which the desired estimates are obtained. Two numerical examples illustrate the method.
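The mixture-of-betas updating can be sketched as follows. The two prior components below are hypothetical stand-ins for elicited sources, encoded directly as Beta distributions rather than derived via the paper's maximum entropy or maximum posterior risk criteria:

```python
import math

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def posterior_mixture(priors, weights, s, f):
    """Update a mixture of Beta(a, b) priors with s successes, f failures.
    Returns the posterior (a, b) components and their updated weights."""
    posts, log_ws = [], []
    for (a, b), w in zip(priors, weights):
        posts.append((a + s, b + f))
        # Marginal likelihood of the data under this component
        # (beta-binomial kernel) reweights the mixture:
        log_ws.append(math.log(w) + log_beta(a + s, b + f) - log_beta(a, b))
    m = max(log_ws)
    ws = [math.exp(lw - m) for lw in log_ws]
    total = sum(ws)
    return posts, [w / total for w in ws]

# Two hypothetical prior sources, encoded as equally weighted Beta components:
priors = [(9.0, 1.0), (5.0, 5.0)]       # prior means 0.9 and 0.5
posts, ws = posterior_mixture(priors, [0.5, 0.5], s=18, f=2)
post_mean = sum(w * a / (a + b) for w, (a, b) in zip(ws, posts))
```

With 18 successes in 20 trials the data favor the optimistic source, so its posterior mixture weight grows while both components are conjugately updated.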
Reliability Engineering & System Safety | 2007
Todd L. Graves; Michael S. Hamada; Richard Klamann; A. C. Koehler; Harry F. Martz
This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification with multi-state events. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach.
Reliability Engineering & System Safety | 2008
Todd L. Graves; Michael S. Hamada; Richard Klamann; A. C. Koehler; Harry F. Martz
When a system is tested, besides system data, some lower-level data may become available, such as whether a particular subsystem or component succeeded or failed. Treating such simultaneous multi-level data as independent is a mistake because the outcomes are dependent. In this paper, we show how to handle simultaneous multi-level data correctly in a reliability assessment. We do this by determining what information the simultaneous data provides in terms of the component reliabilities using generalized cut sets. We illustrate this methodology with an example of a low-pressure coolant injection system using a Bayesian approach to make reliability assessments.
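The key point, that a system-level outcome constrains which component states are possible, can be sketched for a two-component series system by enumerating the consistent states (a brute-force stand-in for the paper's generalized cut sets):

```python
from itertools import product

# Series system of two independent components with reliabilities p = (p1, p2).
# A single system test yields both the system outcome and component 1's outcome;
# the correct likelihood enumerates component states consistent with BOTH.

def likelihood(p, system_ok, comp1_ok):
    """P(observed system and component-1 outcomes | component reliabilities p)."""
    total = 0.0
    for states in product([True, False], repeat=len(p)):
        if states[0] != comp1_ok:
            continue                 # inconsistent with the component-1 datum
        if all(states) != system_ok:
            continue                 # a series system works iff every component works
        prob = 1.0
        for pi, ok in zip(p, states):
            prob *= pi if ok else 1.0 - pi
        total += prob
    return total

p = (0.9, 0.8)
# System failed but component 1 succeeded => component 2 must have failed:
L_correct = likelihood(p, system_ok=False, comp1_ok=True)   # 0.9 * 0.2 = 0.18
# Treating the two observations as independent would instead give
# (1 - 0.9*0.8) * 0.9 = 0.252, overstating the evidence.
```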
IEEE Transactions on Reliability | 1985
Harry F. Martz; B.S. Duran
The Maximus, bootstrap, and Bayes methods can be useful in calculating lower s-confidence limits on system reliability using binomial component test data. The bootstrap and Bayes methods use Monte Carlo simulation, while the Maximus method is closed-form. The Bayes method is based on noninformative component prior distributions. The three methods are compared by means of Monte Carlo simulation using 20 simple through moderately complex examples. The simulation was generally restricted to the region of high reliability components. Sample coverages and average interval lengths are both used as performance measures. In addition to insights regarding the adequacy and desirability of each method, the comparison reveals the following regions of superior performance:
1. The Maximus method is generally superior for (a) moderate to large series systems of reliable components with small quantities of test data per component, and (b) small series systems of repeated components.
2. The bootstrap method is generally superior for highly reliable and redundant systems.
3. The Bayes method is generally superior for (a) moderate to large series systems of reliable components with moderate to large numbers of component tests, and (b) small series systems of reliable non-repeated components.
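A minimal sketch of the bootstrap approach for a two-component series system, using a parametric bootstrap percentile lower limit with hypothetical test data (the Maximus and Bayes calculations are not shown):

```python
import random

random.seed(2)

# Hypothetical binomial test data per component: (successes, trials).
data = [(49, 50), (47, 50)]

def bootstrap_lcl(data, alpha=0.05, B=5000):
    """Parametric-bootstrap lower confidence limit on series-system reliability:
    resample each component's tests at its estimated reliability, recompute the
    system estimate, and take the alpha-quantile of the bootstrap distribution."""
    phat = [s / n for s, n in data]
    stats = []
    for _ in range(B):
        r = 1.0
        for p, (s, n) in zip(phat, data):
            s_star = sum(random.random() < p for _ in range(n))  # resampled tests
            r *= s_star / n
        stats.append(r)
    stats.sort()
    return stats[int(alpha * B)]

lcl = bootstrap_lcl(data)
point = (49 / 50) * (47 / 50)        # plug-in system reliability estimate
```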