Michael S. Hamada
Los Alamos National Laboratory
Publications
Featured research published by Michael S. Hamada.
Journal of Quality Technology | 2001
Chih-Hua Chiao; Michael S. Hamada
Statistically designed experiments have been employed extensively to improve product or process quality and to make products and processes robust. In this paper, we consider experiments with correlated multiple responses whose means, variances, and correlations depend on experimental factors. Analysis of these experiments consists of modeling the distributional parameters in terms of the experimental factors and finding factor settings which maximize the probability of being in a specification region, i.e., the probability that all responses simultaneously meet their respective specifications. The proposed procedure is illustrated with three experiments from the literature.
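As a rough illustration of the analysis this abstract describes (a sketch, not the authors' code), the Python fragment below assumes a bivariate normal pair of responses whose means, standard deviations, and correlation are hypothetical linear functions of two coded factors; it estimates the probability of landing in the specification region by Monte Carlo and searches a grid of factor settings for the best one.

import numpy as np

rng = np.random.default_rng(0)

def response_params(x1, x2):
    # Hypothetical fitted models: means, log-standard-deviations, and correlation
    # expressed as functions of two coded factors (illustrative values only).
    mu = np.array([10.0 + 1.5 * x1 - 0.8 * x2,
                   5.0 - 0.5 * x1 + 1.2 * x2])
    sd = np.exp(np.array([0.2 - 0.3 * x1, 0.1 + 0.2 * x2]))
    rho = np.tanh(0.4 - 0.5 * x1 * x2)          # keeps the correlation in (-1, 1)
    cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    return mu, cov

def prob_in_spec(x1, x2, lower, upper, n=20000):
    # Monte Carlo estimate of P(all responses inside their specification limits).
    mu, cov = response_params(x1, x2)
    y = rng.multivariate_normal(mu, cov, size=n)
    return np.all((y >= lower) & (y <= upper), axis=1).mean()

lower, upper = np.array([8.0, 4.0]), np.array([13.0, 8.0])
grid = np.linspace(-1, 1, 11)
best = max(((x1, x2, prob_in_spec(x1, x2, lower, upper))
            for x1 in grid for x2 in grid), key=lambda t: t[2])
print("best factor settings:", best[:2], "estimated P(in spec):", round(best[2], 3))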
Journal of Quality Technology | 1995
Sheng-Tsaing Tseng; Michael S. Hamada; Chih-Hua Chiao
While statistically designed experiments have been employed extensively to improve product or process quality, they have been used infrequently for improving reliability. In this paper, we present a case study which used an experiment to improve the reliability…
The American Statistician | 2001
Michael S. Hamada; Harry F. Martz; C. S Reese; Alyson G. Wilson
This article shows how a genetic algorithm can be used to find near-optimal Bayesian experimental designs for regression models. The design criterion considered is the expected Shannon information gain of the posterior distribution obtained from performing a given experiment compared with the prior distribution. Genetic algorithms are described and then applied to experimental design. The methodology is then illustrated with a wide range of examples: linear and nonlinear regression, single and multiple factors, and normal and Bernoulli distributed experimental data.
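The sketch below is a minimal, hypothetical Python illustration of the idea, not the authors' implementation: for a normal linear regression model with a N(0, tau^2 I) prior and noise variance sigma^2, the expected Shannon information gain has the closed form 0.5*log det(I + (tau^2/sigma^2) X'X), and a simple genetic algorithm (truncation selection, uniform crossover, single-point mutation) searches candidate design points for a near-optimal six-run design. The nonlinear and Bernoulli cases treated in the article would require Monte Carlo estimates of the gain instead of this closed form.

import numpy as np

rng = np.random.default_rng(1)

# Candidate design points on [-1, 1] and a quadratic regression model f(x) = (1, x, x^2);
# sigma2 and tau2 are illustrative noise and prior variances.
candidates = np.linspace(-1, 1, 21)
sigma2, tau2 = 1.0, 4.0
n_runs, pop_size, n_gen = 6, 40, 60

def model_matrix(idx):
    x = candidates[idx]
    return np.column_stack([np.ones_like(x), x, x**2])

def score(idx):
    # Expected Shannon information gain for the normal linear model:
    # 0.5 * log det(I + (tau2/sigma2) X'X).
    X = model_matrix(idx)
    M = np.eye(X.shape[1]) + (tau2 / sigma2) * X.T @ X
    return 0.5 * np.linalg.slogdet(M)[1]

def mutate(idx):
    idx = idx.copy()
    idx[rng.integers(len(idx))] = rng.integers(len(candidates))
    return idx

def crossover(a, b):
    return np.where(rng.random(len(a)) < 0.5, a, b)

pop = [rng.integers(len(candidates), size=n_runs) for _ in range(pop_size)]
for _ in range(n_gen):
    scores = np.array([score(d) for d in pop])
    order = np.argsort(scores)[::-1]
    parents = [pop[i] for i in order[:pop_size // 2]]        # truncation selection
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(pop_size - len(parents))]
    pop = parents + children

best = max(pop, key=score)
print("near-optimal design points:", np.sort(candidates[best]))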
Reliability Engineering & System Safety | 2004
Michael S. Hamada; Harry F. Martz; C.S. Reese; Todd L. Graves; V. Johnson; Alyson G. Wilson
This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.
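The toy example below (Python with NumPy/SciPy, not the paper's model) sketches the flavor of the approach for a two-event OR gate: binomial data observed at one basic event and at the system level are combined on a grid with Beta priors, so the system-level data also inform the basic event for which no direct data exist. All counts and priors are hypothetical, and the sketch ignores the non-overlapping-in-time bookkeeping and the resource-allocation optimization treated in the paper.

import numpy as np
from scipy import stats

# Two basic events feeding an OR gate: the system fails if either event occurs,
# so p_sys = 1 - (1 - p1)(1 - p2).  Counts and priors below are hypothetical.
x1, n1 = 2, 50        # basic-event 1 data (failures, demands)
xs, ns = 4, 30        # system-level data
a, b = 1.0, 9.0       # Beta(1, 9) prior on each basic-event probability

grid = np.linspace(1e-4, 0.5, 300)          # grid truncated at 0.5 for resolution
p1, p2 = np.meshgrid(grid, grid, indexing="ij")
p_sys = 1.0 - (1.0 - p1) * (1.0 - p2)

# Joint posterior on the grid: priors x basic-event likelihood x system-level likelihood.
log_post = (stats.beta.logpdf(p1, a, b) + stats.beta.logpdf(p2, a, b)
            + stats.binom.logpmf(x1, n1, p1)
            + stats.binom.logpmf(xs, ns, p_sys))
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior means; note that the system-level data inform p2 even though
# no basic-event data were collected for event 2.
print("E[p1 | data] =", (p1 * post).sum())
print("E[p2 | data] =", (p2 * post).sum())
print("E[p_sys | data] =", (p_sys * post).sum())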
Technometrics | 2004
C. Shane Reese; Alyson G. Wilson; Michael S. Hamada; Harry F. Martz; Kenneth J. Ryan
Scientific investigations frequently involve data from computer experiment(s) as well as related physical experimental data on the same factors and related response variable(s). There may also be one or more expert opinions regarding the response of interest. Traditional statistical approaches consider each of these datasets separately with corresponding separate analyses and fitted statistical models. A compelling argument can be made that better, more precise statistical models can be obtained if the combined data are analyzed simultaneously using a hierarchical Bayesian integrated modeling approach. However, such an integrated approach must recognize important differences, such as possible biases, in these experiments and expert opinions. We illustrate our proposed integrated methodology by using it to model the thermodynamic operation point of a top-spray fluidized bed microencapsulation processing unit. Such units are used in the food industry to tune the effect of functional ingredients and additives. An important thermodynamic response variable of interest, Y, is the steady-state outlet air temperature. In addition to a set of physical experimental observations involving six factors used to predict Y, similar results from three different computer models are also available. The integrated data from the physical experiment and the three computer models are used to fit an appropriate response surface (regression) model for predicting Y.
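A deliberately simplified sketch of the data-combination idea follows (hypothetical single factor, one computer model, and ordinary least squares standing in for the full hierarchical Bayesian fit): the physical data and the simulator output share a common trend, and a source-specific bias term keeps the simulator's systematic offset from contaminating predictions of Y.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: a single factor x, physical observations of the outlet
# temperature Y, and runs of one computer model that share the same trend
# but carry a systematic bias.  (The paper uses six factors and three codes.)
x_phys = rng.uniform(-1, 1, 12)
y_phys = 60 + 8 * x_phys + rng.normal(0, 1.0, 12)
x_sim = rng.uniform(-1, 1, 40)
y_sim = 60 + 8 * x_sim + 2.5 + rng.normal(0, 0.5, 40)   # 2.5 = simulator bias

# Joint linear model: common intercept and slope, plus a bias column that is
# switched on only for simulator rows.  A full treatment would place priors on
# every parameter and weight the sources by their error variances.
x_all = np.concatenate([x_phys, x_sim])
is_sim = np.concatenate([np.zeros(len(x_phys)), np.ones(len(x_sim))])
X = np.column_stack([np.ones_like(x_all), x_all, is_sim])
y = np.concatenate([y_phys, y_sim])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope, estimated simulator bias:", np.round(beta, 2))
print("prediction of Y at x = 0.5 (bias removed):", round(beta[0] + beta[1] * 0.5, 2))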
Statistical Science | 2006
Alyson G. Wilson; Todd L. Graves; Michael S. Hamada; C. Shane Reese
The systems that statisticians are asked to assess, such as nuclear weapons, infrastructure networks, supercomputer codes and munitions, have become increasingly complex. It is often costly to conduct full system tests. As such, we present a review of methodology that has been proposed for addressing system reliability with limited full system testing. The first approaches presented in this paper are concerned with the combination of multiple sources of information to assess the reliability of a single component. The second general set of methodology addresses the combination of multiple levels of data to determine system reliability. We then present developments for complex systems beyond traditional series/parallel representations through the use of Bayesian networks and flowgraph models. We also include methodological contributions to resource allocation considerations for system reliability assessment. We illustrate each method with applications primarily encountered at Los Alamos National Laboratory.
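The fragment below illustrates only the simplest ingredient of this methodology, and is a hypothetical sketch rather than anything from the review: component-level pass/fail test data give Beta posteriors, whose draws are propagated through a small series/parallel structure by Monte Carlo to obtain a posterior for system reliability. The Bayesian-network, flowgraph, and multi-level-data extensions discussed in the paper are not shown.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical component test data (successes, trials) with Beta(1, 1) priors.
tests = {"c1": (48, 50), "c2": (18, 20), "c3": (29, 30)}
draws = {name: rng.beta(1 + s, 1 + n - s, size=20000)
         for name, (s, n) in tests.items()}

# Simple structure: c1 in series with a parallel pair (c2, c3).
r_parallel = 1 - (1 - draws["c2"]) * (1 - draws["c3"])
r_system = draws["c1"] * r_parallel

print("posterior mean system reliability:", round(r_system.mean(), 3))
print("90% credible interval:", np.round(np.percentile(r_system, [5, 95]), 3))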
Journal of Quality Technology | 2000
Michael S. Hamada; Sam Weerahandi
Good measurement systems are an important requirement for a successful quality improvement or statistical process control program. A measurement system is assessed by performing a designed experiment known as a gauge repeatability and reproducibility (R & R) study. Confidence intervals for the parameters which describe measurement system quality are an important part of analyzing the data from a gauge R & R study. In this paper, we show how confidence intervals can easily be obtained using the recently developed generalized inference methodology, which can be calculated by exact numerical integration or can be approximated to any desired accuracy using simulation. The methodology is demonstrated on data from two gauge R & R studies based on two-way layouts. The approach is simple and general enough to extend the results to higher-way layouts.
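The sketch below illustrates the generalized-inference simulation machinery for the simplest possible case, a balanced one-way gauge study (the paper treats two-way layouts with operators); the data are hypothetical, and the generalized pivotal quantities follow the standard construction of substituting independent chi-square draws for the pivotal quantities.

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical balanced one-way gauge study: I parts, J repeat measurements each.
I, J = 10, 3
y = 5 + rng.normal(0, 0.8, size=(I, 1)) + rng.normal(0, 0.3, size=(I, J))  # part effect + gauge error

part_means = y.mean(axis=1)
ss_e = ((y - part_means[:, None]) ** 2).sum()          # repeatability sum of squares
ss_p = J * ((part_means - y.mean()) ** 2).sum()        # part-to-part sum of squares

# Generalized pivotal quantities: substitute independent chi-square draws for the
# pivotal chi-square quantities and read off the induced distributions.
n_sim = 50000
u_p = rng.chisquare(I - 1, n_sim)
u_e = rng.chisquare(I * (J - 1), n_sim)
g_sigma2_e = ss_e / u_e
g_sigma2_p = np.maximum(0.0, (ss_p / u_p - ss_e / u_e) / J)
g_gamma = g_sigma2_e / (g_sigma2_e + g_sigma2_p)       # gauge share of total variance

print("95% generalized CI for repeatability variance:",
      np.round(np.percentile(g_sigma2_e, [2.5, 97.5]), 3))
print("95% generalized CI for gauge share of variance:",
      np.round(np.percentile(g_gamma, [2.5, 97.5]), 3))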
Journal of Quality Technology | 2000
Kenny Ye; Michael S. Hamada
The Lenth method is an objective method for testing effects from unreplicated factorial designs and eliminates the subjectivity in using a half-normal plot. The Lenth statistics are computed for the factorial effects and compared to corresponding critical values. Since the distribution of the Lenth statistics is not mathematically tractable, we propose a simple simulation method to estimate the critical values. Confidence intervals for the estimated critical values can also easily be obtained. Tables of critical values are provided for a large number of designs, and their use is demonstrated with data from three experiments. The proposed method can also be adapted to estimate critical values for other methods.
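A minimal simulation of the kind described, in Python (illustrative only; the paper's tables are computed more carefully): null effects are drawn, Lenth t statistics are formed from the pseudo standard error, and the individual (IER) and experimentwise (EER) critical values are estimated from the simulated quantiles.

import numpy as np

rng = np.random.default_rng(5)

def lenth_pse(effects):
    # Lenth's pseudo standard error of the factorial effects.
    s0 = 1.5 * np.median(np.abs(effects))
    return 1.5 * np.median(np.abs(effects)[np.abs(effects) < 2.5 * s0])

def simulated_critical_values(m, alpha=0.05, n_sim=20000):
    # Simulate m null effects, compute Lenth t statistics, and estimate the
    # individual (IER) and experimentwise (EER) critical values by quantiles.
    t_all, t_max = [], []
    for _ in range(n_sim):
        effects = rng.normal(0.0, 1.0, m)
        t = np.abs(effects) / lenth_pse(effects)
        t_all.extend(t)
        t_max.append(t.max())
    ier = np.percentile(t_all, 100 * (1 - alpha))
    eer = np.percentile(t_max, 100 * (1 - alpha))
    return ier, eer

ier, eer = simulated_critical_values(m=15)   # e.g. an unreplicated 2^4 design
print("IER and EER critical values (alpha = 0.05):", round(ier, 2), round(eer, 2))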
Quality and Reliability Engineering International | 1996
Chih-Hua Chiao; Michael S. Hamada
Taguchi's robust design provides an important paradigm for producing robust products. There are many successful applications of this paradigm, but few have dealt with reliability, i.e. when the quality characteristic is lifetime. In this paper, an actual experiment is presented which was performed to achieve robust reliability of light emitting diodes. Three major factors chosen from many potentially important manufacturing factors and one noise factor were investigated. For light emitting diodes, failure occurs when their luminosity or light intensity falls below a specified level. An interesting feature of this experiment is the periodic monitoring of the luminosity. The paper shows how the luminosity's degradation over time provides a practical way to achieve robust reliability of light emitting diodes which are already highly reliable.
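The snippet below sketches one common way such periodic degradation readings are used (a simplified stand-in with hypothetical data and a linear-in-log-luminosity assumption, not the paper's analysis): fit each unit's degradation path, extrapolate to the time its luminosity crosses 50% of the initial value, and treat those crossing times as pseudo failure times to be compared across factor settings.

import numpy as np

rng = np.random.default_rng(6)

# Hypothetical periodic luminosity readings for a few LEDs at one factor setting.
# Assume log-luminosity decays roughly linearly in time; failure is defined as
# luminosity dropping below 50% of its initial value.
times = np.array([0., 50., 100., 200., 500., 1000.])        # hours
n_units = 5
true_rates = rng.uniform(2e-4, 6e-4, n_units)               # per-unit decay rates
lum = np.exp(-np.outer(true_rates, times)) * np.exp(rng.normal(0, 0.01, (n_units, len(times))))

threshold = np.log(0.5)                                      # log of 50% relative luminosity
pseudo_failure_times = []
for y in np.log(lum):
    slope, intercept = np.polyfit(times, y, 1)               # per-unit linear degradation fit
    pseudo_failure_times.append((threshold - intercept) / slope)

# The pseudo failure times from each factor setting would then be compared to
# choose settings that make reliability robust to the noise factor.
print("pseudo failure times (hours):", np.round(pseudo_failure_times, 0))
print("mean log pseudo failure time:", round(float(np.mean(np.log(pseudo_failure_times))), 2))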
Algorithms | 2009
Tom Burr; Michael S. Hamada
The performance of Radio-Isotope Identification (RIID) algorithms using NaI-based γ spectroscopy is increasingly important. For example, sensors at locations that screen for illicit nuclear material rely on isotope identification using NaI detectors to distinguish innocent nuisance alarms, arising from naturally occurring radioactive material, from alarms arising from threat isotopes. Recent data collections for RIID testing consist of repeat measurements for each of several measurement scenarios to test RIID algorithms. It is anticipated that vendors can modify their algorithms on the basis of performance on chosen measurement scenarios and then test modified algorithms on data for other measurement scenarios. It is therefore timely to review the current status of RIID algorithms on NaI detectors. This review describes γ spectra from NaI detectors, measurement issues and challenges, current RIID algorithms, data preprocessing steps, the role and current quality of synthetic spectra, and opportunities for improvements.