Publications


Featured research published by John W. Seaman.


The American Statistician | 2012

Hidden Dangers of Specifying Noninformative Priors

John W. Seaman; James D. Stamey

“Noninformative” priors are widely used in Bayesian inference. Diffuse priors are often placed on parameters that are components of some function of interest. That function may, of course, have a prior distribution that is highly informative, in contrast to the joint prior placed on its arguments, resulting in unintended influence on the posterior for the function. This problem is not always recognized by users of “noninformative” priors. We consider several examples of this problem. We also suggest methods for handling such induced priors.
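The induced-prior effect the abstract describes is easy to reproduce by simulation. The sketch below is not an example from the paper; it assumes hypothetical flat priors on two event probabilities and looks at the prior they induce on the relative risk, which turns out to be far from noninformative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# "Noninformative" flat priors on two event probabilities
p1 = rng.uniform(0, 1, n)
p2 = rng.uniform(0, 1, n)

# Induced prior on the relative risk p1/p2
rr = p1 / p2

# The induced prior is anything but flat: half its mass sits below 1,
# yet it carries a very heavy right tail (about 5% of mass above 10).
print(np.median(rr))       # close to 1
print(np.mean(rr < 1))     # close to 0.5 by symmetry
print(np.mean(rr > 10))    # non-negligible tail mass
```

Plotting a histogram of `rr` makes the point even more starkly: the draws pile up near 0 and trail off over several orders of magnitude, so a posterior for the relative risk can be strongly influenced by this unintended prior.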


Biometrical Journal | 2008

Binary regression with misclassified response and covariate subject to measurement error: a Bayesian approach.

Anna McGlothlin; James D. Stamey; John W. Seaman

We consider a Bayesian analysis for modeling a binary response that is subject to misclassification. Additionally, an explanatory variable is assumed to be unobservable, but measurements are available on its surrogate. A binary regression model is developed to incorporate the measurement error in the covariate as well as the misclassification in the response. Unlike existing methods, no model parameters need be assumed known. Markov chain Monte Carlo methods are utilized to perform the necessary computations. The methods developed are illustrated using atomic bomb survival data. A simulation experiment explores advantages of the approach.


Value in Health | 2013

Evaluating the impact of unmeasured confounding with internal validation data: an example cost evaluation in type 2 diabetes.

Douglas Faries; Xiaomei Peng; Manjiri Pawaskar; Karen L. Price; James D. Stamey; John W. Seaman

The quantitative assessment of the potential influence of unmeasured confounders in the analysis of observational data is rare, despite reliance on the no-unmeasured-confounders assumption. In a recent comparison of costs of care between two treatments for type 2 diabetes using a health care claims database, propensity score matching was implemented to adjust for selection bias, though it was noted that information on baseline glycemic control was not available for the propensity model. Using data from a linked laboratory file, data on this potential unmeasured confounder were obtained for a small subset of the original sample. We demonstrate how Bayesian modeling, propensity score calibration, and multiple imputation can utilize this additional information to perform sensitivity analyses that quantitatively assess the potential impact of unmeasured confounding. Bayesian regression models were developed to utilize the internal validation data as informative prior distributions for all parameters, retaining information on the correlation between the confounder and other covariates. While assumptions supporting the use of propensity score calibration were not met in this sample, the use of Bayesian modeling and multiple imputation provided consistent results, suggesting that the lack of data on the unmeasured confounder did not have a strong impact on the original analysis, owing to the weak correlation between the confounder and the cost outcome variable. Bayesian modeling with informative priors and multiple imputation may be useful tools for unmeasured-confounding sensitivity analysis in these situations. Further research is needed, however, to understand the operating characteristics of these methods in a variety of situations.


Pharmaceutical Statistics | 2014

Bayesian modeling of cost-effectiveness studies with unmeasured confounding: a simulation study

James D. Stamey; Daniel P. Beavers; Douglas Faries; Karen L. Price; John W. Seaman

Unmeasured confounding is a common problem in observational studies. Failing to account for unmeasured confounding can result in biased point estimators and poor performance of hypothesis tests and interval estimators. We provide examples of the impacts of unmeasured confounding on cost-effectiveness analyses using observational data along with a Bayesian approach to correct estimation. Assuming validation data are available, we propose a Bayesian approach to correct cost-effectiveness studies for unmeasured confounding. We consider the cases where both cost and effectiveness are assumed to have a normal distribution and when costs are gamma distributed and effectiveness is normally distributed. Simulation studies were conducted to determine the impact of ignoring the unmeasured confounder and to determine the size of the validation data required to obtain valid inferences.
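The kind of bias these simulation studies target can be sketched in a few lines. The setup below is hypothetical (made-up coefficients, a normally distributed cost, and a single confounder), not the paper's design; it simply contrasts the naive treatment-effect estimate with one that adjusts for the confounder:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical setup: u is a baseline severity confounder that raises
# both the chance of receiving treatment and the cost outcome.
u = rng.normal(size=n)
treat = (rng.normal(size=n) + u > 0).astype(float)   # confounded assignment
cost = 100 + 50 * treat + 80 * u + rng.normal(scale=20, size=n)

def ols_coefs(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

X_naive = np.column_stack([np.ones(n), treat])       # ignores u
X_adj = np.column_stack([np.ones(n), treat, u])      # adjusts for u

b_naive = ols_coefs(X_naive, cost)[1]   # biased upward: absorbs u's effect
b_adj = ols_coefs(X_adj, cost)[1]       # close to the true effect of 50
print(b_naive, b_adj)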


Journal of Biopharmaceutical Statistics | 2007

Bayesian Estimation of Intervention Effect with Pre- and Post-Misclassified Binomial Data

James D. Stamey; John W. Seaman; Dean M. Young

We consider studies in which an enrolled subject tests positive on a fallible test. After an intervention, disease status is re-diagnosed with the same fallible instrument. Potential misclassification in the diagnostic test causes regression to the mean that biases inferences about the true intervention effect. The existing likelihood approach suffers in situations where either sensitivity or specificity is near 1. In such cases, common in many diagnostic tests, confidence interval coverage can often be below nominal for the likelihood approach. Another potential drawback of the maximum likelihood estimator (MLE) method is that it requires validation data to eliminate identification problems. We propose a Bayesian approach that offers improved performance in general, but substantially better performance than the MLE method in the realistic case of a highly accurate diagnostic test. We obtain this superior performance using no more information than that employed in the likelihood method. Our approach is also more flexible, doing without validation data if necessary, but accommodating multiple sources of information, if available, thereby systematically eliminating identification problems. We show via a simulation study that our Bayesian approach outperforms the MLE method, especially when the diagnostic test has high sensitivity, specificity, or both. We also consider a real data example for which the diagnostic test specificity is close to 1 (false positive probability close to 0).
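The regression-to-the-mean mechanism described above can be illustrated with a small simulation (hypothetical prevalence, sensitivity, and specificity; not the paper's data): subjects are enrolled on a positive result from a fallible test and retested with the same instrument, and an apparent improvement emerges even though the intervention does nothing.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

prev, se, sp = 0.3, 0.95, 0.90   # hypothetical prevalence and test accuracy

disease = rng.random(n) < prev
# Fallible screening test: true positives with prob se,
# false positives with prob 1 - sp.
test1 = np.where(disease, rng.random(n) < se, rng.random(n) > sp)

# Enroll only subjects who screen positive
enrolled = disease[test1]

# Null intervention: true disease status is unchanged at retest
m = enrolled.size
test2 = np.where(enrolled, rng.random(m) < se, rng.random(m) > sp)

# Fraction retesting negative: apparent "cure rate" under a null effect
apparent_improvement = np.mean(~test2)
print(apparent_improvement)
```

With these inputs the apparent improvement is roughly 20%, driven entirely by false positives at enrollment and false negatives at retest; this is the bias that the likelihood and Bayesian methods in the paper aim to separate from the true intervention effect.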


Pharmacoepidemiology and Drug Safety | 2016

A Bayesian sensitivity analysis to evaluate the impact of unmeasured confounding with external data: a real world comparative effectiveness study in osteoporosis

Xiang Zhang; Douglas Faries; Natalie N. Boytsov; James D. Stamey; John W. Seaman

Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential for unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting sensitivity analyses to investigate the impact of unmeasured confounding in observational studies.


Journal of Psychiatric Research | 2009

Bayesian adaptive non-inferiority with safety assessment: Retrospective case study to highlight potential benefits and limitations of the approach

Melissa E. Spann; Stacy R. Lindborg; John W. Seaman; Robert W. Baker; Eduardo Dunayevich; Alan Breier

Adaptive trial design applied to randomized clinical trials of psychiatric medicines offers the potential to make clinical trials more efficient. In the current analysis, we retrospectively applied Bayesian adaptive allocation methods to a case study in agitated patients with schizophrenia and related diseases. The original study used a randomized, double-blind, parallel design. The objective of this analysis was to demonstrate the potential benefits of Bayesian adaptive designs by shortening the study duration and therefore limiting patient exposure to ineffective placebo or an active comparator with a known side effect. Bayesian methods allowed us to fully leverage historical data along with data observed as the study was ongoing to calculate predictive probabilities of patient response to treatment without experiencing a specified side effect. Using the Bayesian adaptive approach would have required less than half the number of patients as the original study to draw the same conclusion. Sample size was reduced from 311 to 156 patients, thereby decreasing the number of patients exposed to placebo from 54 to 30 and the number exposed to the active control with a known side effect from 126 to 60.
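The predictive-probability machinery behind such adaptive rules can be sketched with a conjugate beta-binomial model. The interim counts, prior, and success criterion below are hypothetical, not the trial's actual model; the sketch computes the probability, given the data so far, that the study will meet its criterion once the remaining patients are observed:

```python
from scipy.stats import betabinom

# Hypothetical interim data: 18 responders in 40 patients,
# with a Beta(1, 1) prior on the response rate.
a0, b0 = 1, 1
n, x = 40, 18
m = 20           # patients still to enroll
k_needed = 31    # total responders required to declare success

# Posterior for the response rate: Beta(a0 + x, b0 + n - x).
# Under the posterior predictive, future responders are beta-binomial.
a, b = a0 + x, b0 + n - x
pred = betabinom(m, a, b)

# Predictive probability of reaching the success criterion:
# P(future responders >= k_needed - x)
pp = pred.sf(k_needed - x - 1)
print(pp)
```

In an adaptive design, a rule of the form "stop for futility if `pp` falls below some floor, stop for success if it exceeds some ceiling" is evaluated at each interim look, which is how such a trial can reach the same conclusion with far fewer patients.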


Journal of Statistical Computation and Simulation | 2004

A note on tests for interaction in quantal response data

Melinda A. Holt; James D. Stamey; John W. Seaman; Dean M. Young

There are few distribution-free methods for detecting interaction in fixed-dose trials involving quantal response data, despite the fact that such trials are common. We present three new tests to address this issue, including a simple bootstrap procedure. We examine the power of the likelihood ratio test and our new bootstrap test statistic using the innovative linear extrapolation power-estimation technique described by Boos, D. D. and Zhang, J. (2000), “Monte Carlo evaluation of resampling-based hypothesis tests,” Journal of the American Statistical Association, 95, 486–492.
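A bootstrap interaction test of the general kind described can be sketched as follows. The 2×2 fixed-dose counts and the difference-in-differences-of-logits statistic below are hypothetical choices for illustration, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2x2 fixed-dose quantal data: responders and subjects
# per cell, rows indexing agent A's dose, columns agent B's dose.
x = np.array([[12, 18], [20, 45]])
n = np.array([[50, 50], [50, 50]])

def interaction_stat(x, n):
    """Difference-in-differences of empirical logits."""
    p = (x + 0.5) / (n + 1.0)       # continuity-corrected proportions
    logit = np.log(p / (1 - p))
    return logit[1, 1] - logit[1, 0] - logit[0, 1] + logit[0, 0]

obs = interaction_stat(x, n)

# Parametric bootstrap: resample each cell from its fitted binomial
B = 5000
boot = np.empty(B)
for b in range(B):
    xb = rng.binomial(n, x / n)
    boot[b] = interaction_stat(xb, n)

# Percentile interval; interaction is flagged if 0 falls outside it
lo, hi = np.quantile(boot, [0.025, 0.975])
print(obs, (lo, hi))
```

For these made-up counts the interval excludes zero, so the bootstrap flags an interaction; the paper's power comparisons against the likelihood ratio test are about how reliably such a procedure does so across designs.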


Epidemiology, biostatistics, and public health | 2017

A Bayesian approach to correct for unmeasured or semi-unmeasured confounding in survival data using multiple validation data sets

Wencong Chen; Xiang Zhang; Douglas Faries; Wei Shen; John W. Seaman; James D. Stamey

Purpose: The existence of unmeasured confounding can clearly undermine the validity of an observational study. Methods of conducting sensitivity analyses to evaluate the impact of unmeasured confounding are well established. However, the application of such methods to survival data (“time-to-event” outcomes) has received little attention in the literature. The purpose of this study is to propose a novel Bayesian method to account for unmeasured confounding for survival data. Methods: The Bayesian method is proposed under the assumption that supplementary information on unmeasured confounding, in the form of internal validation data, external validation data, or expert-elicited prior distributions, is available. The method for incorporating such information into the Cox proportional hazards model is described. Simulation studies are performed based on the recently published instrumental variable method to assess the impact of unmeasured confounding and to illustrate the improvement of the proposed method over the naive model, which ignores unmeasured confounding. Results: Simulation studies illustrate the impact of ignoring unmeasured confounding and the effectiveness of our Bayesian approach. The corrected model had substantially less bias, and coverage of 95% intervals was much closer to nominal. Conclusion: The proposed Bayesian method provides a useful and flexible tool for incorporating different types of supplemental information on unmeasured confounding to adjust the treatment estimates when the outcome is survival data. It outperformed the naive model in simulation studies based on a real-world study.


PDA Journal of Pharmaceutical Science and Technology | 2016

A Bayesian Approach to Determination of F, D, and z Values Used in Steam Sterilization Validation

Paul Faya; James D. Stamey; John W. Seaman

For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion.
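The classical point-estimate calculations that such a Bayesian method generalizes can be sketched as follows. These are the standard textbook formulas (F0 as accumulated lethality at a 121.1 °C reference with z = 10 °C, and the D-value from a log-linear survivor curve) applied to hypothetical temperature readings; the Bayesian approach would instead place probability distributions on D and z:

```python
import numpy as np

# Standard lethality calculation: F0 accumulates equivalent minutes
# at the 121.1 degC reference temperature, with z = 10 degC.
T_ref, z = 121.1, 10.0

def f0(temps_c, dt_min):
    """Sum of lethal rates over readings taken dt_min apart."""
    lethal_rate = 10.0 ** ((np.asarray(temps_c, dtype=float) - T_ref) / z)
    return float(np.sum(lethal_rate) * dt_min)

def d_value(t_min, n0, nt):
    """D-value from a log-linear survivor curve: time per 1-log kill."""
    return t_min / (np.log10(n0) - np.log10(nt))

# One reading per minute during a hypothetical heat-up/plateau/cool-down
temps = [110, 115, 118, 121.1, 121.1, 121.1, 118, 115, 110]
print(f0(temps, dt_min=1.0))

# Hypothetical survivor-curve data: 10^6 -> 10^3 organisms in 6 minutes
print(d_value(6.0, 1e6, 1e3))   # D = 2 minutes
```

The log reduction delivered to an organism is then F0 divided by its D-value at 121.1 °C; in the Bayesian treatment, uncertainty in D and z flows through this ratio to give a probability that the process meets a specified sterility criterion.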
