Bradley P. Carlin
University of Minnesota
Publications
Featured research published by Bradley P. Carlin.
Journal of the American Statistical Association | 1996
Mary Kathryn Cowles; Bradley P. Carlin
Abstract A critical issue for users of Markov chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but to date has yielded relatively little of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of 13 convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all of the methods can fail to detect the sorts of convergence failure that they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including ap...
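Among the output-based diagnostics of the kind this review surveys, one of the best known is the Gelman–Rubin potential scale reduction factor, which compares within-chain and between-chain variability across several independently started chains. The following is a minimal sketch (not code from the paper) assuming the chains are stored as an (m, n) array of scalar draws:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for an (m, n) array:
    m chains, each with n draws of one scalar quantity."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_plus / W)               # values near 1 suggest mixing

rng = np.random.default_rng(0)
mixed = rng.normal(size=(4, 1000))             # four chains, same target
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain off target
print(gelman_rubin(mixed))                     # close to 1
print(gelman_rubin(stuck))                     # well above 1
```

As the abstract warns, a value near 1 is necessary but not sufficient evidence of convergence: chains started in the same unrepresentative region can agree with each other while all missing part of the target.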
Journal of the American Statistical Association | 1992
Bradley P. Carlin; Nicholas G. Polson; David S. Stoffer
Abstract A solution to multivariate state-space modeling, forecasting, and smoothing is discussed. We allow for the possibilities of nonnormal errors and nonlinear functionals in the state equation, the observational equation, or both. An adaptive Monte Carlo integration technique known as the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in such a framework. The methodology is a general strategy for obtaining marginal posterior densities of coefficients in the model or of any of the unknown elements of the state space. Missing data problems (including the k-step ahead prediction problem) also are easily incorporated into this framework. We illustrate the broad applicability of our approach with two examples: a problem involving nonnormal error distributions in a linear model setting and a one-step ahead prediction problem in a situation where both the state and observational equations are nonlinear and involve unknown parameters.
Applied Statistics | 1992
Bradley P. Carlin; Alan E. Gelfand; A. F. M. Smith
SUMMARY A general approach to hierarchical Bayes changepoint models is presented. In particular, desired marginal posterior densities are obtained utilizing the Gibbs sampler, an iterative Monte Carlo method. This approach avoids sophisticated analytic and numerical high dimensional integration procedures. We include an application to changing regressions, changing Poisson processes and changing Markov chains. Within these contexts we handle several previously inaccessible problems.
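The changing-Poisson-process case admits a particularly transparent Gibbs sampler: with conjugate Gamma priors on the two rates, both rate parameters have Gamma full conditionals, and the changepoint has a discrete full conditional that can be sampled exactly. A minimal sketch on simulated data (priors and rates here are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated counts with a change at index 40: rate 2 before, rate 6 after.
n, true_k = 100, 40
y = np.concatenate([rng.poisson(2.0, true_k), rng.poisson(6.0, n - true_k)])

a, b = 1.0, 1.0                 # Gamma(a, b) priors on both rates (assumed)
k = n // 2                      # initial changepoint
cums = np.concatenate([[0.0], np.cumsum(y)])   # prefix sums of the counts

draws = []
for it in range(3000):
    # Conjugate Gamma full conditionals for the two Poisson rates.
    lam1 = rng.gamma(a + cums[k], 1.0 / (b + k))
    lam2 = rng.gamma(a + cums[n] - cums[k], 1.0 / (b + n - k))
    # Discrete full conditional for the changepoint k in {1, ..., n-1}.
    ks = np.arange(1, n)
    logp = (cums[ks] * np.log(lam1) - ks * lam1
            + (cums[n] - cums[ks]) * np.log(lam2) - (n - ks) * lam2)
    p = np.exp(logp - logp.max())
    k = rng.choice(ks, p=p / p.sum())
    if it >= 500:               # discard burn-in
        draws.append(k)

print(int(np.median(draws)))    # posterior median changepoint, near 40
```

Iterating these three draws yields a sample from the joint posterior without any high-dimensional integration, which is exactly the appeal the summary describes.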
The American Statistician | 1998
Robert E. Kass; Bradley P. Carlin; Andrew Gelman; Radford M. Neal
Abstract Markov chain Monte Carlo (MCMC) methods make possible the use of flexible Bayesian models that would otherwise be computationally infeasible. In recent years, a great variety of such applications have been described in the literature. Applied statisticians who are new to these methods may have several questions and concerns, however: How much effort and expertise are needed to design and use a Markov chain sampler? How much confidence can one have in the answers that MCMC produces? How does the use of MCMC affect the rest of the model-building process? At the Joint Statistical Meetings in August 1996, a panel of experienced MCMC users discussed these and other issues, as well as various “tricks of the trade.” This article is an edited recreation of that discussion. Its purpose is to offer advice and guidance to novice users of MCMC—and to not-so-novice users as well. Topics include building confidence in simulation results, methods for speeding and assessing convergence, estimating standard error...
The American Statistician | 2004
Xu Guo; Bradley P. Carlin
Many clinical trials and other medical and reliability studies generate both longitudinal (repeated measurement) and survival (time to event) data. Many well-established methods exist for analyzing such data separately, but these may be inappropriate when the longitudinal variable is correlated with patient health status, hence the survival endpoint (as well as the possibility of study dropout). To remedy this, an earlier article proposed a joint model for longitudinal and survival data, obtaining maximum likelihood estimates via the EM algorithm. The longitudinal and survival responses are assumed independent given a linking latent bivariate Gaussian process and available covariates. We develop a fully Bayesian version of this approach, implemented via Markov chain Monte Carlo (MCMC) methods. We use the approach to jointly model the longitudinal and survival data from an AIDS clinical trial comparing two treatments, didanosine (ddI) and zalcitabine (ddC). Despite the complexity of the model, we find it to be relatively straightforward to implement and understand using the WinBUGS software. We compare our results to those obtained from readily available alternatives in SAS Procs MIXED, NLMIXED, PHREG, and LIFEREG, as well as Bayesian analogues of these traditional separate likelihood methods. The joint Bayesian approach appears to offer significantly improved and enhanced estimation of median survival times and other parameters of interest, as well as simpler coding and comparable runtimes.
Journal of the American Statistical Association | 2001
Cong Han; Bradley P. Carlin
The problem of calculating posterior probabilities for a collection of competing models and associated Bayes factors continues to be a formidable challenge for applied Bayesian statisticians. Current approaches that take advantage of modern Markov chain Monte Carlo computing methods include those that attempt to sample over some form of the joint space created by the model indicators and the parameters for each model, others that sample over the model space alone, and still others that attempt to estimate the marginal likelihood of each model directly (because the collection of these is equivalent to the collection of model probabilities themselves). We review several methods and compare them in the context of three examples: a simple regression example, a more challenging hierarchical longitudinal model, and a binary data latent variable model. We find that the joint model-parameter space search methods perform adequately but can be difficult to program and tune, whereas the marginal likelihood methods often are less troublesome and require less additional coding. Our results suggest that the latter methods may be most appropriate for practitioners working in many standard model choice settings, but the former remain important for comparing models of varying dimension (e.g., multiple changepoint models) or models whose parameters cannot easily be updated in relatively few blocks. We caution, however, that all methods we compared require significant human and computer effort, and this suggests that less formal Bayesian model choice methods may offer a more realistic alternative in many cases.
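The marginal likelihood the abstract refers to is the integral of the likelihood over the prior, m(y) = ∫ p(y|θ)p(θ)dθ; Bayes factors are ratios of these across models. The following sketch (an illustration of the quantity, not of any estimator reviewed in the paper) checks a naive Monte Carlo prior-average against the closed form available in a toy conjugate model:

```python
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
y = 1.5                           # a single observation (purely illustrative)

# Model 1: y ~ N(theta, 1) with theta ~ N(0, 1); marginal is N(y; 0, sqrt(2)).
m1_exact = normal_pdf(y, 0.0, np.sqrt(2.0))
# Model 0: y ~ N(0, 1), no free parameter; its marginal is just the density.
m0 = normal_pdf(y, 0.0, 1.0)

# Naive Monte Carlo estimate of m1: average the likelihood over prior draws.
theta = rng.normal(0.0, 1.0, size=200_000)
m1_mc = normal_pdf(y, theta, 1.0).mean()

print(m1_exact, m1_mc)            # the two estimates should agree closely
print(m1_exact / m0)              # Bayes factor in favour of Model 1
```

In realistic hierarchical models no closed form exists and this prior-average estimator is hopelessly inefficient, which is precisely why the MCMC-based marginal likelihood and joint model-parameter space methods compared in the article are needed.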
Statistics and Computing | 1999
Siddhartha Chib; Bradley P. Carlin
Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time) they are, however, often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or Metropolis-Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
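The payoff from blocking is easy to see even in a toy target. In a strongly correlated bivariate normal, one-at-a-time Gibbs updates produce a highly autocorrelated chain, while drawing the whole vector as a single block yields essentially independent draws. A minimal sketch (not from the paper, which treats longitudinal mixed models):

```python
import numpy as np

rng = np.random.default_rng(3)
rho = 0.95                        # strong correlation slows single-site Gibbs
n_iter = 20_000

# Single-site Gibbs for (x1, x2) bivariate normal with correlation rho.
x1 = x2 = 0.0
single = np.empty(n_iter)
sd = np.sqrt(1.0 - rho ** 2)
for t in range(n_iter):
    x1 = rng.normal(rho * x2, sd)     # draw x1 | x2
    x2 = rng.normal(rho * x1, sd)     # draw x2 | x1
    single[t] = x1

# Fully blocked sampler: draw (x1, x2) jointly, giving independent draws.
cov = np.array([[1.0, rho], [rho, 1.0]])
blocked = rng.multivariate_normal(np.zeros(2), cov, size=n_iter)[:, 0]

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

print(lag1_autocorr(single))      # high, roughly rho**2
print(lag1_autocorr(blocked))     # near zero
```

The paper's contribution is to achieve the blocked behaviour in general linear mixed models, where the joint draw is nontrivial, via an identity of Chib (1995).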
Journal of the American Statistical Association | 1998
Malay Ghosh; Kannan Natarajan; T. W. F. Stroud; Bradley P. Carlin
Abstract Bayesian methods have been used quite extensively in recent years for solving small-area estimation problems. Particularly effective in this regard has been the hierarchical or empirical Bayes approach, which is especially suitable for a systematic connection of local areas through models. However, the development to date has mainly concentrated on continuous-valued variates. Often the survey data are discrete or categorical, so that hierarchical or empirical Bayes techniques designed for continuous variates are inappropriate. This article considers hierarchical Bayes generalized linear models for a unified analysis of both discrete and continuous data. A general theorem is provided that ensures the propriety of posteriors under diffuse priors. This result is then extended to the case of spatial generalized linear models. The hierarchical Bayes procedure is implemented via Markov chain Monte Carlo integration techniques. Two examples (one featuring spatial correlation structure) are given to illu...
Journal of the American Statistical Association | 1992
Nicholas Lange; Bradley P. Carlin; Alan E. Gelfand
Abstract Taking the absolute number of CD4 T-cells as a marker of disease progression for persons infected with the human immunodeficiency virus (HIV), we model longitudinal series of such counts for a sample of 327 subjects in the San Francisco Men's Health Study (Waves 1–8, excluding zidovudine cases). We conduct a fully Bayesian analysis of these data. We employ individual level nonlinear models incorporating such critical features as incomplete and unbalanced data, population covariates (age at study entry and an indicator of self-reported herpes simplex virus infection), unobserved random change points, heterogeneous variances, and errors in variables. We construct prior distributions using results of previously published work from several different sources and data from HIV-negative men in this study. We also develop an approach to Bayesian model choice and individual prediction. Our analysis provides marginal posterior distributions for all population parameters in our model for this cohort. Using ...
Statistics in Medicine | 2000
Lynn E. Eberly; Bradley P. Carlin
The marked increase in popularity of Bayesian methods in statistical practice over the last decade owes much to the simultaneous development of Markov chain Monte Carlo (MCMC) methods for the evaluation of requisite posterior distributions. However, along with this increase in computing power has come the temptation to fit models larger than the data can readily support, meaning that often the propriety of the posterior distributions for certain parameters depends on the propriety of the associated prior distributions. An important example arises in spatial modelling, wherein separate random effects for capturing unstructured heterogeneity and spatial clustering are of substantive interest, even though only their sum is well identified by the data. Increasing the informative content of the associated prior distributions offers an obvious remedy, but one that hampers parameter interpretability and may also significantly slow the convergence of the MCMC algorithm. In this paper we investigate the relationship among identifiability, Bayesian learning and MCMC convergence rates for a common class of spatial models, in order to provide guidance for prior selection and algorithm tuning. We are able to elucidate the key issues with relatively simple examples, and also illustrate the varying impacts of covariates, outliers and algorithm starting values on the resulting algorithms and posterior distributions.