Gregory Campbell
Center for Devices and Radiological Health
Publications
Featured research published by Gregory Campbell.
Academic Radiology | 2000
Sergey V. Beiden; Robert F. Wagner; Gregory Campbell
RATIONALE AND OBJECTIVES The purpose of this study was to develop an alternative approach to random-effects, receiver operating characteristic analysis inspired by a general formulation of components-of-variance models. The alternative approach is a higher-order generalization of the Dorfman, Berbaum, and Metz (DBM) approach that yields additional information on the variance structure of the problem. MATERIALS AND METHODS Six population experiments were designed to determine the six variance components in the DBM model. For practical problems, in which only a finite set of readers and patients are available, six analogous bootstrap experiments may be substituted for the population experiments to estimate the variance components. Monte Carlo simulations were performed on the population experiments, and those results were compared with the corresponding multiple-bootstrap estimates and those obtained with the DBM approach. Confidence intervals on the difference of ROC parameters for competing diagnostic modalities were estimated, and corresponding comparisons were made. RESULTS For mean values, the agreement of present estimates of variance structures with population results was excellent and, when suitably weighted and mixed, similar to or closer than that with the DBM method. For many variance structures, the confidence intervals in this study for the difference in ROC area between modalities were comparable to those with the DBM method. When reader variability was large, however, mean confidence intervals from this study were tighter than those with the DBM method and closer to population results. CONCLUSION The jackknife approach of DBM provides a linear approximation to receiver-operating-characteristic statistics that are intrinsically nonlinear. The multiple-bootstrap technique of this study, however, provides a more general, nonparametric, maximum-likelihood approach. It also yields estimates of the variance structure previously unavailable.
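The core idea behind the multiple-bootstrap experiments can be illustrated with a stripped-down sketch: resampling only cases, only readers, or both isolates different contributions to the variance of the reader-averaged empirical AUC. The code below is a two-factor simplification on simulated data, not the paper's six-experiment design; the toy data and all helper names are assumptions made for illustration.

```python
# A minimal sketch (not the paper's six-experiment design) of the bootstrap idea:
# resampling only cases, only readers, or both isolates different contributions
# to the variance of the reader-averaged empirical AUC. Toy data and all names
# below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def empirical_auc(scores, truth):
    """Mann-Whitney (trapezoidal) AUC for one reader's scores; ties count 1/2."""
    pos, neg = scores[truth == 1], scores[truth == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def mean_auc(score_matrix, truth):
    """Average empirical AUC over readers (rows of score_matrix)."""
    return np.mean([empirical_auc(row, truth) for row in score_matrix])

def bootstrap_variance(score_matrix, truth, resample_readers, resample_cases, n_boot=500):
    """Variance of the mean AUC under a chosen resampling experiment."""
    n_readers, n_cases = score_matrix.shape
    aucs = []
    for _ in range(n_boot):
        r = rng.integers(0, n_readers, n_readers) if resample_readers else np.arange(n_readers)
        c = rng.integers(0, n_cases, n_cases) if resample_cases else np.arange(n_cases)
        aucs.append(mean_auc(score_matrix[np.ix_(r, c)], truth[c]))
    return np.var(aucs, ddof=1)

# Toy data: 5 readers scoring 60 cases, half of them diseased.
truth = np.array([1] * 30 + [0] * 30)
scores = rng.normal(1.0 * truth, 1.0, size=(5, 60))

print("cases resampled:  ", bootstrap_variance(scores, truth, False, True))
print("readers resampled:", bootstrap_variance(scores, truth, True, False))
print("both resampled:   ", bootstrap_variance(scores, truth, True, True))
```

Roughly speaking, the full method sets up six such resampling experiments whose expected variances are known combinations of the six components of the DBM model, which can then be solved for.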
Academic Radiology | 2002
Robert F. Wagner; Sergey V. Beiden; Gregory Campbell; Charles E. Metz; William M. Sacks
In the last 2 decades major advances have been made in the field of assessment methods for medical imaging and computer-assist systems through the use of the paradigm of the receiver operating characteristic (ROC) curve. In the most recent decade this methodology was extended to embrace the complication of reader variability through advances in the multiple-reader, multiple-case (MRMC) ROC measurement and analysis paradigm. Although this approach has been widely adopted by the imaging research community, some investigators appear averse to it, possibly from concern that it could place a greater burden on the scarce resources of patient cases and readers compared to the requirements of alternative methods. The present communication argues, however, that the MRMC ROC approach to assessment in the context of reader variability may be the most resource-efficient approach available. Moreover, alternative approaches may also be statistically uninterpretable with regard to estimated summary measures of performance and their uncertainties. The authors propose that the MRMC ROC approach be considered even more widely by the larger community with responsibilities for the introduction and dissemination of medical imaging technologies to society. General principles of study design are reviewed, and important contemporary clinical trials are used as examples.
Academic Radiology | 2001
Sergey V. Beiden; Robert F. Wagner; Gregory Campbell; Charles E. Metz; Yulei Jiang
RATIONALE AND OBJECTIVES Several of the authors have previously published an analysis of multiple sources of uncertainty in the receiver operating characteristic (ROC) assessment and comparison of diagnostic modalities. The analysis assumed that the components of variance were the same for the modalities under comparison. The purpose of the present work is to obtain a generalization that does not require that assumption. MATERIALS AND METHODS The generalization is achieved by splitting three of the six components of variance in the previous model into modality-dependent contributions. Two distinct formulations of this approach can be obtained from alternative choices of the three components to be split; however, a one-to-one relationship exists between the magnitudes of the components estimated from these two formulations. RESULTS The method is applied to a study of multiple readers, with and without the aid of a computer-assist modality, performing the task of discriminating between benign and malignant clusters of microcalcifications. Analysis according to the first method of splitting shows large decreases in the reader and reader-by-case components of variance when the computer assist is used by the readers. Analysis in terms of the alternative splitting shows large decreases in the corresponding modality-interaction components. CONCLUSION A solution to the problem of multivariate ROC analysis without the assumption of equal variance structure across modalities has been provided. Alternative formulations lead to consistent results related by a one-to-one mapping. A surprising result is that estimates of confidence intervals and numbers of cases and readers required for a specified confidence interval remain the same in the more general model as in the restricted model.
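For orientation, the equal-variance model referenced in the abstract is commonly written as a six-term decomposition of the variance of a single reader-by-modality accuracy estimate; the notation below is an illustrative rendering, not a transcription of the paper's equations.

\[
\operatorname{Var}\bigl(\hat{A}_{ij}\bigr)
= \sigma^{2}_{R} + \sigma^{2}_{C} + \sigma^{2}_{RC}
+ \sigma^{2}_{\tau R} + \sigma^{2}_{\tau C} + \sigma^{2}_{\tau RC},
\]

where \(R\), \(C\), and \(\tau\) denote reader, case, and modality effects for the estimate \(\hat{A}_{ij}\) of modality \(i\) read by reader \(j\). The generalization described above lets three of these terms take modality-specific values, for example \(\sigma^{2}_{R} \to \sigma^{2}_{R(i)}\), \(\sigma^{2}_{C} \to \sigma^{2}_{C(i)}\), and \(\sigma^{2}_{RC} \to \sigma^{2}_{RC(i)}\) in the first formulation, with the alternative formulation splitting the three modality-interaction terms instead.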
Medical Imaging 2000: Image Perception and Performance | 2000
Sergey V. Beiden; Gregory Campbell; Kristen L. Meier; Robert F. Wagner
Henkelman, Kay, and Bronskill (HKB) showed that although the problem of ROC analysis without truth is underconstrained and thus not uniquely solvable in one dimension (one diagnostic test), it is in principle solvable in two or more dimensions. However, they gave no analysis of the resulting uncertainties. The present work provides a maximum-likelihood solution using the EM (expectation-maximization) algorithm for the two-dimensional case. We also provide an analysis of uncertainties in terms of Monte Carlo simulations as well as estimates based on Fisher information matrices for the complete- and missing-data problems. We find that the number of patients required for a given precision of estimate for the truth-unknown problem is a very large multiple of that required for the corresponding truth-known case.
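To make the construction concrete, the sketch below runs an EM algorithm on a simplified version of the two-dimensional truth-unknown problem: two test scores per patient, latent disease status, and conditionally independent Gaussian scores within each latent class. This is an expository assumption, not the binormal model or the code used in the paper; all data are simulated.

```python
# A minimal EM sketch for "ROC without truth": two diagnostic tests per patient,
# no gold standard, disease status treated as a latent class. For simplicity the
# two tests are assumed conditionally independent Gaussians given the latent
# class, with a common per-test variance; everything here is illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy data: 1000 patients, 30% diseased, two test scores per patient.
z = rng.random(1000) < 0.3
x = np.column_stack([rng.normal(1.5 * z, 1.0), rng.normal(1.2 * z, 1.0)])

# Initial parameter guesses: prevalence, per-class means, per-test SDs.
prev, mu0, mu1, sd = 0.5, np.zeros(2), np.ones(2), np.ones(2)

for _ in range(200):
    # E-step: posterior probability of disease for each patient.
    like1 = norm.pdf(x, mu1, sd).prod(axis=1) * prev
    like0 = norm.pdf(x, mu0, sd).prod(axis=1) * (1 - prev)
    w = like1 / (like1 + like0)
    # M-step: weighted updates of prevalence, class means, pooled SDs.
    prev = w.mean()
    mu1 = (w[:, None] * x).sum(axis=0) / w.sum()
    mu0 = ((1 - w)[:, None] * x).sum(axis=0) / (1 - w).sum()
    var = (w[:, None] * (x - mu1) ** 2 + (1 - w)[:, None] * (x - mu0) ** 2).mean(axis=0)
    sd = np.sqrt(var)

# Binormal-style AUC estimate for each test from the fitted latent classes.
auc = norm.cdf((mu1 - mu0) / np.sqrt(2 * sd ** 2))
print("estimated prevalence:", prev, "estimated AUCs:", auc)
```

The uncertainty analysis described in the abstract would then come from repeating such fits over Monte Carlo replicates or from the Fisher information of the complete- and missing-data likelihoods.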
Journal of Biopharmaceutical Statistics | 2007
Gregory Campbell
Medical devices play a vital role in people's lives, as these products are revolutionizing medicine with breathtaking advances in both the treatment and the detection of many diseases. While a similar, primarily therapeutic, revolution is ongoing in the pharmaceutical world, the focus here is the effect this device revolution is having on the statistical world. The similarities and differences between medical devices and pharmaceutical drugs are explored in terms of their natures, industries, and how they are regulated in the U.S. and globally. Statistical issues concerning the evaluation of devices versus those of drugs are compared and contrasted. These trends are creating new challenges for the statistical world in the development and evaluation of these new medical products.
Journal of Biopharmaceutical Statistics | 2011
Gregory Campbell; Gene Pennello; Lilly Q. Yue
Handling missing data is an important consideration in the analysis of data from all kinds of medical device studies. Missing data in medical device studies can arise for all the reasons one might expect in pharmaceutical clinical trials. In addition, they occur by design, in nonrandomized device studies, and in evaluations of diagnostic tests. For dichotomous endpoints, a tipping point analysis can be used to examine nonparametrically the sensitivity of conclusions to missing data. In general, sensitivity analysis is an important tool to study deviations from simple assumptions about missing data, such as the data being missing at random. Approaches to missing data in Bayesian trials are discussed, including sensitivity analysis. Many types of missing data that can occur with diagnostic test evaluations are surveyed. Careful planning and conduct are recommended to minimize missing data. Although difficult, the prespecification of all missing data analysis strategies is encouraged before any data are collected.
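As an illustration of the tipping point idea for a dichotomous endpoint, the sketch below imputes every possible allocation of successes among the missing subjects in two arms and flags which imputed scenarios would overturn a significant result. The counts and the pooled two-proportion z-test are hypothetical choices for exposition, not taken from the article.

```python
# A minimal tipping-point sketch for a dichotomous endpoint with missing outcomes
# in two arms: enumerate every split of successes among the missing subjects and
# mark the scenarios in which the significant result is lost. All counts and the
# choice of test are illustrative assumptions.
import numpy as np
from scipy.stats import norm

# Observed data (hypothetical): successes / completers, plus missing subjects.
succ_t, n_t, miss_t = 78, 95, 5    # treatment arm
succ_c, n_c, miss_c = 60, 93, 7    # control arm

def p_value(s1, n1, s2, n2):
    """Two-sided two-proportion z-test with pooled variance (illustrative)."""
    p1, p2 = s1 / n1, s2 / n2
    p = (s1 + s2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return 2 * norm.sf(abs(p1 - p2) / se)

# Grid over all imputations: k_t missing treatment subjects counted as successes,
# k_c missing control subjects counted as successes.
tipping = np.zeros((miss_t + 1, miss_c + 1), dtype=bool)
for k_t in range(miss_t + 1):
    for k_c in range(miss_c + 1):
        p = p_value(succ_t + k_t, n_t + miss_t, succ_c + k_c, n_c + miss_c)
        tipping[k_t, k_c] = p >= 0.05   # True where significance is lost

print(tipping)  # the boundary between False and True cells is the tipping point
```

If the scenarios that overturn the conclusion are implausible given the observed data and the reasons for missingness, the result is considered robust; otherwise the missing data are a genuine threat to the conclusion.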
Clinical Trials | 2005
Gregory Campbell
In 1996, Dr Bruce Burlington, the then Director of the FDA's Center for Devices and Radiological Health (CDRH), along with Dr Larry Kessler, the then Director of the Office of Surveillance and Biometrics (OSB, CDRH), and this author, as Director of CDRH's Division of Biostatistics in OSB, began to grapple with whether and how Bayesian statistics might be brought to bear in the premarket evaluation of medical devices within the FDA. A team within the Division of Biostatistics (DBS) was formed to consider this question, and that led to the 18-month visit by Dr Don Malec, then of the National Center for Health Statistics. Professor Don Berry was commissioned to write a short white paper for CDRH; the result was the detailed Duke technical report [1]. In 1998, Drs Telba Irony and Gene Pennello were recruited to the CDRH statistical staff for the CDRH Bayesian effort. Over the years a number of Bayesians have visited CDRH to give seminars and short courses for both statisticians and nonstatisticians. These include Profs Don Berry, Steve Goodman, Frank Harrell, Jay Herson, Joseph Kadane, Tom Louis, Don Rubin, Robert Kass and Larry Wasserman.
Journal of Biopharmaceutical Statistics | 2004
Gregory Campbell
The genomics revolution is reverberating throughout the worlds of pharmaceutical drugs, genetic testing, and statistical science. This revolution, which uses single nucleotide polymorphisms (SNPs) and gene expression technology, including cDNA and oligonucleotide microarrays, for a range of tests from home-brews to high-complexity lab kits, can allow the selection or exclusion of patients for therapy (responders or poor metabolizers). The wide variety of US regulatory mechanisms for these tests is discussed. Clinical studies to evaluate the performance of such tests need to follow statistical principles for sound diagnostic test design. Statistical methodology to evaluate such studies can be wide-ranging, including receiver operating characteristic (ROC) methodology, logistic regression, discriminant analysis, multiple comparison procedures, resampling, Bayesian hierarchical modeling, and recursive partitioning, as well as exploratory techniques such as data mining. Recent examples of approved genetic tests are discussed.
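For a flavor of the basic accuracy summaries that such diagnostic test evaluations build on, the sketch below computes sensitivity and specificity with exact (Clopper-Pearson) confidence intervals from a hypothetical 2x2 table of test results against a reference standard; it is a minimal illustration, not a method prescribed in the article.

```python
# A minimal sketch: sensitivity and specificity of a binary genetic test against
# a reference standard, with exact Clopper-Pearson intervals. The 2x2 counts are
# hypothetical.
from scipy.stats import beta

def exact_ci(x, n, alpha=0.05):
    """Clopper-Pearson interval for a binomial proportion x/n."""
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

tp, fn, tn, fp = 88, 12, 180, 20            # hypothetical 2x2 table
sens, spec = tp / (tp + fn), tn / (tn + fp)
print("sensitivity", sens, exact_ci(tp, tp + fn))
print("specificity", spec, exact_ci(tn, tn + fp))
```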
Proceedings of SPIE- The International Society for Optical Engineering | 2001
Sergey V. Beiden; Robert F. Wagner; Gregory Campbell; Charles E. Metz; Yulei Jiang; Heang Ping Chan
The metaphor of the Holy Grail is used here to refer to the classic and elusive problem in medical imaging of predicting the ranking of the clinical performance of competing imaging modalities from the ranking obtained from physical laboratory measurements and signal-detection analysis, or from simple phantom studies. A principal obstacle is that variability among readers can mask the ranking of the underlying technologies in clinical studies. We show how the use of the multiple-reader, multiple-case (MRMC) ROC paradigm and new analytical techniques allows this masking effect to be quantified in terms of components-of-variance models. Moreover, we demonstrate how the components of variance associated with reader variability may be reduced when readers have the benefit of computer-assist reading aids. The remaining variability will be due to the case components, and these reflect the contribution of the technology without the masking effect of the reader. This suggests that prediction of clinical ranking of imaging systems in terms of physical measurements may become a much more tractable task in a world that includes MRMC ROC analysis of performance of radiologists with the advantage of computer-assisted reading.
Academic Radiology | 2011
Frank W. Samuelson; Brandon D. Gallas; Kyle J. Myers; Nicholas Petrick; Paul Pinsky; Berkman Sahiner; Gregory Campbell; Gene Pennello
Dear Editor: The article by Gur et al (1) presents interesting data for those who perform reader studies of radiological devices. The article reports differences between two methods of estimating a change in the probability of correct discrimination, or area under the receiver operating characteristic (ROC) curve (AUC). It uses data from one particular study (2) in which the breast cancer detection performance of full-field digital mammography (FFDM) was compared to that of FFDM plus digital breast tomosynthesis (DBT), a new technology then under investigational use. Both estimates use a nonparametric empirical method, but one estimate uses multicategory or semicontinuous rating data, whereas the other uses two-category or binary data. We want to highlight three points relevant to this study and other controlled studies undertaken before a technology is in wide use in the clinical setting.
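The distinction the letter draws can be reproduced on simulated data: the empirical (Mann-Whitney) AUC computed from multicategory ratings generally differs from the AUC computed after those ratings are collapsed to a binary recall decision, and the binary-data AUC reduces to the average of sensitivity and specificity. The sketch below is illustrative only; the rating scale, threshold, and data are assumptions, not the cited study's.

```python
# A minimal sketch contrasting two empirical AUC estimates: one from
# multicategory (semicontinuous) ratings and one from the same readings
# collapsed to a binary recall decision. All data are simulated.
import numpy as np

rng = np.random.default_rng(2)

def empirical_auc(scores, truth):
    """Trapezoidal (Mann-Whitney) AUC; ties contribute one half."""
    pos, neg = scores[truth == 1], scores[truth == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

truth = (rng.random(400) < 0.25).astype(int)                        # cancer status
ratings = np.clip(np.round(rng.normal(3 + 2 * truth, 1.5)), 1, 7)   # 7-point scale
binary = (ratings >= 5).astype(float)                               # recall / no recall

print("multicategory AUC:", empirical_auc(ratings, truth))
print("binary AUC:       ", empirical_auc(binary, truth))   # equals (sens + spec) / 2
```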