
Publication


Featured research published by Gene Pennello.


Statistical Methods in Medical Research | 2015

Quantitative imaging biomarkers: A review of statistical methods for technical performance assessment

David Raunig; Lisa M. McShane; Gene Pennello; Constantine Gatsonis; Paul L. Carson; James T. Voyvodic; Richard Wahl; Brenda F. Kurland; Adam J. Schwarz; Mithat Gonen; Gudrun Zahlmann; Marina Kondratovich; Kevin O’Donnell; Nicholas Petrick; Patricia E. Cole; Brian S. Garra; Daniel C. Sullivan

Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies, so that results from multiple studies can be compared, contrasted, or combined.
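The repeatability metrology the abstract refers to can be illustrated with a short sketch. This is not code from the paper; it computes the pooled within-subject standard deviation and the conventional repeatability coefficient RC = 2.77 * wSD (one common test-retest precision summary) from replicate measurements:

```python
from statistics import mean

def repeatability_coefficient(replicates):
    """Pooled within-subject standard deviation (wSD) and repeatability
    coefficient RC = 2.77 * wSD (i.e. 1.96 * sqrt(2) * wSD).
    `replicates` is a list of per-case lists of repeated measurements
    taken under identical conditions; roughly 95% of test-retest
    differences are expected to fall within RC."""
    ss, dof = 0.0, 0
    for vals in replicates:
        m = mean(vals)
        ss += sum((v - m) ** 2 for v in vals)   # within-case sum of squares
        dof += len(vals) - 1                    # within-case degrees of freedom
    wsd = (ss / dof) ** 0.5
    return wsd, 2.77 * wsd
```

With two replicates per case this reduces to the familiar half-mean-squared-difference estimate of within-subject variance.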


Clinical Infectious Diseases | 2015

Desirability of Outcome Ranking (DOOR) and Response Adjusted for Duration of Antibiotic Risk (RADAR)

Scott R. Evans; Daniel Rubin; Dean Follmann; Gene Pennello; W. Charles Huskins; John H. Powers; David A. Schoenfeld; Christy Chuang-Stein; Sara E. Cosgrove; Vance G. Fowler; Ebbing Lautenbach; Henry F. Chambers

Clinical trials that compare strategies to optimize antibiotic use are of critical importance but are limited by competing risks that distort outcome interpretation, complexities of noninferiority trials, large sample sizes, and inadequate evaluation of benefits and harms at the patient level. The Antibacterial Resistance Leadership Group strives to overcome these challenges through innovative trial design. Response adjusted for duration of antibiotic risk (RADAR) is a novel methodology utilizing a superiority design and a 2-step process: (1) categorizing patients into an overall clinical outcome (based on benefits and harms), and (2) ranking patients with respect to a desirability of outcome ranking (DOOR). DOORs are constructed by assigning higher ranks to patients with (1) better overall clinical outcomes and (2) shorter durations of antibiotic use for similar overall clinical outcomes. DOOR distributions are compared between antibiotic use strategies. The probability that a randomly selected patient will have a better DOOR if assigned to the new strategy is estimated. DOOR/RADAR represents a new paradigm in assessing the risks and benefits of new strategies to optimize antibiotic use.
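The DOOR comparison the abstract ends with, the probability that a randomly selected patient on the new strategy has a better DOOR, has a simple nonparametric estimate: the Mann-Whitney win probability, with ties counted as one half. A minimal sketch, not the authors' implementation:

```python
from itertools import product

def door_probability(new, old):
    """Estimate P(a randomly selected patient on the new strategy has a
    better DOOR than one on the old strategy), where higher rank = more
    desirable outcome. Ties count 1/2 (the Mann-Whitney probability);
    0.5 means the strategies are indistinguishable on DOOR."""
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a, b in product(new, old))
    return wins / (len(new) * len(old))
```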


Statistical Methods in Medical Research | 2015

Quantitative imaging biomarkers: A review of statistical methods for computer algorithm comparisons

Nancy A. Obuchowski; Anthony P. Reeves; Erich P. Huang; Xiao Feng Wang; Andrew J. Buckler; Hyun J. Kim; Huiman X. Barnhart; Edward F. Jackson; Maryellen L. Giger; Gene Pennello; Alicia Y. Toledano; Jayashree Kalpathy-Cramer; Tatiyana V. Apanasovich; Paul E. Kinahan; Kyle J. Myers; Dmitry B. Goldgof; Daniel P. Barboriak; Robert J. Gillies; Lawrence H. Schwartz; Daniel C. Sullivan

Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs in which the true value is known (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research.


Journal of Biopharmaceutical Statistics | 2007

Experience with Reviewing Bayesian Medical Device Trials

Gene Pennello; Laura Thompson

The purpose of this paper is to present a statistical reviewer's perspective on some technical aspects of reviewing Bayesian medical device trials submitted to the Food and Drug Administration. The discussion reflects the experiences of the authors and should not be construed as official guidance from the FDA. A variety of applications are described, reflecting our experience with therapeutic and diagnostic devices. In addition to Bayesian analysis of trials, Bayesian trial design and Bayesian monitoring are discussed. Analyses were implemented in WinBUGS (http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml), with the code provided.
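The paper's analyses were run in WinBUGS; as a stand-in, the simplest Bayesian calculation that arises in a single-arm device trial, a conjugate beta-binomial posterior probability of meeting a performance goal, needs no sampler at all. A sketch under assumed inputs (the Beta(1, 1) prior and the goal `p0` are illustrative, not from the paper):

```python
from math import exp, lgamma, log

def posterior_prob_exceeds(successes, failures, p0, a=1.0, b=1.0, n=200_000):
    """P(p > p0 | data) under a Beta(a, b) prior and a binomial
    likelihood: the upper tail of the Beta(a + successes, b + failures)
    posterior, approximated by midpoint numerical integration."""
    aa, bb = a + successes, b + failures
    # log normalizing constant of the posterior Beta density
    log_norm = lgamma(aa + bb) - lgamma(aa) - lgamma(bb)
    h = (1.0 - p0) / n
    total = 0.0
    for i in range(n):
        p = p0 + (i + 0.5) * h
        total += exp(log_norm + (aa - 1) * log(p) + (bb - 1) * log(1 - p)) * h
    return total
```

In a real submission this tail probability would be compared against a prespecified decision threshold agreed upon at the design stage.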


Statistical Methods in Medical Research | 2015

Statistical issues in the comparison of quantitative imaging biomarker algorithms using pulmonary nodule volume as an example

Nancy A. Obuchowski; Huiman X. Barnhart; Andrew J. Buckler; Gene Pennello; Xiao Feng Wang; Jayashree Kalpathy-Cramer; Hyun J. Kim; Anthony P. Reeves

Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies.


Clinical Trials | 2013

Analytical and clinical evaluation of biomarkers assays: When are biomarkers ready for prime time?

Gene Pennello

Background: Biomarker assays can be evaluated for analytical performance (the ability of the assay to measure the biomarker quantity) and clinical performance (the ability of the assay result to inform on the clinical condition of interest). Additionally, a biomarker assay is said to have clinical utility if it ultimately improves patient outcomes when used as intended.

Purpose: This article reviews analytical and clinical performance studies of biomarker assays, as well as some designs of clinical utility studies.

Results: Appropriate design and statistical analysis of analytical and clinical evaluation studies depend on the intended clinical use of the test. Key aspects of valid performance studies include using subjects who are independent of those used to develop the test, masking users of the test to any other available test or reference results, and including subjects with unavailable results in the primary statistical analysis in an intention-to-diagnose analysis. Ingenuity in study design and analysis may be required for efficient and unbiased estimation of performance.

Limitations: Performance studies need to be carefully planned, as they can be prone to many sources of bias. Analytical inaccuracy can hamper the clinical performance of biomarkers.

Conclusions: As biomedical research and technology advance, challenges in study design and statistical analysis will continue to emerge for analytical and clinical performance studies of biomarker assays. Although not emphasized in some circles, the analytical performance of a biomarker assay is important to characterize. Analytical performance studies have many study design and statistical analysis challenges that deserve further attention.
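The point about including subjects with unavailable results in an intention-to-diagnose analysis can be made concrete. This sketch uses one common conservative convention (non-evaluable results stay in the denominators and count as incorrect calls); conventions vary and this particular one is an assumption, not prescribed by the article:

```python
def diagnostic_performance(results, truth):
    """Sensitivity and specificity under an intention-to-diagnose
    convention: subjects with an unavailable test result (None) are
    retained in the denominators and counted as incorrect calls.
    `results` holds True/False/None calls; `truth` holds the
    reference-standard disease status for each subject."""
    tp = sum(1 for r, t in zip(results, truth) if t and r is True)
    tn = sum(1 for r, t in zip(results, truth) if not t and r is False)
    pos = sum(1 for t in truth if t)
    neg = len(truth) - pos
    return tp / pos, tn / neg
```

Dropping the `None` subjects instead would typically inflate both estimates, which is exactly the bias an intention-to-diagnose analysis guards against.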


Journal of Biopharmaceutical Statistics | 2011

Missing Data in the Regulation of Medical Devices

Gregory Campbell; Gene Pennello; Lilly Q. Yue

Handling missing data is an important consideration in the analysis of data from all kinds of medical device studies. Missing data in medical device studies can arise for all the reasons one might expect in pharmaceutical clinical trials. In addition, they occur by design, in nonrandomized device studies, and in evaluations of diagnostic tests. For dichotomous endpoints, a tipping point analysis can be used to examine nonparametrically the sensitivity of conclusions to missing data. In general, sensitivity analysis is an important tool to study deviations from simple assumptions about missing data, such as the data being missing at random. Approaches to missing data in Bayesian trials are discussed, including sensitivity analysis. Many types of missing data that can occur with diagnostic test evaluations are surveyed. Careful planning and conduct are recommended to minimize missing data. Although difficult, the prespecification of all missing data analysis strategies is encouraged before any data are collected.
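The tipping point analysis described for dichotomous endpoints can be sketched directly: enumerate every way the missing outcomes could have resolved and see how far from the observed result the conclusion would tip. This toy version (not from the paper) reports the risk difference over the full imputation grid; a real analysis would attach a test or confidence interval to each cell:

```python
def tipping_point_grid(s1, f1, m1, s2, f2, m2):
    """Risk difference (arm 1 minus arm 2) for every possible
    resolution of the missing outcomes. s/f/m are the observed
    successes, failures, and missing counts in each arm; the key
    (k1, k2) means k1 missing outcomes in arm 1 and k2 in arm 2
    are imputed as successes."""
    grid = {}
    for k1 in range(m1 + 1):
        for k2 in range(m2 + 1):
            p1 = (s1 + k1) / (s1 + f1 + m1)
            p2 = (s2 + k2) / (s2 + f2 + m2)
            grid[(k1, k2)] = p1 - p2
    return grid
```

If the study conclusion only reverses in implausibly extreme corners of the grid (e.g. all missing treatment-arm subjects failing while all missing control-arm subjects succeed), the result is robust to the missing data.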


Biometrics | 2010

Multiple McNemar Tests

Peter H. Westfall; James Troendle; Gene Pennello

Methods for performing multiple tests of paired proportions are described. A broadly applicable method using McNemar's exact test and the exact distributions of all test statistics is developed; the method controls the familywise error rate in the strong sense under minimal assumptions. A closed-form (not simulation-based) algorithm for carrying out the method is provided. A bootstrap alternative is developed to account for correlation structures. Operating characteristics of these and other methods are evaluated via a simulation study. Applications to multiple comparisons of predictive models for disease classification and to postmarket surveillance of adverse events are given.
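A single exact McNemar test, the building block the paper's familywise procedure multiplies up, depends only on the discordant-pair counts. A minimal sketch (the multiplicity adjustment via the exact joint distributions developed in the paper is not shown):

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Two-sided exact McNemar p-value from the discordant-pair counts
    b (pair outcomes 1/0) and c (pair outcomes 0/1). Under the null of
    equal paired proportions, b ~ Binomial(b + c, 1/2); the p-value
    doubles the smaller binomial tail, capped at 1."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```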


Statistics in Biopharmaceutical Research | 2015

Statistical Considerations on Subgroup Analysis in Clinical Trials

Mohamed Alosh; Kathleen Fritsch; Mohammad F. Huque; Kooros Mahjoob; Gene Pennello; Mark Rothmann; Estelle Russek-Cohen; Fraser Smith; Stephen Wilson; Lilly Q. Yue

The objective of subgroup analysis of a clinical trial is to investigate the consistency or heterogeneity of the treatment effect across subgroups defined by background characteristics. As such, subgroup analysis plays an essential role in the interpretation of clinical trial findings. Consistency of the treatment effect across trial subgroups indicates that the average treatment effect is broadly applicable regardless of specific background characteristics, whereas substantial heterogeneity may indicate that the treatment benefit pertains only to a subset of the population. However, heterogeneity in the observed treatment effect across subgroups can arise by chance as a result of partitioning the population into several subgroups. Furthermore, clinical trials are generally not powered to detect heterogeneity, so statistical tests may fail to detect real heterogeneity. In this article, we aim to: (i) outline the major issues underlying subgroup analysis in clinical trials and provide general statistical guidance for the interpretation of its findings, (ii) provide statistical perspectives on the design and analysis of a clinical trial that aims to establish efficacy in a targeted subgroup along with that of its overall population, and (iii) highlight some of the underlying assumptions and issues relevant to Bayesian subgroup analysis, subgroup considerations for noninferiority trials, personalized medicine, subgroup misclassification, and, finally, subgroup analysis for safety assessment.
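The heterogeneity testing the article notes is typically underpowered is, in its simplest form, a z-test for treatment-by-subgroup interaction between two independent subgroup effect estimates. A generic sketch, not taken from the article:

```python
from math import erf, sqrt

def interaction_test(est1, se1, est2, se2):
    """z-statistic and two-sided p-value for the difference between two
    independent subgroup treatment-effect estimates: a simple test of
    treatment-by-subgroup interaction. A large p-value here does NOT
    establish consistency; the test is usually underpowered."""
    z = (est1 - est2) / sqrt(se1 ** 2 + se2 ** 2)
    # two-sided normal p-value via the error function
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p
```

Because the standard error of the difference combines both subgroup standard errors, detecting an interaction reliably requires a much larger sample than detecting the overall effect, which is the low-power problem the article describes.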


Journal of women's health and gender-based medicine | 2002

Replacement surgery and silicone gel breast implant rupture: self-report by women after mammoplasty.

S. Lori Brown; Gene Pennello

Background: This study examined the prevalence of revision surgery in which silicone gel breast implants were either removed (explanted) or replaced in a cohort of women from Birmingham, Alabama. The main reasons leading to surgery and the prevalence of ruptured implants reported after explantation are described.

Methods: Data were collected from telephone interviews with 907 women previously identified in a larger cohort study of women with breast implants. Women who reported breast surgeries subsequent to their index mammoplasty were asked to consent to retrieval of the surgical records describing the surgery.

Results: Surgery in which a silicone gel breast implant was removed or replaced was reported by 33% of the 907 women in this cohort. The most common reason for surgery was problems with the implant that affected the breast (103 of 303 surgeries). Of the 303 women reporting surgery, 145 (48%) reported knowing after a surgery that an implant was ruptured when it was removed, and 171 (56%) reported knowing that an implant was ruptured or leaking. Overall, 16% of the 907 women reported knowing that either of their implants was ruptured after any surgery. At least one surgical record was retrieved for 165 (54%) of the 303 women reporting surgery. Among these women, the rupture rate was 69 of 165 (42%) according to the surgical record and 85 of 165 (51.5%) according to self-reports, a statistically significant difference (p = 0.008 by McNemar's test). The mean time from implantation to surgery was 11.5 years among women reporting surgery and was estimated at 21.4 years for all women.

Conclusions: A third of the women in this cohort underwent additional surgery after the initial mammoplasty, and nearly half of those who underwent surgery reported that their implants were found to be ruptured when removed. Women considering silicone gel breast implants should be informed of the risk of additional surgeries and of the potential risk of breast implant rupture.

Collaboration


Dive into Gene Pennello's collaborations.

Top Co-Authors

Brandon D. Gallas (Center for Devices and Radiological Health)
S. Lori Brown (Center for Devices and Radiological Health)
Kyle J. Myers (Food and Drug Administration)
Norberto Pantoja-Galicia (Center for Devices and Radiological Health)
Rong Tang (Center for Devices and Radiological Health)
Wendie A. Berg (University of Pittsburgh)