Andrew McKeown
University of Rochester
Publications
Featured research published by Andrew McKeown.
Pain | 2014
Jennifer S. Gewandter; Michael P. McDermott; Andrew McKeown; Shannon M. Smith; Mark R. Williams; Matthew Hunsinger; John T. Farrar; Dennis C. Turk; Robert H. Dworkin
Summary: This article reports deficiencies in the reporting of missing data and of the methods used to accommodate them, reviews methods to accommodate missing data that are recommended by statisticians and regulators, and provides recommendations for authors, reviewers, and editors pertaining to the reporting of these important statistical details.
Abstract: Missing data in clinical trials can bias estimates of treatment effects. Statisticians and government agencies recommend making every effort to minimize missing data. Although statistical methods are available to accommodate missing data, their validity depends on often untestable assumptions about why the data are missing. The objective of this study was to assess the frequency with which randomized clinical trials published in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) reported strategies to prevent missing data, the number of participants who completed the study (ie, completers), and statistical methods to accommodate missing data. A total of 161 randomized clinical trials investigating treatments for pain, published between 2006 and 2012, were included. Approximately two-thirds of the trials reported at least 1 method that could potentially minimize missing data, the most common being allowance of concomitant medications. Only 61% of the articles explicitly reported the number of patients who were randomized and completed the trial. Although only 14 articles reported that all randomized participants completed the study, fewer than 50% of the articles reported a statistical method to accommodate missing data. Last observation carried forward imputation was used most commonly (42%). Thirteen articles reported more than 1 method to accommodate missing data; however, the majority of methods, including last observation carried forward, are not currently recommended by statisticians. Authors, reviewers, and editors should prioritize proper reporting of missing data and appropriate use of methods to accommodate them so as to address the deficiencies identified in this systematic review.
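The review identifies last observation carried forward (LOCF) as the most commonly used imputation method, even though it is not among the approaches statisticians currently recommend. The sketch below is purely illustrative and not drawn from the article: it shows how LOCF carries a participant's last observed pain score forward over missed visits, with hypothetical column names and values.

```python
# Minimal illustrative sketch of LOCF imputation (hypothetical data, not from the article).
import numpy as np
import pandas as pd

scores = pd.DataFrame(
    {
        "participant": [1, 1, 1, 2, 2, 2],
        "week": [1, 2, 3, 1, 2, 3],
        "pain_score": [7.0, 5.0, np.nan, 6.0, np.nan, np.nan],  # NaN = missed visit
    }
)

# LOCF: within each participant, carry the last observed score forward in time.
scores = scores.sort_values(["participant", "week"])
scores["pain_score_locf"] = scores.groupby("participant")["pain_score"].ffill()

print(scores)
```

Note that LOCF simply freezes the last observed value, which is one reason it can bias treatment-effect estimates when scores would otherwise have changed over time.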
Pain | 2013
Shannon M. Smith; Anthony Wang; Anthony Pereira; R. Daniel Chang; Andrew McKeown; Kaitlin Greene; Michael C. Rowbotham; Laurie B. Burke; Paul Coplan; Ian Gilron; Sharon Hertz; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Elektra J. Papadopoulos; Bob A. Rappaport; Michael O. Sweeney; Dennis C. Turk; Robert H. Dworkin
Summary: Widespread discrepancies between registered and published primary outcomes raise questions about whether published primary outcomes are prespecified. Recommendations are proposed to ensure the veracity of published primary outcome specifications.
Abstract: The National Institutes of Health released the trial registry ClinicalTrials.gov in 2000 to increase public reporting and clinical trial transparency. This systematic review examined whether registered primary outcome specifications (POS; ie, definitions, timing, and analytic plans) in analgesic treatment trials correspond with published POS. Trials with accompanying publications (n = 87) were selected from the Repository of Registered Analgesic Clinical Trials (RReACT) database of all postherpetic neuralgia, diabetic peripheral neuropathy, and fibromyalgia clinical trials registered at ClinicalTrials.gov as of December 1, 2011. Registered and published POS never matched precisely; discrepancies occurred in 79% of the registry-publication pairs, and the remaining 21% failed to register or publish primary outcomes (PO). These percentages did not differ significantly between industry- and non-industry-sponsored trials. Thirty percent of the trials contained unambiguous POS discrepancies (eg, omitting a registered PO from the publication, "demoting" a registered PO to a published secondary outcome), with a statistically significantly higher percentage of non-industry-sponsored than industry-sponsored trials containing unambiguous POS discrepancies. POS discrepancies due to ambiguous reporting included vaguely worded PO registration, or failure to report the timing of PO assessment, the statistical analysis used for the PO, or the method used to address missing PO data. At best, POS discrepancies may be attributable to insufficient registry requirements, carelessness (eg, failing to report PO assessment timing), or difficulty uploading registry information. At worst, discrepancies could indicate investigator impropriety (eg, registering an imprecise PO ["pain"], then publishing whichever pain assessment produced statistically significant results). Improvements in PO registration, as well as journal policies requiring consistency between registered and published PO descriptions, are needed.
Pain | 2014
Jennifer S. Gewandter; Shannon M. Smith; Andrew McKeown; Laurie B. Burke; Sharon Hertz; Matthew Hunsinger; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Mark R. Williams; Dennis C. Turk; Robert H. Dworkin
Summary: Deficiencies in the reporting of primary analyses and multiplicity adjustment methods are summarized, and recommendations are provided for authors, reviewers, and editors pertaining to the reporting of these important statistical details.
Abstract: Performing multiple analyses in clinical trials can inflate the probability of a type I error, or the chance of falsely concluding a significant effect of the treatment. Strategies to minimize type I error probability include prespecification of primary analyses and statistical adjustment for multiple comparisons, when applicable. The objective of this study was to assess the quality of primary analysis reporting and the frequency of multiplicity adjustment in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and PAIN®). A total of 161 randomized controlled trials investigating noninvasive pharmacological treatments or interventional treatments for pain, published between 2006 and 2012, were included. Only 52% of trials identified a primary analysis, and only 10% of trials reported prespecification of that analysis. Among the 33 articles that identified a primary analysis with multiple testing, 15 (45%) adjusted for multiplicity; of those 15, only 2 (13%) reported prespecification of the adjustment methodology. Trials in clinical pain conditions and industry-sponsored trials identified a primary analysis more often than trials in experimental pain models and non-industry-sponsored trials, respectively. The results of this systematic review demonstrate deficiencies in the reporting, and possibly the execution, of primary analyses in published analgesic trials. These deficiencies can be rectified by changes in, or better enforcement of, journal policies pertaining to requirements for the reporting of analyses of clinical trial data.
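For readers unfamiliar with multiplicity adjustment, the sketch below illustrates one common approach, the Bonferroni correction, applied to hypothetical p-values. It is an illustrative example only; the article does not attribute this particular method to any reviewed trial.

```python
# Illustrative Bonferroni adjustment for multiple primary analyses.
# The p-values below are hypothetical, not drawn from any reviewed trial.
p_values = [0.012, 0.030, 0.049]   # unadjusted p-values from 3 primary analyses
alpha = 0.05                       # desired familywise type I error rate

m = len(p_values)
adjusted = [min(p * m, 1.0) for p in p_values]  # Bonferroni-adjusted p-values

for p, p_adj in zip(p_values, adjusted):
    print(f"raw p = {p:.3f} -> adjusted p = {p_adj:.3f}, "
          f"significant at alpha={alpha}: {p_adj < alpha}")
```

With three comparisons, a raw p-value of 0.030 is no longer significant after adjustment, which is exactly the type I error inflation the adjustment is meant to control.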
The Journal of Pain | 2015
Shannon M. Smith; Matthew Hunsinger; Andrew McKeown; Melissa Parkhurst; Robert R. Allen; Stephen Kopko; Yun Lu; Hilary D. Wilson; Laurie B. Burke; Paul J. Desjardins; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin
Pain intensity assessments are used widely in human pain research, and their transparent reporting is crucial to interpreting study results. In this systematic review, we examined reporting of human pain intensity assessments and related elements (eg, administration frequency, time period assessed, type of pain) in all empirical pain studies with adult participants in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) between January 2011 and July 2012. Of the 262 articles identified, close to one-quarter (24%) ambiguously reported the pain intensity assessment. Elements related to the pain intensity assessment were frequently not reported: 31% did not identify the time period participants were asked to rate, 43% failed to report the type of pain intensity rated, and 58% did not report the specific location or pain condition rated. No differences in reporting quality were observed among randomized clinical trials, experimental studies (eg, studies involving experimental manipulation without random group assignment and blinding), and observational studies. The ability to understand study results, and to compare results between studies, is compromised when pain intensity assessments are not fully reported. Recommendations are presented regarding key details for investigators to consider when conducting and reporting pain intensity assessments in human adults.
Perspective: This systematic review demonstrates that publications of pain research often incompletely report pain intensity assessments and their details (eg, administration frequency, type of pain). Failure to fully report details of pain intensity assessments creates ambiguity in interpreting research results. Recommendations are proposed to increase transparent reporting.
The Journal of Pain | 2015
Andrew McKeown; Jennifer S. Gewandter; Michael P. McDermott; Joseph R. Pawlowski; Joseph J. Poli; Daniel Rothstein; John T. Farrar; Ian Gilron; Nathaniel P. Katz; Allison H. Lin; Bob A. Rappaport; Michael C. Rowbotham; Dennis C. Turk; Robert H. Dworkin; Shannon M. Smith
Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation, but only 38% provided all elements required to replicate the calculated sample size. Of the publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported numbers of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, as this is necessary for reporting transparency and communication of pretrial design decisions.
Perspective: In this systematic review of analgesic clinical trials, sample size calculations and their required elements (eg, treatment effect to be detected, power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size.
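As background to the elements a sample size calculation requires (the treatment effect to be detected, the assumed variability, the type I error rate, and the target power), the sketch below shows a standard normal-approximation calculation for a two-group comparison of a continuous outcome. All numeric inputs are hypothetical placeholders, not values from any reviewed trial.

```python
# Illustrative two-group sample size calculation for a continuous outcome,
# using the normal-approximation formula:
#   n per group = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2
# All inputs below are hypothetical placeholders.
import math
from scipy.stats import norm

alpha = 0.05   # two-sided type I error rate
power = 0.80   # desired power (1 - type II error rate)
delta = 1.0    # treatment effect to be detected (eg, points on a 0-10 pain scale)
sd = 2.5       # assumed standard deviation of the outcome

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

n_per_group = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
print(f"Approximately {math.ceil(n_per_group)} participants per group "
      f"(before allowing for dropout)")
```

Reporting each of these inputs, and the justification for the chosen effect size and variability estimate, is what allows a reader to replicate the calculated sample size.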
Pain | 2014
Matthew Hunsinger; Shannon M. Smith; Daniel Rothstein; Andrew McKeown; Melissa Parkhurst; Sharon Hertz; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin
Summary: The results of this systematic review suggest that adverse event reporting in trials examining nonpharmacologic, noninterventional pain treatments needs to improve.
Abstract: Assessment of treatment safety is one of the primary goals of clinical trials. Organizations and working groups have created reporting guidelines for adverse events (AEs). Previous research examining AE reporting for pharmacologic clinical trials of analgesics in major pain journals found many reporting inadequacies, suggesting that analgesic trials are not adhering to existing AE reporting guidelines. The present systematic review documented AE reporting in 3 major pain journals for nonpharmacologic, noninterventional (NP/NI) trials examining pain treatments. To broaden our pool of nonpharmacologic trials, we also included trials examining acupuncture, leech therapy, and noninvasive stimulation techniques (eg, transcutaneous electrical nerve stimulation). We documented AE reporting at 2 levels of specificity using coding manuals based on the Consolidated Standards of Reporting Trials (CONSORT) harms reporting standards and the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) AE reporting checklist. We identified a number of inadequacies in AE reporting across the 3 journals. For example, using the ACTTION coding manual, we found that less than one-half of the trials reported specific AE assessment methods; approximately one-third of the trials reported withdrawals due to AEs for each study arm; and about one-fourth of the trials reported all specific AEs. We also examined differences in AE reporting across several trial characteristics, finding that AE reporting was generally more detailed in trials with patients than in trials with healthy volunteers undergoing experimentally evoked pain. These results suggest that investigators conducting and reporting NP/NI clinical trials are not adequately describing the assessment and occurrence of AEs.
Journal of Clinical Epidemiology | 2016
Jordan D. Dworkin; Andrew McKeown; John T. Farrar; Ian Gilron; Matthew Hunsinger; Robert D. Kerns; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin; Jennifer S. Gewandter
Objective: The goal of this study was to assess the quality of reporting of statistical methods in randomized clinical trials (RCTs), including identification of primary analyses, missing data accommodation, and multiplicity adjustment, in studies of nonpharmacologic, noninterventional pain treatments (eg, physical therapy, cognitive behavioral therapy, acupuncture, and massage).
Study design and setting: Systematic review of 101 articles reporting RCTs of pain treatments that were published between January 2006 and June 2013 in the European Journal of Pain, the Journal of Pain, and Pain.
Results: Sixty-two percent of studies identified a primary outcome variable, 46% identified a primary analysis, and of those with multiple primary analyses, only 21% adjusted for multiplicity. Slightly over half (55%) of the studies reported using at least one method to accommodate missing data. Only four studies reported prespecifying at least one of these four methods.
Conclusion: This review identified deficiencies in the reporting of primary analyses and of methods to adjust for multiplicity and accommodate missing data in articles disseminating results of nonpharmacologic, noninterventional trials. Investigators should be encouraged to indicate whether their analyses were prespecified and to clearly and completely report statistical methods in clinical trial publications to maximize the interpretability of trial results.
Pain | 2014
Jennifer S. Gewandter; Michael P. McDermott; Andrew McKeown; Shannon M. Smith; Joseph R. Pawlowski; Joseph J. Poli; Daniel Rothstein; Mark R. Williams; Shay Bujanover; John T. Farrar; Ian Gilron; Nathaniel P. Katz; Michael C. Rowbotham; Dennis C. Turk; Robert H. Dworkin
Summary: "Intention-to-treat" is a term frequently used to describe analyses that in fact exclude randomized participants. Recommendations are provided to use the term "intention-to-treat" consistently, in order to facilitate interpretation of randomized controlled trials.
Abstract: The intention-to-treat (ITT) principle states that all subjects in a randomized clinical trial (RCT) should be analyzed in the group to which they were assigned, regardless of compliance with the assigned treatment. Analyses performed according to the ITT principle preserve the benefits of randomization and are recommended by regulators and statisticians for analyses of RCTs. The objective of this study was to determine the frequency with which publications of analgesic RCTs in 3 major pain journals report an ITT analysis, and the percentage of author-declared ITT analyses that include all randomized subjects and thereby fulfill the most common interpretation of the ITT principle. RCTs investigating noninvasive pharmacologic and interventional (eg, nerve blocks, implantable pumps, spinal cord stimulators, surgery) treatments for pain, published between January 2006 and June 2013 (n = 173), were included. None of the trials using experimental pain models reported an ITT analysis; 47% of trials investigating clinical pain conditions reported an ITT analysis, and 5% reported a modified ITT analysis. Of the analyses reported as ITT, 67% reported reasons for excluding subjects from the analysis, and 18% of those listing reasons for exclusion did not do so in the Methods section. Such mislabeling can make it difficult to identify traditional ITT analyses for inclusion in meta-analyses. We hope that the reporting deficiencies identified in this study will encourage authors, reviewers, and editors to promote more consistent use of the term "intention to treat" for more accurate reporting of RCT-based evidence for pain treatments.
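The sketch below gives a purely illustrative contrast between an ITT analysis set, which retains every randomized participant in the arm to which they were assigned, and a per-protocol set that excludes non-completers. The participant data are hypothetical and not taken from any reviewed trial.

```python
# Illustrative contrast of ITT vs per-protocol analysis sets (hypothetical data).
import pandas as pd

trial = pd.DataFrame(
    {
        "participant": [1, 2, 3, 4],
        "assigned_arm": ["drug", "drug", "placebo", "placebo"],
        "completed_treatment": [True, False, True, True],  # participant 2 dropped out
        "outcome": [3.0, 6.5, 5.0, 5.5],
    }
)

# ITT principle: analyze every randomized participant in the arm to which they
# were assigned, regardless of adherence or completion.
itt_means = trial.groupby("assigned_arm")["outcome"].mean()

# A per-protocol analysis excludes non-completers, so it analyzes fewer than all
# randomized participants and does not preserve the benefits of randomization.
per_protocol_means = (
    trial[trial["completed_treatment"]].groupby("assigned_arm")["outcome"].mean()
)

print("ITT arm means:\n", itt_means)
print("Per-protocol arm means:\n", per_protocol_means)
```

An analysis of the second kind labeled "intention-to-treat" is the sort of mislabeling the review describes.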
Anesthesia & Analgesia | 2016
Mark R. Williams; Andrew McKeown; Franklin Dexter; James R. Miner; Daniel I. Sessler; John J. Vargo; Dennis C. Turk; Robert H. Dworkin
Successful procedural sedation represents a spectrum of patient- and clinician-related goals. The absence of a gold-standard measure of the efficacy of procedural sedation has led to a variety of outcomes being used in clinical trials, with the consequent lack of consistency among measures, making comparisons among trials and meta-analyses challenging. We evaluated which existing measures have undergone psychometric analysis in a procedural sedation setting and whether the validity of any of these measures support their use across the range of procedures for which sedation is indicated. Numerous measures were found to have been used in clinical research on procedural sedation across a wide range of procedures. However, reliability and validity have been evaluated for only a limited number of sedation scales, observer-rated pain/discomfort scales, and satisfaction measures in only a few categories of procedures. Typically, studies only examined 1 or 2 aspects of scale validity. The results are likely unique to the specific clinical settings they were tested in. Certain scales, for example, those requiring motor stimulation, are unsuitable to evaluate sedation for procedures where movement is prohibited (e.g., magnetic resonance imaging scans). Further work is required to evaluate existing measures for procedures for which they were not developed. Depending on the outcomes of these efforts, it might ultimately be necessary to consider measures of sedation efficacy to be procedure specific.