Mark R. Williams
University of Rochester
Publications
Featured research published by Mark R. Williams.
Pain | 2014
Jennifer S. Gewandter; Michael P. McDermott; Andrew McKeown; Shannon M. Smith; Mark R. Williams; Matthew Hunsinger; John T. Farrar; Dennis C. Turk; Robert H. Dworkin
Summary This article reports deficiencies in reporting of missing data and methods to accommodate them, reviews methods to accommodate missing data that were recommended by statisticians and regulators, and provides recommendations for authors, reviewers, and editors pertaining to reporting of these important statistical details. ABSTRACT Missing data in clinical trials can bias estimates of treatment effects. Statisticians and government agencies recommend making every effort to minimize missing data. Although statistical methods are available to accommodate missing data, their validity depends on often untestable assumptions about why the data are missing. The objective of this study was to assess the frequency with which randomized clinical trials published in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) reported strategies to prevent missing data, the number of participants who completed the study (ie, completers), and statistical methods to accommodate missing data. A total of 161 randomized clinical trials investigating treatments for pain, published between 2006 and 2012, were included. Approximately two‐thirds of the trials reported at least 1 method that could potentially minimize missing data, the most common being allowance of concomitant medications. Only 61% of the articles explicitly reported the number of patients who were randomized and completed the trial. Although only 14 articles reported that all randomized participants completed the study, fewer than 50% of the articles reported a statistical method to accommodate missing data. Last observation carried forward imputation was used most commonly (42%). Thirteen articles reported more than 1 method to accommodate missing data; however, the majority of methods, including last observation carried forward, were not methods currently recommended by statisticians. Authors, reviewers, and editors should prioritize proper reporting of missing data and appropriate use of methods to accommodate them so as to improve the deficiencies identified in this systematic review.
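To make the abstract's most frequently reported method concrete, the minimal sketch below shows how last observation carried forward (LOCF) imputation fills a participant's missing follow-up scores with the last value observed. The pandas DataFrame, column names, and values are hypothetical illustrations, not data from any of the reviewed trials.

```python
# Minimal sketch of last observation carried forward (LOCF) imputation.
# The data are invented for illustration only.
import numpy as np
import pandas as pd

# Weekly pain scores (0-10) for three participants; NaN marks a missed
# visit or dropout from that point onward.
scores = pd.DataFrame(
    {
        "week1": [7.0, 6.0, 8.0],
        "week2": [6.0, np.nan, 7.0],
        "week3": [5.0, np.nan, np.nan],
    },
    index=["p1", "p2", "p3"],
)

# LOCF: fill each missing value with the last observed value in the same row.
locf = scores.ffill(axis=1)
print(locf)
# p2's week-2 and week-3 scores become 6.0; p3's week-3 score becomes 7.0.
# This assumes scores would have stayed constant after dropout, the kind of
# untestable assumption the review cautions against.
```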
Pain | 2014
Jennifer S. Gewandter; Shannon M. Smith; Andrew McKeown; Laurie B. Burke; Sharon Hertz; Matthew Hunsinger; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Mark R. Williams; Dennis C. Turk; Robert H. Dworkin
Summary Deficiencies in reporting of primary analyses and multiplicity adjustment methods are summarized, and recommendations are provided for authors, reviewers, and editors pertaining to reporting of these important statistical details. ABSTRACT Performing multiple analyses in clinical trials can inflate the probability of a type I error, or the chance of falsely concluding a significant effect of the treatment. Strategies to minimize type I error probability include prespecification of primary analyses and statistical adjustment for multiple comparisons, when applicable. The objective of this study was to assess the quality of primary analysis reporting and frequency of multiplicity adjustment in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and PAIN®). A total of 161 randomized controlled trials investigating noninvasive pharmacological treatments or interventional treatments for pain, published between 2006 and 2012, were included. Only 52% of trials identified a primary analysis, and only 10% of trials reported prespecification of that analysis. Among the 33 articles that identified a primary analysis with multiple testing, 15 (45%) adjusted for multiplicity; of those 15, only 2 (13%) reported prespecification of the adjustment methodology. Trials in clinical pain conditions and industry‐sponsored trials identified a primary analysis more often than trials in experimental pain models and non‐industry‐sponsored trials, respectively. The results of this systematic review demonstrate deficiencies in the reporting and possibly the execution of primary analyses in published analgesic trials. These deficiencies can be rectified by changes in, or better enforcement of, journal policies pertaining to requirements for the reporting of analyses of clinical trial data.
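As a concrete illustration of the multiplicity adjustment the abstract refers to, the sketch below applies standard Bonferroni and Holm corrections to a set of invented p-values using statsmodels; the numbers are hypothetical assumptions, not results from the reviewed trials.

```python
# Sketch of multiplicity adjustment for several endpoint comparisons.
# The p-values are invented; the adjustment routines are standard
# statsmodels functions, not code from the cited studies.
from statsmodels.stats.multitest import multipletests

# Unadjusted p-values from, say, four secondary endpoint comparisons.
raw_p = [0.012, 0.030, 0.049, 0.210]

for method in ("bonferroni", "holm"):
    reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in adj_p], list(reject))

# With Bonferroni, each p-value is multiplied by the number of tests
# (capped at 1), so only the 0.012 comparison stays below 0.05, whereas
# unadjusted testing would have declared three of the four significant.
```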
Pain | 2016
Shannon M. Smith; Dagmar Amtmann; Robert L. Askew; Jennifer S. Gewandter; Matthew Hunsinger; Mark P. Jensen; Michael P. McDermott; Kushang V. Patel; Mark R. Williams; Bacci ED; Burke LB; Chambers CT; Stephen A. Cooper; Penny Cowan; Paul J. Desjardins; Mila Etropolski; John T. Farrar; Ian Gilron; Huang IZ; Katz M; Robert D. Kerns; Ernest A. Kopecky; Bob A. Rappaport; Malca Resnick; Geertrui F. Vanhove; Veasley C; Mark Versavel; Ajay D. Wasan; Dennis C. Turk; Robert H. Dworkin
Abstract Clinical trial participants often require additional instruction to prevent idiosyncratic interpretations regarding completion of patient-reported outcomes. The Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public–private partnership developed a training system with specific, standardized guidance regarding daily average pain intensity ratings. A 3-week exploratory study among participants with low-back pain, osteoarthritis of the knee or hip, and painful diabetic peripheral neuropathy was conducted, randomly assigning participants to 1 of 3 groups: training with human pain assessment (T+); training with automated pain assessment (T); or no training with automated pain assessment (C). Although most measures of validity and reliability did not reveal significant differences between groups, some benefit was observed in discriminant validity, amount of missing data, and ranking order of least, worst, and average pain intensity ratings for participants in Group T+ compared with the other groups. Prediction of greater reliability in average pain intensity ratings in Group T+ compared with the other groups was not supported, which might indicate that training produces ratings that reflect the reality of temporal pain fluctuations. Results of this novel study suggest the need to test the training system in a prospective analgesic treatment trial.
Pain | 2014
Jeffrey H. Zimering; Mark R. Williams; Maria E. Eiras; Brian A. Fallon; Eric L. Logigian; Robert H. Dworkin
http://dx.doi.org/10.1016/j.pain.2014.04.024
Anesthesia & Analgesia | 2017
Mark R. Williams; Denham S. Ward; Douglas W. Carlson; Joseph P. Cravero; Franklin Dexter; Jenifer R. Lightdale; Keira P. Mason; James R. Miner; John J. Vargo; John W. Berkenbosch; Randall M. Clark; Isabelle Constant; Raymond A. Dionne; Robert H. Dworkin; David Gozal; David Grayzel; Michael G. Irwin; Jerrold Lerman; Robert E. O’Connor; Pratik P. Pandharipande; Bob A. Rappaport; Richard R. Riker; Joseph R. Tobin; Dennis C. Turk; Rebecca S. Twersky; Daniel I. Sessler
The Sedation Consortium on Endpoints and Procedures for Treatment, Education, and Research, established by the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks public–private partnership with the US Food and Drug Administration, convened a meeting of sedation experts from a variety of clinical specialties and research backgrounds with the objective of developing recommendations for procedural sedation research. Four core outcome domains were recommended for consideration in sedation clinical trials: (1) safety, (2) efficacy, (3) patient-centered and/or family-centered outcomes, and (4) efficiency. This meeting identified core outcome measures within the efficacy and patient-centered and/or family-centered domains. Safety will be addressed in a subsequent meeting, and efficiency will not be addressed at this time. These measures encompass depth and levels of sedation, proceduralist and patient satisfaction, patient recall, and degree of pain experienced. Consistent use of the recommended outcome measures will facilitate comprehensive reporting across sedation trials, along with meaningful comparisons among studies and interventions in systematic reviews and meta-analyses.
Pain | 2014
Jennifer S. Gewandter; Michael P. McDermott; Andrew McKeown; Shannon M. Smith; Joseph R. Pawlowski; Joseph J. Poli; Daniel Rothstein; Mark R. Williams; Shay Bujanover; John T. Farrar; Ian Gilron; Nathaniel P. Katz; Michael C. Rowbotham; Dennis C. Turk; Robert H. Dworkin
Summary “Intention‐to‐treat” is a term frequently used to describe analyses that exclude randomized participants. Recommendations to use the term “intention‐to‐treat” consistently to facilitate interpretation of randomized controlled trials are provided. ABSTRACT The intention‐to‐treat (ITT) principle states that all subjects in a randomized clinical trial (RCT) should be analyzed in the group to which they were assigned, regardless of compliance with assigned treatment. Analyses performed according to the ITT principle preserve the benefits of randomization and are recommended by regulators and statisticians for analyses of RCTs. The objective of this study was to determine the frequency with which publications of analgesic RCTs in 3 major pain journals report an ITT analysis and the percentage of the author‐declared ITT analyses that include all randomized subjects and thereby fulfill the most common interpretation of the ITT principle. RCTs investigating noninvasive, pharmacologic and interventional (eg, nerve blocks, implantable pumps, spinal cord stimulators, surgery) treatments for pain, published between January 2006 and June 2013 (n = 173), were included. None of the trials using experimental pain models reported an ITT analysis; 47% of trials investigating clinical pain conditions reported an ITT analysis, and 5% reported a modified ITT analysis. Of the analyses reported as ITT, 67% reported reasons for excluding subjects from the analysis, and 18% of those listing reasons for exclusion did not do so in the Methods section. Such mislabeling can make it difficult to identify traditional ITT analyses for inclusion in meta‐analyses. We hope that deficiencies in reporting identified in this study will encourage authors, reviewers, and editors to promote more consistent use of the term “intention to treat” for more accurate reporting of RCT‐based evidence for pain treatments.
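For readers unfamiliar with the distinction the abstract draws, the following sketch contrasts an intention-to-treat analysis with a completers-only (per-protocol) analysis on hypothetical trial data; the column names and values are illustrative assumptions, not data from any reviewed trial.

```python
# Sketch of the intention-to-treat (ITT) principle on invented trial data:
# every randomized participant is analyzed in the arm they were assigned to,
# regardless of whether they completed the assigned treatment.
import pandas as pd

trial = pd.DataFrame(
    {
        "assigned_arm": ["drug", "drug", "drug", "placebo", "placebo", "placebo"],
        "completed": [True, True, False, True, False, True],
        "pain_reduction": [3.0, 2.5, 0.0, 1.0, 0.5, 1.5],
    },
    index=["p1", "p2", "p3", "p4", "p5", "p6"],
)

# ITT analysis: group means over all randomized participants.
itt_means = trial.groupby("assigned_arm")["pain_reduction"].mean()

# Per-protocol analysis, shown only for contrast: completers only, which
# discards randomized participants and can bias the comparison.
pp_means = trial[trial["completed"]].groupby("assigned_arm")["pain_reduction"].mean()

print(itt_means)
print(pp_means)
```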
Anesthesia & Analgesia | 2016
Mark R. Williams; Andrew McKeown; Franklin Dexter; James R. Miner; Daniel I. Sessler; John J. Vargo; Dennis C. Turk; Robert H. Dworkin
Successful procedural sedation represents a spectrum of patient- and clinician-related goals. The absence of a gold-standard measure of the efficacy of procedural sedation has led to a variety of outcomes being used in clinical trials, with the consequent lack of consistency among measures, making comparisons among trials and meta-analyses challenging. We evaluated which existing measures have undergone psychometric analysis in a procedural sedation setting and whether the validity of any of these measures support their use across the range of procedures for which sedation is indicated. Numerous measures were found to have been used in clinical research on procedural sedation across a wide range of procedures. However, reliability and validity have been evaluated for only a limited number of sedation scales, observer-rated pain/discomfort scales, and satisfaction measures in only a few categories of procedures. Typically, studies only examined 1 or 2 aspects of scale validity. The results are likely unique to the specific clinical settings they were tested in. Certain scales, for example, those requiring motor stimulation, are unsuitable to evaluate sedation for procedures where movement is prohibited (e.g., magnetic resonance imaging scans). Further work is required to evaluate existing measures for procedures for which they were not developed. Depending on the outcomes of these efforts, it might ultimately be necessary to consider measures of sedation efficacy to be procedure specific.
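One of the psychometric properties discussed above, inter-rater reliability of an observer-rated sedation scale, can be illustrated with a brief sketch. The ratings below are invented, and weighted kappa is offered only as a generic example of such an evaluation, not as the method used in the cited studies.

```python
# Illustrative check of inter-rater reliability for an observer-rated
# sedation scale. Ratings are hypothetical; the kappa computation is a
# standard scikit-learn routine.
from sklearn.metrics import cohen_kappa_score

# Two observers score the same 10 patients on a 0-5 sedation scale.
rater_a = [0, 1, 2, 2, 3, 3, 4, 4, 5, 5]
rater_b = [0, 1, 2, 3, 3, 3, 4, 5, 5, 5]

# Linearly weighted kappa gives partial credit for near-misses on an
# ordinal scale.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa = {kappa:.2f}")
```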
Anesthesia & Analgesia | 2017
Mark R. Williams; Michael Nayshtut; Amie Hoefnagel; Andrew McKeown; Douglas W. Carlson; Joseph P. Cravero; Jenifer R. Lightdale; Keira P. Mason; Stephen Wilson; Dennis C. Turk; Robert H. Dworkin; Denham S. Ward
Objective evaluations comparing different techniques and approaches to pediatric procedural sedation studies have been limited by a lack of consistency among the outcome measures used in assessment. This study reviewed those existing measures, which have undergone psychometric analysis in a pediatric procedural sedation setting, to determine to what extent and in what circumstances their use is justified across the spectrum of procedures, age groups, and techniques. The results of our study suggest that a wide range of measures has been used to assess the efficacy and effectiveness of pediatric procedural sedation. Most lack the evidence of validity and reliability that is necessary to facilitate rigorous clinical trial design, as well as the evaluation of new drugs and devices. A set of core pediatric sedation outcome domains and outcome measures can be developed on the basis of our findings. We believe that consensus among all stakeholders regarding appropriate domains and measures to evaluate pediatric procedural sedation is possible and that widespread implementation of such recommendations should be pursued.
The Journal of Pain | 2015
Jennifer S. Gewandter; Andrew McKeown; Michael P. McDermott; Jordan D. Dworkin; Shannon M. Smith; Robert A. Gross; Matthew Hunsinger; Allison H. Lin; Bob A. Rappaport; Andrew S.C. Rice; Michael C. Rowbotham; Mark R. Williams; Dennis C. Turk; Robert H. Dworkin
The Journal of Pain | 2014
Jennifer S. Gewandter; Shannon M. Smith; Andrew McKeown; Laurie B. Burke; Sharon Hertz; Matthew Hunsinger; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Mark R. Williams; Dennis C. Turk; Robert H. Dworkin