Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Allison H. Lin is active.

Publication


Featured research published by Allison H. Lin.


Pain | 2012

Adherence to CONSORT harms-reporting recommendations in publications of recent analgesic clinical trials: an ACTTION systematic review.

Shannon M. Smith; R. Daniel Chang; Anthony Pereira; Nirupa Shah; Ian Gilron; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Michael C. Rowbotham; Cristina Sampaio; Dennis C. Turk; Robert H. Dworkin

Summary: Harms reporting in analgesic trials of pharmacologic treatments has improved modestly since the 2004 CONSORT extension, although improvement may be related to other study factors.

Abstract: Recommendations for harms (ie, adverse events) reporting in randomized clinical trial publications were presented in a 2004 extension of the Consolidated Standards of Reporting Trials (CONSORT) statement. Our objectives were to assess harms reporting in 3 major pain journals (European Journal of Pain, Journal of Pain, and PAIN®) to determine whether harms reporting improved following publication of the 2004 CONSORT recommendations, and to examine study factors associated with adequacy of harms reporting. A total of 101 randomized, double-blind, noninvasive pharmacologic trials were identified in the 2000–2003 (epoch 1) and 2008–2011 (epoch 2) issues of these journals. Out of 10 reporting recommendations, the mean number fulfilled was 6.08 (SD 2.65). Although more harms recommendations were fulfilled in epoch 2 (m2 = 6.49, SD 2.66) than in epoch 1 (m1 = 5.39, SD 2.52; P = 0.04), only the recommendation to report harms per arm was satisfied by >90% of trials in epoch 2, whereas <60% reported withdrawals due to harms. Several trial characteristics (study design, participant type, pain type, frequency of treatment administration, treatment administration method, sponsor, and number of randomized participants) were significantly associated with harms reporting. However, when trial characteristics and epoch were entered into a multiple regression analysis, only trials studying pain patients, those using oral treatments, and industry-sponsored trials were associated with better harms reporting. Despite some improvement in harms reporting, greater improvement is needed to provide informative, consistent reporting of adverse events and safety in analgesic clinical trials.


Pain | 2013

Adverse event assessment, analysis, and reporting in recent published analgesic clinical trials: ACTTION systematic review and recommendations

Shannon M. Smith; Anthony Wang; Nathaniel P. Katz; Michael P. McDermott; Laurie B. Burke; Paul Coplan; Ian Gilron; Sharon Hertz; Allison H. Lin; Bob A. Rappaport; Michael C. Rowbotham; Cristina Sampaio; Michael O. Sweeney; Dennis C. Turk; Robert H. Dworkin

Summary: Adverse events in randomized controlled trials of noninvasive, pharmacologic analgesics are frequently incompletely or inconsistently reported. A comprehensive reporting checklist is proposed to improve disclosure of adverse effects.

Abstract: The development of valid and informative treatment risk–benefit profiles requires consistent and thorough information about adverse event (AE) assessment and participants' AEs during randomized controlled trials (RCTs). Despite a 2004 extension of the Consolidated Standards of Reporting Trials (CONSORT) statement recommending the specific AE information that investigators should report, there is little evidence that analgesic RCTs adequately adhere to these recommendations. This systematic review builds on prior recommendations by describing a comprehensive checklist for AE reporting developed to capture clinically important AE information. Using this checklist, we coded AE assessment methods and reporting in all 80 double-blind RCTs of noninvasive pharmacologic treatments published in the European Journal of Pain, Journal of Pain, and PAIN® from 2006 to 2011. Across all trials, reports of AEs were frequently incomplete, inconsistent across trials, and, in some cases, missing. For example, >40% of trials failed to report any information on serious adverse events. Trials of participants with acute or chronic pain conditions and industry-sponsored trials typically provided more and better-quality AE data than trials involving pain-free volunteers or trials that were not industry sponsored. The results of this review suggest that improved AE reporting is needed in analgesic RCTs. We developed an ACTTION (Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks) AE reporting checklist that is intended to assist investigators in thoroughly and consistently capturing and reporting these critically important data in publications.


Pain | 2013

Discrepancies between registered and published primary outcome specifications in analgesic trials: ACTTION systematic review and recommendations.

Shannon M. Smith; Anthony Wang; Anthony Pereira; R. Daniel Chang; Andrew McKeown; Kaitlin Greene; Michael C. Rowbotham; Laurie B. Burke; Paul Coplan; Ian Gilron; Sharon Hertz; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Elektra J. Papadopoulos; Bob A. Rappaport; Michael O. Sweeney; Dennis C. Turk; Robert H. Dworkin

Summary: Widespread discrepancies between registered vs published primary outcomes raise questions about whether published primary outcomes are prespecified. Recommendations are proposed to ensure the veracity of published primary outcome specifications.

Abstract: The National Institutes of Health released the trial registry ClinicalTrials.gov in 2000 to increase public reporting and clinical trial transparency. This systematic review examined whether registered primary outcome specifications (POS; ie, definitions, timing, and analytic plans) in analgesic treatment trials correspond with published POS. Trials with accompanying publications (n = 87) were selected from the Repository of Registered Analgesic Clinical Trials (RReACT) database of all postherpetic neuralgia, diabetic peripheral neuropathy, and fibromyalgia clinical trials registered at ClinicalTrials.gov as of December 1, 2011. POS never matched precisely; discrepancies occurred in 79% of the registry–publication pairs (21% failed to register or publish primary outcomes [PO]). These percentages did not differ significantly between industry and non-industry-sponsored trials. Thirty percent of the trials contained unambiguous POS discrepancies (eg, omitting a registered PO from the publication, "demoting" a registered PO to a published secondary outcome), with a statistically significantly higher percentage of non-industry-sponsored than industry-sponsored trials containing unambiguous POS discrepancies. POS discrepancies due to ambiguous reporting included vaguely worded PO registration, or failing to report the timing of PO assessment, the statistical analysis used for the PO, or the method to address missing PO data. At best, POS discrepancies may be attributable to insufficient registry requirements, carelessness (eg, failing to report PO assessment timing), or difficulty uploading registry information. At worst, discrepancies could indicate investigator impropriety (eg, registering an imprecise PO ["pain"], then publishing whichever pain assessment produced statistically significant results). Improvements in PO registration, as well as journal policies requiring consistency between registered and published PO descriptions, are needed.


Neurology | 2013

Assay sensitivity and study features in neuropathic pain trials: An ACTTION meta-analysis

Robert H. Dworkin; Dennis C. Turk; Sarah Peirce-Sandner; Hua He; Michael P. McDermott; John T. Farrar; Nathaniel P. Katz; Allison H. Lin; Bob A. Rappaport; Michael C. Rowbotham

Objective: Our objective was to identify patient, study, and site factors associated with assay sensitivity in placebo-controlled neuropathic pain trials. Methods: We examined the associations between study characteristics and standardized effect size (SES) in a database of 200 publicly available randomized clinical trials of pharmacologic treatments for neuropathic pain. Results: There was considerable heterogeneity in the SESs among the examined trials. Univariate meta-regression analyses indicated that larger SESs were significantly associated with trials that had 1) greater minimum baseline pain inclusion criteria, 2) greater mean subject age, 3) a larger percentage of Caucasian subjects, and 4) a smaller total number of subjects. In a multiple meta-regression analysis, the associations between SES and minimum baseline pain inclusion criterion and age remained significant. Conclusions: Our analyses have examined potentially modifiable correlates of study SES and shown that a minimum pain inclusion criterion of 40 or above on a 0 to 100 scale is associated with a larger SES. These data provide a foundation for investigating strategies to improve assay sensitivity and thereby decrease the likelihood of falsely negative outcomes in clinical trials of efficacious treatments for neuropathic pain.
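The standardized effect size (SES) that this meta-analysis regresses on trial features is, in its common form, the mean treatment–placebo difference divided by a pooled standard deviation (Cohen's d). As a minimal sketch of that calculation, here is a small Python function; the function name and all numbers are illustrative assumptions, not data from the trials reviewed.

```python
# Sketch of a standardized effect size (Cohen's d) for one
# placebo-controlled trial: mean difference / pooled SD.
# All inputs below are made-up illustrative values.
import math

def standardized_effect_size(mean_treat, mean_placebo, sd_treat, sd_placebo,
                             n_treat, n_placebo):
    """Mean difference divided by the pooled standard deviation."""
    pooled_var = (((n_treat - 1) * sd_treat ** 2 +
                   (n_placebo - 1) * sd_placebo ** 2) /
                  (n_treat + n_placebo - 2))
    return (mean_placebo - mean_treat) / math.sqrt(pooled_var)

# Hypothetical trial: endpoint pain scores on a 0-100 scale.
d = standardized_effect_size(mean_treat=38.0, mean_placebo=50.0,
                             sd_treat=20.0, sd_placebo=22.0,
                             n_treat=60, n_placebo=60)
print(round(d, 2))  # -> 0.57, a moderate effect
```

A larger SES under this definition means either a bigger mean difference or less outcome variability, which is why trial features that reduce noise (eg, a higher minimum baseline pain criterion) can raise assay sensitivity.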


Pain | 2014

Reporting of primary analyses and multiplicity adjustment in recent analgesic clinical trials: ACTTION systematic review and recommendations

Jennifer S. Gewandter; Shannon M. Smith; Andrew McKeown; Laurie B. Burke; Sharon Hertz; Matthew Hunsinger; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Mark R. Williams; Dennis C. Turk; Robert H. Dworkin

Summary: Deficiencies in reporting of primary analyses and multiplicity adjustment methods are summarized, and recommendations are provided for authors, reviewers, and editors pertaining to reporting of these important statistical details.

Abstract: Performing multiple analyses in clinical trials can inflate the probability of a type I error, or the chance of falsely concluding a significant effect of the treatment. Strategies to minimize type I error probability include prespecification of primary analyses and statistical adjustment for multiple comparisons, when applicable. The objective of this study was to assess the quality of primary analysis reporting and the frequency of multiplicity adjustment in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and PAIN®). A total of 161 randomized controlled trials investigating noninvasive pharmacological treatments or interventional treatments for pain, published between 2006 and 2012, were included. Only 52% of trials identified a primary analysis, and only 10% of trials reported prespecification of that analysis. Among the 33 articles that identified a primary analysis with multiple testing, 15 (45%) adjusted for multiplicity; of those 15, only 2 (13%) reported prespecification of the adjustment methodology. Trials in clinical pain conditions and industry-sponsored trials identified a primary analysis more often than trials in experimental pain models and non-industry-sponsored trials, respectively. The results of this systematic review demonstrate deficiencies in the reporting and possibly the execution of primary analyses in published analgesic trials. These deficiencies can be rectified by changes in, or better enforcement of, journal policies pertaining to requirements for the reporting of analyses of clinical trial data.
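The type I error inflation this review is concerned with follows directly from running several unadjusted tests. As a minimal illustration (not code from the review), the familywise error rate for k independent tests at level alpha is 1 − (1 − alpha)^k, and the Bonferroni adjustment tests each comparison at alpha/k:

```python
# Sketch: familywise type I error across k independent tests, and
# the effect of a Bonferroni adjustment. Illustrative only.
def familywise_error(alpha, k):
    """Probability of at least one false positive among k independent
    tests, each performed at significance level alpha."""
    return 1 - (1 - alpha) ** k

# Five unadjusted comparisons at alpha = 0.05: roughly a 23% chance
# of a spurious "significant" result somewhere.
print(round(familywise_error(0.05, 5), 3))      # -> 0.226

# Bonferroni: run each test at alpha / k to keep the familywise
# rate at or below the nominal alpha.
print(round(familywise_error(0.05 / 5, 5), 3))  # -> 0.049
```

This is why prespecifying a single primary analysis, or adjusting when several are performed, matters: without either, the nominal 5% error rate understates the real chance of a false-positive efficacy claim.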


The Journal of Pain | 2015

Reporting of Sample Size Calculations in Analgesic Clinical Trials: ACTTION Systematic Review

Andrew McKeown; Jennifer S. Gewandter; Michael P. McDermott; Joseph R. Pawlowski; Joseph J. Poli; Daniel Rothstein; John T. Farrar; Ian Gilron; Nathaniel P. Katz; Allison H. Lin; Bob A. Rappaport; Michael C. Rowbotham; Dennis C. Turk; Robert H. Dworkin; Shannon M. Smith

Sample size calculations determine the number of participants required to have sufficiently high power to detect a given treatment effect. In this review, we examined the reporting quality of sample size calculations in 172 publications of double-blind randomized controlled trials of noninvasive pharmacologic or interventional (ie, invasive) pain treatments published in European Journal of Pain, Journal of Pain, and Pain from January 2006 through June 2013. Sixty-five percent of publications reported a sample size calculation but only 38% provided all elements required to replicate the calculated sample size. In publications reporting at least 1 element, 54% provided a justification for the treatment effect used to calculate sample size, and 24% of studies with continuous outcome variables justified the variability estimate. Publications of clinical pain condition trials reported a sample size calculation more frequently than experimental pain model trials (77% vs 33%, P < .001) but did not differ in the frequency of reporting all required elements. No significant differences in reporting of any or all elements were detected between publications of trials with industry and nonindustry sponsorship. Twenty-eight percent included a discrepancy between the reported number of planned and randomized participants. This study suggests that sample size calculation reporting in analgesic trial publications is usually incomplete. Investigators should provide detailed accounts of sample size calculations in publications of clinical trials of pain treatments, which is necessary for reporting transparency and communication of pre-trial design decisions.

Perspective: In this systematic review of analgesic clinical trials, sample size calculations and the required elements (eg, treatment effect to be detected; power level) were incompletely reported. A lack of transparency regarding sample size calculations may raise questions about the appropriateness of the calculated sample size.
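The "elements required to replicate the calculated sample size" that the review checks for are visible in the standard two-group calculation: the treatment effect to be detected, the outcome's variability, the significance level, and the power. A minimal sketch using the normal approximation for a two-sided two-sample comparison (function name and all numbers are illustrative assumptions, not from any reviewed trial):

```python
# Sketch: per-arm sample size for a two-sided two-sample comparison
# of means, via the normal approximation:
#   n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2
# Illustrative only; real trials may use t-based or exact methods.
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Participants per arm to detect a mean difference `delta`,
    given outcome standard deviation `sd`, at the stated alpha/power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n)

# Eg, detect a 10-point difference on a 0-100 pain scale, SD = 20
# (a standardized effect of 0.5), at alpha = 0.05 and 80% power:
print(n_per_group(delta=10, sd=20))  # -> 63 per arm
```

Omitting any one input (as 62% of the reviewed publications did for at least one element) makes the reported sample size impossible to reproduce, which is the transparency gap the review documents.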


Pain | 2014

Adverse event reporting in nonpharmacologic, noninterventional pain clinical trials: ACTTION systematic review

Matthew Hunsinger; Shannon M. Smith; Daniel Rothstein; Andrew McKeown; Melissa Parkhurst; Sharon Hertz; Nathaniel P. Katz; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin

Summary: The results of this systematic review suggest that adverse event reporting in trials examining nonpharmacologic, noninterventional pain treatments needs to improve.

Abstract: Assessment of treatment safety is 1 of the primary goals of clinical trials. Organizations and working groups have created reporting guidelines for adverse events (AEs). Previous research examining AE reporting for pharmacologic clinical trials of analgesics in major pain journals found many reporting inadequacies, suggesting that analgesic trials are not adhering to existing AE reporting guidelines. The present systematic review documented AE reporting in 3 main pain journals for nonpharmacologic, noninterventional (NP/NI) trials examining pain treatments. To broaden our pool of nonpharmacologic trials, we also included trials examining acupuncture, leech therapy, and noninvasive stimulation techniques (eg, transcutaneous electrical nerve stimulation). We documented AE reporting at 2 levels of specificity using coding manuals based on the Consolidated Standards of Reporting Trials (CONSORT) harms reporting standards and the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) AE reporting checklist. We identified a number of inadequacies in AE reporting across the 3 journals. For example, using the ACTTION coding manual, we found that less than one-half of the trials reported specific AE assessment methods; approximately one-third of the trials reported withdrawals due to AEs for each study arm; and about one-fourth of the trials reported all specific AEs. We also examined differences in AE reporting across several trial characteristics, finding that AE reporting was generally more detailed in trials with patients versus those using healthy volunteers undergoing experimentally evoked pain. These results suggest that investigators conducting and reporting NP/NI clinical trials are not adequately describing the assessment and occurrence of AEs.


Arthritis & Rheumatism | 2014

Meta-analysis of assay sensitivity and study features in clinical trials of pharmacologic treatments for osteoarthritis pain

Robert H. Dworkin; Dennis C. Turk; Sarah Peirce-Sandner; Hua He; Michael P. McDermott; Marc C. Hochberg; Joanne M. Jordan; Nathaniel P. Katz; Allison H. Lin; Tuhina Neogi; Bob A. Rappaport; Lee S. Simon; Vibeke Strand

To identify patient, study, and site factors associated with assay sensitivity in clinical trials of pharmacologic treatments for osteoarthritis (OA) pain.


Pain | 2014

Disclosure of authorship contributions in analgesic clinical trials and related publications: ACTTION systematic review and recommendations.

Matthew Hunsinger; Shannon M. Smith; Andrew McKeown; Melissa Parkhurst; Robert A. Gross; Allison H. Lin; Michael P. McDermott; Bob A. Rappaport; Dennis C. Turk; Robert H. Dworkin


Collaboration


Dive into Allison H. Lin's collaborations.

Top Co-Authors

Bob A. Rappaport
Food and Drug Administration

Dennis C. Turk
University of Washington

Michael C. Rowbotham
California Pacific Medical Center

Sharon Hertz
Food and Drug Administration