Network


Latest external collaborations at the country level.

Hotspot


Research topics in which An-Wen Chan is active.

Publication


Featured research published by An-Wen Chan.


BMJ | 2014

Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide

Tammy Hoffmann; Paul Glasziou; Isabelle Boutron; Ruairidh Milne; Rafael Perera; David Moher; Douglas G. Altman; Virginia Barbour; Helen Macdonald; Marie Johnston; Sarah E Lamb; Mary Dixon-Woods; Peter McCulloch; Jeremy C. Wyatt; An-Wen Chan; Susan Michie

Without a complete published description of interventions, clinicians and patients cannot reliably implement interventions that are shown to be useful, and other researchers cannot replicate or build on research findings. The quality of description of interventions in publications, however, is remarkably poor. To improve the completeness of reporting, and ultimately the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide. The process involved a literature review for relevant checklists and research, a Delphi survey of an international panel of experts to guide item selection, and a face to face panel meeting. The resultant 12 item TIDieR checklist (brief name, why, what (materials), what (procedure), who provided, how, where, when and how much, tailoring, modifications, how well (planned), how well (actual)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs. This paper presents the TIDieR checklist and guide, with an explanation and elaboration for each item, and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and make it easier for authors to structure accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information.


Annals of Internal Medicine | 2013

SPIRIT 2013 Statement: defining standard protocol items for clinical trials.

An-Wen Chan; Jennifer Tetzlaff; Douglas G. Altman; Andreas Laupacis; Peter C Gøtzsche; Karmela Krleža-Jerić; Asbjørn Hróbjartsson; Howard Mann; Kay Dickersin; Jesse A. Berlin; Caroline J Doré; Wendy R. Parulekar; William Summerskill; Trish Groves; Kenneth F. Schulz; Harold C. Sox; Frank Rockhold; Drummond Rennie; David Moher

The protocol of a clinical trial serves as the foundation for study planning, conduct, reporting, and appraisal. However, trial protocols and existing protocol guidelines vary greatly in content and quality. This article describes the systematic development and scope of SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) 2013, a guideline for the minimum content of a clinical trial protocol. The 33-item SPIRIT checklist applies to protocols for all clinical trials and focuses on content rather than format. The checklist recommends a full description of what is planned; it does not prescribe how to design or conduct a trial. By providing guidance for key content, the SPIRIT recommendations aim to facilitate the drafting of high-quality protocols. Adherence to SPIRIT would also enhance the transparency and completeness of trial protocols for the benefit of investigators, trial participants, patients, sponsors, funders, research ethics committees or institutional review boards, peer reviewers, journals, trial registries, policymakers, regulators, and other key stakeholders.


BMJ | 2013

SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials.

An-Wen Chan; Jennifer Tetzlaff; Peter C Gøtzsche; Douglas G. Altman; Howard Mann; Jesse A. Berlin; Kay Dickersin; Asbjørn Hróbjartsson; Kenneth F. Schulz; Wendy R. Parulekar; Karmela Krleza-Jeric; Andreas Laupacis; David Moher

High quality protocols facilitate proper conduct, reporting, and external review of clinical trials. However, the completeness of trial protocols is often inadequate. To help improve the content and quality of protocols, an international group of stakeholders developed the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials). The SPIRIT Statement provides guidance in the form of a checklist of recommended items to include in a clinical trial protocol. This SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations. For each checklist item, we provide a rationale and detailed description; a model example from an actual protocol; and relevant references supporting its importance. We strongly recommend that this explanatory paper be used in conjunction with the SPIRIT Statement. A website of resources is also available (www.spirit-statement.org). The SPIRIT 2013 Explanation and Elaboration paper, together with the Statement, should help with the drafting of trial protocols. Complete documentation of key trial elements can facilitate transparency and protocol review for the benefit of all stakeholders.


BMJ | 2016

ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

Jonathan A C Sterne; Miguel A. Hernán; Barnaby C Reeves; Jelena Savovic; Nancy D Berkman; Meera Viswanathan; David Henry; Douglas G. Altman; Mohammed T Ansari; Isabelle Boutron; James Carpenter; An-Wen Chan; Rachel Churchill; Jonathan J Deeks; Asbjørn Hróbjartsson; Jamie Kirkham; Peter Jüni; Yoon K. Loke; Theresa D Pigott; Craig Ramsay; Deborah Regidor; Hannah R. Rothstein; Lakhbir Sandhu; Pasqualina Santaguida; Holger J. Schunemann; B. Shea; Ian Shrier; Peter Tugwell; Lucy Turner; Jeffrey C. Valentine

Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.


BMJ | 2005

Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors.

An-Wen Chan; Douglas G. Altman

Objective To examine the extent and nature of outcome reporting bias in a broad cohort of published randomised trials. Design Retrospective review of publications and follow up survey of authors. Cohort All journal articles of randomised trials indexed in PubMed whose primary publication appeared in December 2000. Main outcome measures Prevalence of incompletely reported outcomes per trial; reasons for not reporting outcomes; association between completeness of reporting and statistical significance. Results 519 trials with 553 publications and 10 557 outcomes were identified. Survey responders (response rate 69%) provided information on unreported outcomes but were often unreliable; for 32% of those who denied the existence of such outcomes there was evidence to the contrary in their publications. On average, over 20% of the outcomes measured in a parallel group trial were incompletely reported. Within a trial, such outcomes had a higher odds of being statistically non-significant compared with fully reported outcomes (odds ratio 2.0 (95% confidence interval 1.6 to 2.7) for efficacy outcomes; 1.9 (1.1 to 3.5) for harm outcomes). The most commonly reported reasons for omitting efficacy outcomes included space constraints, lack of clinical importance, and lack of statistical significance. Conclusions Incomplete reporting of outcomes within published articles of randomised trials is common and is associated with statistical non-significance. The medical literature therefore represents a selective and biased subset of study outcomes, and trial protocols should be made publicly available.
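
The odds ratios quoted above compare the odds of statistical non-significance between incompletely and fully reported outcomes. As an illustration of that type of calculation only, the sketch below computes an unadjusted odds ratio with a Wald-type 95% confidence interval from a single 2x2 table; the counts are hypothetical, and the paper's actual analysis was stratified within trials.

```python
import math

# Hypothetical 2x2 counts (illustration only; the per-trial data behind the
# published odds ratio are not given in the abstract).
a, b = 40, 60   # incompletely reported outcomes: non-significant, significant
c, d = 25, 75   # fully reported outcomes:        non-significant, significant

# Odds ratio with a Wald-type 95% confidence interval on the log-odds scale.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```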


Canadian Medical Association Journal | 2004

Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research

An-Wen Chan; Karmela Krleza-Jeric; Isabelle Schmid; Douglas G. Altman

Background: The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review. Methods: We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications. Results: We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th–90th percentile range 5%–67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%–100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had a higher odds than nonsignificant efficacy outcomes of being fully reported (odds ratio 2.7; 95% confidence interval 1.5–5.0). Primary outcomes differed between protocols and publications for 40% of the trials. Interpretation: Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.
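
The overall odds ratio described above was calculated stratified by trial. A common way to pool 2x2 tables across strata is the Mantel-Haenszel estimator; the sketch below illustrates that estimator with hypothetical counts and is not necessarily the exact method used in the paper.

```python
# Minimal sketch of a Mantel-Haenszel odds ratio pooled across strata (here, trials).
# Each stratum is (a, b, c, d):
#   a = fully reported & significant,        b = fully reported & non-significant,
#   c = incompletely reported & significant, d = incompletely reported & non-significant.
# The counts below are hypothetical.
strata = [
    (12, 4, 6, 8),
    (20, 10, 9, 11),
    (7, 3, 2, 5),
]

# OR_MH = sum(a_i * d_i / n_i) / sum(b_i * c_i / n_i), where n_i is the stratum total.
numerator = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
denominator = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = numerator / denominator
print(f"Mantel-Haenszel OR = {or_mh:.2f}")
```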


The Lancet | 2014

Increasing value and reducing waste: addressing inaccessible research

An-Wen Chan; Fujian Song; Andrew J. Vickers; Tom Jefferson; Kay Dickersin; Peter C Gøtzsche; Harlan M. Krumholz; Davina Ghersi; H. Bart van der Worp

The methods and results of health research are documented in study protocols, full study reports (detailing all analyses), journal reports, and participant-level datasets. However, protocols, full study reports, and participant-level datasets are rarely available, and journal reports are available for only half of all studies and are plagued by selective reporting of methods and results. Furthermore, information provided in study protocols and reports varies in quality and is often incomplete. When full information about studies is inaccessible, billions of dollars in investment are wasted, bias is introduced, and research and care of patients are detrimentally affected. To help to improve this situation at a systemic level, three main actions are warranted. First, academic institutions and funders should reward investigators who fully disseminate their research protocols, reports, and participant-level datasets. Second, standards for the content of protocols and full study reports and for data sharing practices should be rigorously developed and adopted for all types of health research. Finally, journals, funders, sponsors, research ethics committees, regulators, and legislators should endorse and enforce policies supporting study registration and wide availability of journal reports, full study reports, and participant-level datasets.


BMJ | 2010

The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed.

Sally Hopewell; Susan Dutton; Ly-Mee Yu; An-Wen Chan; Douglas G. Altman

Objectives To examine the reporting characteristics and methodological details of randomised trials indexed in PubMed in 2000 and 2006 and assess whether the quality of reporting has improved after publication of the Consolidated Standards of Reporting Trials (CONSORT) Statement in 2001. Design Comparison of two cross sectional investigations. Study sample All primary reports of randomised trials indexed in PubMed in December 2000 (n=519) and December 2006 (n=616), including parallel group, crossover, cluster, factorial, and split body study designs. Main outcome measures The proportion of general and methodological items reported, stratified by year and study design. Risk ratios with 95% confidence intervals were calculated to represent changes in reporting between 2000 and 2006. Results The majority of trials were two arm (379/519 (73%) in 2000 v 468/616 (76%) in 2006) parallel group studies (383/519 (74%) v 477/616 (78%)) published in specialty journals (482/519 (93%) v 555/616 (90%)). In both 2000 and 2006, a median of 80 participants were recruited per trial for parallel group trials. The proportion of articles that reported drug trials decreased between 2000 and 2006 (from 393/519 (76%) to 356/616 (58%)), whereas the proportion of surgery trials increased (51/519 (10%) v 128/616 (21%)). There was an increase between 2000 and 2006 in the proportion of trial reports that included details of the primary outcome (risk ratio (RR) 1.18, 95% CI 1.04 to 1.33), sample size calculation (RR 1.66, 95% CI 1.40 to 1.95), and the methods of random sequence generation (RR 1.62, 95% CI 1.32 to 1.97) and allocation concealment (RR 1.40, 95% CI 1.11 to 1.76). There was no difference in the proportion of trials that provided specific details on who was blinded (RR 0.91, 95% CI 0.75 to 1.10). Conclusions Reporting of several important aspects of trial methods improved between 2000 and 2006; however, the quality of reporting remains well below an acceptable level. Without complete and transparent reporting of how a trial was designed and conducted, it is difficult for readers to assess its conduct and validity.
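
The risk ratios above compare the proportion of trials reporting an item in 2006 with the proportion in 2000. As a worked illustration of that calculation, the sketch below computes a risk ratio with a log-normal 95% confidence interval, reusing the drug-trial counts quoted in the abstract (356/616 in 2006 v 393/519 in 2000); the resulting figure is illustrative only and is not a result reported in the paper.

```python
import math

# Risk ratio for a characteristic in 2006 versus 2000, with a log-normal 95% CI.
# Counts are the drug-trial proportions quoted in the abstract.
a, n1 = 356, 616   # 2006: trials with the characteristic / total trials
b, n2 = 393, 519   # 2000: trials with the characteristic / total trials

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```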


PLOS Medicine | 2007

Ghost authorship in industry-initiated randomised trials.

Peter C Gøtzsche; Asbjørn Hróbjartsson; Helle Krogh Johansen; Mette T. Haahr; Douglas G. Altman; An-Wen Chan

Background Ghost authorship, the failure to name, as an author, an individual who has made substantial contributions to an article, may result in lack of accountability. The prevalence and nature of ghost authorship in industry-initiated randomised trials is not known. Methods and Findings We conducted a cohort study comparing protocols and corresponding publications for industry-initiated trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. We defined ghost authorship as present if individuals who wrote the trial protocol, performed the statistical analyses, or wrote the manuscript, were not listed as authors of the publication, or as members of a study group or writing committee, or in an acknowledgment. We identified 44 industry-initiated trials. We did not find any trial protocol or publication that stated explicitly that the clinical study report or the manuscript was to be written or was written by the clinical investigators, and none of the protocols stated that clinical investigators were to be involved with data analysis. We found evidence of ghost authorship for 33 trials (75%; 95% confidence interval 60%–87%). The prevalence of ghost authorship was increased to 91% (40 of 44 articles; 95% confidence interval 78%–98%) when we included cases where a person qualifying for authorship was acknowledged rather than appearing as an author. In 31 trials, the ghost authors we identified were statisticians. It is likely that we have overlooked some ghost authors, as we had very limited information to identify the possible omission of other individuals who would have qualified as authors. Conclusions Ghost authorship in industry-initiated trials is very common. Its prevalence could be considerably reduced, and transparency improved, if existing guidelines were followed, and if protocols were publicly available.
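
The prevalence estimates above are proportions with 95% confidence intervals (for example, ghost authorship in 33 of 44 trials, 75%, 60%-87%). Assuming an exact binomial (Clopper-Pearson) interval, a common choice for proportions of this kind, the sketch below reproduces that style of calculation; it is an illustration, not necessarily the method used in the paper.

```python
from scipy.stats import beta

# Exact (Clopper-Pearson) 95% confidence interval for a binomial proportion,
# applied to the 33 of 44 trials with evidence of ghost authorship quoted in
# the abstract. The result should land close to the reported 60%-87%.
k, n, alpha = 33, 44, 0.05

lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0

print(f"{k}/{n} = {k/n:.0%}, exact 95% CI {lower:.0%} to {upper:.0%}")
```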


BMJ | 2005

Comparison of descriptions of allocation concealment in trial protocols and the published reports: cohort study

Julie Pildal; An-Wen Chan; Asbjørn Hróbjartsson; Elisabeth Forfang; Douglas G. Altman; Peter C Gøtzsche

Objectives To compare how allocation concealment is described in publications of randomised clinical trials and corresponding protocols, and to estimate how often trial publications with unclear allocation concealment have adequate concealment according to the protocol. Design Cohort study of 102 sets of trial protocols and corresponding publications. Setting Protocols of randomised trials approved by the scientific and ethical committees for Copenhagen and Frederiksberg, 1994 and 1995. Main outcome measures Frequency of adequate, unclear, and inadequate allocation concealment and sequence generation in trial publications compared with protocols, and the proportion of protocols where methods were reported to be adequate but descriptions were unclear in the trial publications. Results 96 of the 102 trials had unclear allocation concealment according to the trial publication. According to the protocols, 15 of these 96 trials had adequate allocation concealment (16%, 95% confidence interval 9% to 24%), 80 had unclear concealment (83%, 74% to 90%), and one had inadequate concealment. When retrospectively defined loose criteria for concealment were applied, 83 of the 102 trial publications had unclear concealment. According to their protocol, 33 of these 83 trials had adequate allocation concealment (40%, 29% to 51%), 49 had unclear concealment (59%, 48% to 70%), and one had inadequate concealment. Conclusions Most randomised clinical trials have unclear allocation concealment on the basis of the trial publication alone. Most of these trials also have unclear allocation concealment according to their protocol.

Collaboration


An-Wen Chan's collaborations.

Top Co-Authors

David Moher (Ottawa Hospital Research Institute)

Jennifer Tetzlaff (Ottawa Hospital Research Institute)

Karmela Krleža-Jerić (Canadian Institutes of Health Research)