Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Isabelle Boutron is active.

Publication


Featured research published by Isabelle Boutron.


Annals of Internal Medicine | 2008

Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration.

Isabelle Boutron; David Moher; Douglas G. Altman; Kenneth F. Schulz; Philippe Ravaud

Adequate reporting of randomized, controlled trials (RCTs) is necessary to allow accurate critical appraisal of the validity and applicability of the results. The CONSORT (Consolidated Standards of Reporting Trials) Statement, a 22-item checklist and flow diagram, is intended to address this problem by improving the reporting of RCTs. However, some specific issues that apply to trials of nonpharmacologic treatments (for example, surgery, technical interventions, devices, rehabilitation, psychotherapy, and behavioral intervention) are not specifically addressed in the CONSORT Statement. Furthermore, considerable evidence suggests that the reporting of nonpharmacologic trials still needs improvement. Therefore, the CONSORT group developed an extension of the CONSORT Statement for trials assessing nonpharmacologic treatments. A consensus meeting of 33 experts was organized in Paris, France, in February 2006, to develop an extension of the CONSORT Statement for trials of nonpharmacologic treatments. The participants extended 11 items from the CONSORT Statement, added 1 item, and developed a modified flow diagram. To allow adequate understanding and implementation of the CONSORT extension, the CONSORT group developed this elaboration and explanation document from a review of the literature to provide examples of adequate reporting. This extension, in conjunction with the main CONSORT Statement and other CONSORT extensions, should help to improve the reporting of RCTs performed in this field.


BMJ | 2014

Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide

Tammy Hoffmann; Paul Glasziou; Isabelle Boutron; Ruairidh Milne; Rafael Perera; David Moher; Douglas G. Altman; Virginia Barbour; Helen Macdonald; Marie Johnston; Sarah E Lamb; Mary Dixon-Woods; Peter McCulloch; Jeremy C. Wyatt; An-Wen Chan; Susan Michie

Without a complete published description of interventions, clinicians and patients cannot reliably implement interventions that are shown to be useful, and other researchers cannot replicate or build on research findings. The quality of description of interventions in publications, however, is remarkably poor. To improve the completeness of reporting, and ultimately the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide. The process involved a literature review for relevant checklists and research, a Delphi survey of an international panel of experts to guide item selection, and a face to face panel meeting. The resultant 12 item TIDieR checklist (brief name, why, what (materials), what (procedure), who provided, how, where, when and how much, tailoring, modifications, how well (planned), how well (actual)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs. This paper presents the TIDieR checklist and guide, with an explanation and elaboration for each item, and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and make it easier for authors to structure accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information.


BMJ | 2016

ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

Jonathan A C Sterne; Miguel A. Hernán; Barnaby C Reeves; Jelena Savovic; Nancy D Berkman; Meera Viswanathan; David Henry; Douglas G. Altman; Mohammed T Ansari; Isabelle Boutron; James Carpenter; An-Wen Chan; Rachel Churchill; Jonathan J Deeks; Asbjørn Hróbjartsson; Jamie Kirkham; Peter Jüni; Yoon K. Loke; Theresa D Pigott; Craig Ramsay; Deborah Regidor; Hannah R. Rothstein; Lakhbir Sandhu; Pasqualina Santaguida; Holger J. Schunemann; B. Shea; Ian Shrier; Peter Tugwell; Lucy Turner; Jeffrey C. Valentine

Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.


JAMA | 2009

Comparison of Registered and Published Primary Outcomes in Randomized Controlled Trials

Sylvain Mathieu; Isabelle Boutron; David Moher; Douglas G. Altman; Philippe Ravaud

CONTEXT As of 2005, the International Committee of Medical Journal Editors required investigators to register their trials prior to participant enrollment as a precondition for publishing the trial's findings in member journals. OBJECTIVE To assess the proportion of registered trials with results recently published in journals with high impact factors; to compare the primary outcomes specified in trial registries with those reported in the published articles; and to determine whether primary outcome reporting bias favored significant outcomes. DATA SOURCES AND STUDY SELECTION MEDLINE via PubMed was searched for reports of randomized controlled trials (RCTs) in 3 medical areas (cardiology, rheumatology, and gastroenterology) indexed in 2008 in the 10 general medical journals and specialty journals with the highest impact factors. DATA EXTRACTION For each included article, we obtained the trial registration information using a standardized data extraction form. RESULTS Of the 323 included trials, 147 (45.5%) were adequately registered (ie, registered before the end of the trial, with the primary outcome clearly specified). Trial registration was lacking for 89 published reports (27.6%), 45 trials (13.9%) were registered after the completion of the study, 39 (12%) were registered with no or an unclear description of the primary outcome, and 3 (0.9%) were registered after the completion of the study and had an unclear description of the primary outcome. Among articles with trials adequately registered, 31% (46 of 147) showed some evidence of discrepancies between the outcomes registered and the outcomes published. The influence of these discrepancies could be assessed in only half of these trials; among them, statistically significant results were favored in 82.6% (19 of 23). CONCLUSION Comparison of the primary outcomes of RCTs registered with their subsequent publication indicated that selective outcome reporting is prevalent.


The Lancet | 2014

Reducing waste from incomplete or unusable reports of biomedical research

Paul Glasziou; Douglas G. Altman; Patrick M. Bossuyt; Isabelle Boutron; Mike Clarke; Steven A. Julious; Susan Michie; David Moher; Elizabeth Wager

Research publication can both communicate and miscommunicate. Unless research is adequately reported, the time and resources invested in the conduct of research are wasted. Reporting guidelines such as CONSORT, STARD, PRISMA, and ARRIVE aim to improve the quality of research reports, but all are much less adopted and adhered to than they should be. Adequate reports of research should clearly describe which questions were addressed and why, what was done, what was shown, and what the findings mean. However, substantial failures occur in each of these elements. For example, studies of published trial reports showed that the poor description of interventions meant that 40-89% were non-replicable; comparisons of protocols with publications showed that most studies had at least one primary outcome changed, introduced, or omitted; and investigators of new trials rarely set their findings in the context of a systematic review, and cited a very small and biased selection of previous relevant trials. Although best documented in reports of controlled trials, inadequate reporting occurs in all types of studies: animal and other preclinical studies, diagnostic studies, epidemiological studies, clinical prediction research, surveys, and qualitative studies. In this report, and in the Series more generally, we point to waste at all stages in medical research. Although a more nuanced understanding of the complex systems involved in the conduct, writing, and publication of research is desirable, some immediate action can be taken to improve the reporting of research. Evidence for some recommendations is clear: change the current system of research rewards and regulations to encourage better and more complete reporting, and fund the development and maintenance of infrastructure to support better reporting, linkage, and archiving of all elements of research. However, the high amount of waste also warrants future investment in the monitoring of and research into reporting of research, and active implementation of the findings to ensure that research reports better address the needs of the range of research users.


The Lancet | 2009

Challenges in evaluating surgical innovation

P L Ergina; Jonathan Cook; Jane M Blazeby; Isabelle Boutron; Clavien P-A.; Barney Reeves; Christoph M. Seiler

Research on surgical interventions is associated with several methodological and practical challenges, of which few, if any, apply only to surgery. However, surgical evaluation is especially demanding because many of these challenges coincide. In this report, the second of three on surgical innovation and evaluation, we discuss obstacles related to the study design of randomised controlled trials and non-randomised studies assessing surgical interventions. We also describe the issues related to the nature of surgical procedures, for example their complexity, surgeon-related factors, and the range of outcomes. Although difficult, surgical evaluation is achievable and necessary. Solutions tailored to surgical research and a framework for generating evidence on which to base surgical practice are essential.


The Lancet | 2009

Evaluation and stages of surgical innovations

Jeffrey Barkun; J K Aronson; L S Feldman; Guy J. Maddern; Steven M. Strasberg; D G Altman; Jane M Blazeby; Isabelle Boutron; W B Campbell; Clavien P-A.; Jonathan Cook; P L Ergina; David R. Flum; Paul Glasziou; John C. Marshall; Peter McCulloch; Jon Nicholl; Barney Reeves; Christoph M. Seiler; J L Meakins; D Ashby; N Black; J Bunker; M Burton; M Campbell; K Chalkidou; Iain Chalmers; M.R. de Leval; J Deeks; A M Grant

Surgical innovation is an important part of surgical practice. Its assessment is complex because of idiosyncrasies related to surgical practice, but necessary so that introduction and adoption of surgical innovations can derive from evidence-based principles rather than trial and error. A regulatory framework is also desirable to protect patients against the potential harms of any novel procedure. In this first of three Series papers on surgical innovation and evaluation, we propose a five-stage paradigm to describe the development of innovative surgical procedures.


Canadian Medical Association Journal | 2013

Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors

Asbjørn Hróbjartsson; Ann Sofia Skou Thomsen; Frida Emanuelsson; Britta Tendal; Jørgen Hilden; Isabelle Boutron; Philippe Ravaud; Stig Brorson

Background: Clinical trials are commonly done without blinded outcome assessors despite the risk of bias. We wanted to evaluate the effect of nonblinded outcome assessment on estimated effects in randomized clinical trials with outcomes that involved subjective measurement scales. Methods: We conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two investigators agreed on the inclusion of trials and the outcome scale. For each trial, we calculated the difference in effect size (i.e., standardized mean difference between nonblinded and blinded assessments). A difference in effect size of less than 0 suggested that nonblinded assessors generated more optimistic estimates of effect. We pooled the differences in effect size using inverse variance random-effects meta-analysis and used metaregression to identify potential reasons for variation. Results: We included 24 trials in our review. The main meta-analysis included 16 trials (involving 2854 patients) with subjective outcomes. The estimated treatment effect was more beneficial when based on nonblinded assessors (pooled difference in effect size −0.23 [95% confidence interval (CI) −0.40 to −0.06]). In relative terms, nonblinded assessors exaggerated the pooled effect size by 68% (95% CI 14% to 230%). Heterogeneity was moderate (I2 = 46%, p = 0.02) and unexplained by metaregression. Interpretation: We provide empirical evidence for observer bias in randomized clinical trials with subjective measurement scale outcomes. A failure to blind assessors of outcomes in such trials results in a high risk of substantial bias.


JAMA Internal Medicine | 2009

Reporting of Safety Results in Published Reports of Randomized Controlled Trials

Isabelle Pitrou; Isabelle Boutron; Nizar Ahmad; Philippe Ravaud

BACKGROUND Reports of clinical trials usually emphasize efficacy results, especially when results are statistically significant. Poor safety reporting can lead to misinterpretation and inadequate conclusions about the interventions assessed. Our aim was to describe the reporting of harm-related results from randomized controlled trials (RCTs). METHODS We searched the MEDLINE database for reports of RCTs published from January 1, 2006, through January 1, 2007, in 6 general medical journals with a high impact factor. Data were extracted by use of a standardized form to appraise the presentation of safety results in text and tables. RESULTS Adverse events were mentioned in 88.7% of the 133 reports. No information on severe adverse events or on withdrawal of patients owing to an adverse event was given in 27.1% and 47.4% of articles, respectively. Restrictions in the reporting of harm-related data were noted in 43 articles (32.3%), with a description of the most common adverse events only (n = 17), severe adverse events only (n = 16), statistically significant events only (n = 5), and a combination of restrictions (n = 5). The population considered for safety analysis was clearly reported in 65.6% of articles. CONCLUSION Our review reveals important heterogeneity and variability in the reporting of harm-related results in publications of RCTs.


BMJ | 2010

Taking healthcare interventions from trial to practice

Paul Glasziou; Iain Chalmers; Douglas G. Altman; Hilda Bastian; Isabelle Boutron; Anne Brice; Gro Jamtvedt; Andrew Farmer; Davina Ghersi; Trish Groves; Carl Heneghan; Sophie Hill; Simon Lewin; Susan Michie; Rafael Perera; Valerie M. Pomeroy; Julie K. Tilson; Sasha Shepperd; John W Williams

The results of thousands of trials are never acted on because their published reports do not describe the interventions in enough detail. How can we improve the reporting?

Collaboration


Dive into Isabelle Boutron's collaborations.

Top Co-Authors

David Moher

Ottawa Hospital Research Institute


Philippe Ravaud

French Institute of Health and Medical Research


Serge Poiraudeau

Paris Descartes University


Gabriel Baron

Paris Descartes University


François Rannou

Paris Descartes University


Asbjørn Hróbjartsson

University of Southern Denmark
