Larissa Shamseer
University of Ottawa
Publication
Featured research published by Larissa Shamseer.
Systematic Reviews | 2015
David Moher; Larissa Shamseer; Mike Clarke; Davina Ghersi; Alessandro Liberati; Mark Petticrew; Paul G. Shekelle; Lesley Stewart
Systematic reviews should build on a protocol that describes the rationale, hypothesis, and planned methods of the review; few reviews report whether a protocol exists. Detailed, well-described protocols can facilitate the understanding and appraisal of the review methods, as well as the detection of modifications to methods and selective reporting in completed reviews. We describe the development of a reporting guideline, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015). PRISMA-P consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review. Funders and those commissioning reviews might consider mandating the use of the checklist to facilitate the submission of relevant protocol information in funding applications. Similarly, peer reviewers and editors can use the guidance to gauge the completeness and transparency of a systematic review protocol submitted for publication in a journal or other medium.
BMJ | 2015
Larissa Shamseer; David Moher; Mike Clarke; Davina Ghersi; Alessandro Liberati; Mark Petticrew; Paul G. Shekelle; Lesley Stewart
Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central’s Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols—PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol. This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.
Systematic Reviews | 2012
Lucy Turner; Larissa Shamseer; Douglas G. Altman; Kenneth F. Schulz; David Moher
Background The Consolidated Standards of Reporting Trials (CONSORT) Statement is intended to facilitate better reporting of randomised clinical trials (RCTs). A systematic review recently published in the Cochrane Library assesses whether journal endorsement of CONSORT impacts the completeness of reporting of RCTs; those findings are summarised here. Methods Evaluations assessing the completeness of reporting of RCTs against any of 27 outcomes formulated from the 1996 or 2001 CONSORT checklists were included; two primary comparisons were evaluated. The 27 outcomes were: the 22 items of the 2001 CONSORT checklist, four sub-items describing blinding, and a ‘total summary score’ of aggregate items, as reported. Relative risks (RR) and 99% confidence intervals were calculated to determine effect estimates for each outcome across evaluations. Results Fifty-three reports describing 50 evaluations of 16,604 RCTs were assessed for adherence to at least one of the 27 outcomes. Sixty-nine of 81 meta-analyses showed a relative benefit from CONSORT endorsement on completeness of reporting. Between endorsing and non-endorsing journals, 25 outcomes were improved with CONSORT endorsement, five of them significantly (α = 0.01). The number of evaluations per meta-analysis was often low, with substantial heterogeneity; validity was assessed as low or unclear for many evaluations. Conclusions The results of this review suggest that journal endorsement of CONSORT may benefit the completeness of reporting of the RCTs they publish. No evidence suggests that endorsement hinders the completeness of RCT reporting. However, despite relative improvements when CONSORT is endorsed by journals, the completeness of reporting of trials remains suboptimal. Journals are not sending a clear message about endorsement to authors submitting manuscripts for publication. As such, the fidelity of endorsement as an ‘intervention’ has been weak to date. Journals need to take further action regarding their endorsement and implementation of CONSORT to facilitate accurate, transparent and complete reporting of trials.
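The per-outcome effect estimates described above are relative risks with 99% confidence intervals, matching the review's α of 0.01. As a rough illustration only, and not a reproduction of the review's meta-analytic pooling across evaluations, a relative risk and a Wald-type interval can be computed from a 2×2 table of adequately versus inadequately reported trials in endorsing and non-endorsing journals; all counts and names in this sketch are hypothetical.

    import math

    def relative_risk_99ci(adherent_endorse, total_endorse,
                           adherent_non, total_non, z=2.576):
        """Relative risk of adequate reporting (endorsing vs. non-endorsing
        journals) with a Wald-type confidence interval on the log scale.
        z = 2.576 corresponds to a 99% interval, i.e. alpha = 0.01."""
        rr = (adherent_endorse / total_endorse) / (adherent_non / total_non)
        se_log_rr = math.sqrt(
            1 / adherent_endorse - 1 / total_endorse
            + 1 / adherent_non - 1 / total_non
        )
        lower = math.exp(math.log(rr) - z * se_log_rr)
        upper = math.exp(math.log(rr) + z * se_log_rr)
        return rr, (lower, upper)

    # Hypothetical counts for one checklist outcome in one evaluation
    print(relative_risk_99ci(adherent_endorse=60, total_endorse=100,
                             adherent_non=45, total_non=100))

An RR above 1 with a 99% interval excluding 1 would correspond to the "significant" improvements the review reports for five outcomes; the review itself pools such estimates across evaluations rather than computing them from a single table.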
PLOS Medicine | 2016
Matthew J. Page; Larissa Shamseer; Douglas G. Altman; Jennifer Tetzlaff; Margaret Sampson; Andrea C. Tricco; Ferrán Catalá-López; Lun Li; Emma K. Reid; Rafael Sarkis-Onofre; David Moher
Background Systematic reviews (SRs) can help decision makers interpret the deluge of published biomedical literature. However, an SR may be of limited use if the methods used to conduct it are flawed and its reporting is incomplete. To our knowledge, since 2004 there has been no cross-sectional study of the prevalence, focus, and completeness of reporting of SRs across different specialties. Therefore, the aim of our study was to investigate the epidemiological and reporting characteristics of a more recent cross-section of SRs. Methods and Findings We searched MEDLINE to identify potentially eligible SRs indexed during the month of February 2014. Citations were screened using prespecified eligibility criteria. Epidemiological and reporting characteristics of a random sample of 300 SRs were extracted by one reviewer, with a 10% sample extracted in duplicate. We compared characteristics of Cochrane versus non-Cochrane reviews, and the 2014 sample of SRs versus a 2004 sample of SRs. We identified 682 SRs, suggesting that more than 8,000 SRs are being indexed in MEDLINE annually, corresponding to a 3-fold increase over the last decade. The majority of SRs addressed a therapeutic question and were conducted by authors based in China, the UK, or the US; they included a median of 15 studies involving 2,072 participants. Meta-analysis was performed in 63% of SRs, mostly using standard pairwise methods. Study risk of bias/quality assessment was performed in 70% of SRs but was rarely incorporated into the analysis (16%). Few SRs (7%) searched sources of unpublished data, and the risk of publication bias was considered in less than half of SRs. Reporting quality was highly variable; at least a third of SRs did not report use of an SR protocol, eligibility criteria relating to publication status, years of coverage of the search, a full Boolean search logic for at least one database, methods for data extraction, methods for study risk of bias assessment, a primary outcome, an abstract conclusion that incorporated study limitations, or the funding source of the SR. Cochrane SRs, which accounted for 15% of the sample, had more complete reporting than all other types of SRs. Reporting has generally improved since 2004 but remains suboptimal for many characteristics. Conclusions An increasing number of SRs are being published, and many are poorly conducted and reported. Strategies are needed to help reduce this avoidable waste in research.
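The figure of "more than 8,000 SRs... annually" is consistent with a straight-line extrapolation of the one-month MEDLINE sample; the snippet below only restates that arithmetic (the multiplication is an assumed method, not one the abstract spells out).

    # Straight-line annualisation of the February 2014 count (assumed method;
    # the abstract reports the conclusion, not the calculation).
    srs_indexed_in_february_2014 = 682
    estimated_srs_per_year = srs_indexed_in_february_2014 * 12
    print(estimated_srs_per_year)  # 8184 -> "more than 8,000" SRs indexed annually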
BMJ | 2014
Adrienne Stevens; Larissa Shamseer; Erica Weinstein; F Yazdi; Lucy Turner; Justin Thielman; Douglas G. Altman; Allison Hirst; John Hoey; Anita Palepu; Kenneth F. Schulz; David Moher
Objective To assess whether the completeness of reporting of health research is related to journals’ endorsement of reporting guidelines. Design Systematic review. Data sources Reporting guidelines from a published systematic review and the EQUATOR Network (October 2011). Studies assessing the completeness of reporting by using an included reporting guideline (termed “evaluations”) (1990 to October 2011; addendum searches in January 2012) were identified from searches of either Medline, Embase, and the Cochrane Methodology Register or Scopus, depending on the reporting guideline name. Study selection English language reporting guidelines that provided explicit guidance for reporting, described the guidance development process, and indicated use of a consensus development process were included. The CONSORT statement was excluded, as evaluations of adherence to CONSORT had previously been reviewed. English or French language evaluations of included reporting guidelines were eligible if they assessed the completeness of reporting of studies as a primary intent and the included studies enabled the comparisons of interest (that is, after versus before journal endorsement and/or endorsing versus non-endorsing journals). Data extraction Potentially eligible evaluations of included guidelines were screened initially by title and abstract and then as full text reports. If eligibility was unclear, authors of evaluations were contacted; journals’ websites were consulted for endorsement information where needed. Completeness of reporting was analyzed in relation to endorsement item by item and, where consistent with the authors’ analysis, as a mean summed score. Results 101 reporting guidelines were included. Of 15 249 records retrieved from the search for evaluations, 26 evaluations that assessed completeness of reporting in relation to endorsement for nine reporting guidelines were identified. Of those, 13 evaluations assessing seven reporting guidelines (BMJ economic checklist, CONSORT for harms, PRISMA, QUOROM, STARD, STRICTA, and STROBE) could be analyzed. Reporting guideline items were assessed by few evaluations. Conclusions The completeness of reporting of only nine of 101 health research reporting guidelines (excluding CONSORT) has been evaluated in relation to journals’ endorsement. Items from seven reporting guidelines were quantitatively analyzed, by few evaluations each. Insufficient evidence exists to determine the relation between journals’ endorsement of reporting guidelines and the completeness of reporting of published health research reports. Journal editors and researchers should consider collaborative, prospectively designed, controlled studies to provide more robust evidence. Systematic review registration Not registered; no known register currently accepts protocols for methodology systematic reviews.
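Where the original authors analysed adherence as a "mean summed score", the idea is that each report's item-level adherence is summed and the sums are averaged across reports. The snippet below is a hypothetical illustration of that aggregation, not the review's actual scoring scheme or data.

    # Hypothetical item-level adherence (1 = reported, 0 = not reported) for
    # three published reports assessed against a five-item reporting guideline.
    reports = [
        [1, 0, 1, 1, 0],
        [1, 1, 1, 0, 0],
        [0, 1, 1, 1, 1],
    ]

    summed_scores = [sum(items) for items in reports]           # per-report summed score
    mean_summed_score = sum(summed_scores) / len(summed_scores)
    print(summed_scores, round(mean_summed_score, 2))           # [3, 3, 4] 3.33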
Pediatrics | 2010
Bradley C. Johnston; Larissa Shamseer; Bruno R. da Costa; Ross T. Tsuyuki; Sunita Vohra
BACKGROUND: Worldwide, diarrheal diseases rank second among conditions that afflict children. Despite the disease burden, there is limited consensus on how to define and measure pediatric acute diarrhea in trials. OBJECTIVES: In RCTs of children in which acute diarrhea was a primary outcome, we documented (1) how acute diarrhea and its resolution were defined, (2) all primary outcomes, (3) the psychometric properties of instruments used to measure acute diarrhea, and (4) the methodologic quality of included trials, as reported. METHODS: We searched CENTRAL, Embase, Global Health, and Medline from inception to February 2009. English-language RCTs of children younger than 19 years that measured acute diarrhea as a primary outcome were chosen. RESULTS: We identified 138 RCTs reporting on 1 or more primary outcomes related to pediatric acute diarrhea or diarrheal disease. Included trials used 64 unique definitions of diarrhea, 69 unique definitions of diarrhea resolution, and 46 unique primary outcomes. The majority of included trials evaluated short-term clinical disease activity (incidence and duration of diarrhea), laboratory outcomes, or a composite of these end points. Thirty-two trials used instruments (eg, single and multidomain scoring systems) to support assessment of disease activity. Of these, 3 trials stated that their instrument was valid; however, none of the trials (or their citations) reported evidence of this validity. The overall methodologic quality of included trials was good. CONCLUSIONS: Even in what would be considered methodologically sound clinical trials, definitions of diarrhea, primary outcomes, and instruments employed in RCTs of pediatric acute diarrhea are heterogeneous, lack evidence of validity, and focus on indices that may not be important to participants.
BMJ | 2014
Sally Hopewell; Gary S. Collins; Isabelle Boutron; Ly-Mee Yu; Jonathan Cook; Milensu Shanyinde; Rose Wharton; Larissa Shamseer; Douglas G. Altman
Objective To investigate the effectiveness of open peer review as a mechanism to improve the reporting of randomised trials published in biomedical journals. Design Retrospective before and after study. Setting BioMed Central series medical journals. Sample 93 primary reports of randomised trials published in BMC-series medical journals in 2012. Main outcome measures Changes to the reporting of methodological aspects of randomised trials in manuscripts after peer review, based on the CONSORT checklist, corresponding peer reviewer reports, the type of changes requested, and the extent to which authors adhered to these requests. Results Of the 93 trial reports, 38% (n=35) did not describe the method of random sequence generation, 54% (n=50) concealment of allocation sequence, 50% (n=46) whether the study was blinded, 34% (n=32) the sample size calculation, 35% (n=33) specification of primary and secondary outcomes, 55% (n=51) results for the primary outcome, and 90% (n=84) details of the trial protocol. The number of changes between manuscript versions was relatively small; most involved adding new information or altering existing information. Most changes requested by peer reviewers had a positive impact on the reporting of the final manuscript—for example, adding or clarifying randomisation and blinding (n=27), sample size (n=15), primary and secondary outcomes (n=16), results for primary or secondary outcomes (n=14), and toning down conclusions to reflect the results (n=27). Some changes requested by peer reviewers, however, had a negative impact, such as adding additional unplanned analyses (n=15). Conclusion Peer reviewers fail to detect important deficiencies in reporting of the methods and results of randomised trials. The number of these changes requested by peer reviewers was relatively small. Although most had a positive impact, some were inappropriate and could have a negative impact on reporting in the final publication.
BMC Complementary and Alternative Medicine | 2012
Sana Ishaque; Larissa Shamseer; Cecilia Bukutu; Sunita Vohra
Background Rhodiola rosea (R. rosea) is grown at high altitudes and northern latitudes. Due to its purported adaptogenic properties, it has been studied for its performance-enhancing capabilities in healthy populations and its therapeutic properties in a number of clinical populations. Our objective was to systematically review the evidence of efficacy and safety of R. rosea for physical and mental fatigue. Methods Six electronic databases were searched to identify randomized controlled trials (RCTs) and controlled clinical trials (CCTs) evaluating the efficacy and safety of R. rosea for physical and mental fatigue. Two reviewers independently screened the identified literature, extracted data, and assessed risk of bias for included studies. Results Of 206 articles identified in the search, 11 met the inclusion criteria for this review. Ten were described as RCTs and one as a CCT. Two of six trials examining physical fatigue in healthy populations reported R. rosea to be effective, as did three of five RCTs evaluating R. rosea for mental fatigue. All of the included studies exhibited either a high risk of bias or reporting flaws that hinder assessment of their true validity (unclear risk of bias). Conclusion Research regarding R. rosea efficacy is contradictory. While some evidence suggests that the herb may be helpful for enhancing physical performance and alleviating mental fatigue, methodological flaws limit accurate assessment of efficacy. A rigorously designed, well-reported RCT that minimizes bias is needed to determine the true efficacy of R. rosea for fatigue.
BMJ | 2015
Sunita Vohra; Larissa Shamseer; Margaret Sampson; Cecilia Bukutu; Christopher H. Schmid; Robyn Tate; Jane Nikles; Deborah Zucker; Richard L. Kravitz; Gordon H. Guyatt; Douglas G. Altman; David Moher
N-of-1 trials provide a mechanism for making evidence-based treatment decisions for an individual patient. They use key methodological elements of group clinical trials to evaluate treatment effectiveness in a single patient, in situations that cannot always accommodate large-scale trials: rare diseases, comorbid conditions, or the use of concurrent therapies. Improvement in the reporting and clarity of methods and findings in N-of-1 trials is essential for readers to gauge the validity of trials and to replicate successful findings. A CONSORT extension for N-of-1 trials (CENT 2015) provides guidance on the reporting of individual N-of-1 trials and of series of N-of-1 trials. CENT provides additional guidance for 14 of the 25 items of the CONSORT 2010 checklist, recommends a diagram for depicting an individual N-of-1 trial, and modifies the CONSORT flow diagram to address the flow of a series of N-of-1 trials. The rationale, development process, and the CENT 2015 checklist and diagrams are reported in this document.
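To make the design concrete: an N-of-1 trial exposes one patient to two treatments across repeated, typically randomised, crossover periods and compares outcomes within the same person. The sketch below uses simulated data and a paired t statistic purely as an illustration of that within-patient comparison; the analysis choice and all numbers are assumptions for illustration and are not part of the CENT 2015 guidance.

    import math
    from statistics import mean, stdev

    # Hypothetical N-of-1 trial: one patient, four A/B treatment cycles, one
    # symptom score per period (lower = better). All values are simulated.
    scores_treatment_a = [6.1, 5.8, 6.4, 5.9]
    scores_treatment_b = [7.2, 7.0, 6.8, 7.5]

    # Within-cycle differences (A minus B) are the unit of analysis.
    diffs = [a - b for a, b in zip(scores_treatment_a, scores_treatment_b)]
    n = len(diffs)
    t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # paired t statistic, df = n - 1
    print(f"mean difference = {mean(diffs):.2f}, t = {t_stat:.2f}, df = {n - 1}")

A series of N-of-1 trials, as addressed by the modified CONSORT flow diagram, would repeat this within-patient comparison across several patients and could then combine the individual results.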
American Journal of Occupational Therapy | 2016
Robyn Tate; Michael Perdices; Ulrike Rosenkoetter; William R. Shadish; Sunita Vohra; David H. Barlow; Robert H. Horner; Alan E. Kazdin; Thomas R. Kratochwill; Skye McDonald; Margaret Sampson; Larissa Shamseer; Leanne Togher; Richard W. Albin; Catherine L. Backman; Jacinta Douglas; Jonathan Evans; David L. Gast; Rumen Manolov; Geoffrey Mitchell; Lyndsey Nickels; Jane Nikles; Tamara Ownsworth; Miranda Rose; Christopher H. Schmid; Barbara A. Wilson
Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist, and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioral sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single-case research used in the behavioral sciences. We developed the Single-Case Reporting guideline In Behavioral interventions (SCRIBE) 2016 to meet this need. This Statement article describes the methodology of the development of the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts. We present the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated.