Publications

Featured research published by Adrienne Stevens.


BMJ | 2014

Relation of completeness of reporting of health research to journals' endorsement of reporting guidelines: systematic review.

Adrienne Stevens; Larissa Shamseer; Erica Weinstein; F Yazdi; Lucy Turner; Justin Thielman; Douglas G. Altman; Allison Hirst; John Hoey; Anita Palepu; Kenneth F. Schulz; David Moher

Objective To assess whether the completeness of reporting of health research is related to journals’ endorsement of reporting guidelines. Design Systematic review. Data sources Reporting guidelines from a published systematic review and the EQUATOR Network (October 2011). Studies assessing the completeness of reporting by using an included reporting guideline (termed “evaluations”) (1990 to October 2011; addendum searches in January 2012) from searches of either Medline, Embase, and the Cochrane Methodology Register or Scopus, depending on reporting guideline name. Study selection English language reporting guidelines that provided explicit guidance for reporting, described the guidance development process, and indicated use of a consensus development process were included. The CONSORT statement was excluded, as evaluations of adherence to CONSORT had previously been reviewed. English or French language evaluations of included reporting guidelines were eligible if they assessed the completeness of reporting of studies as a primary intent and those included studies enabled the comparisons of interest (that is, after versus before journal endorsement and/or endorsing versus non-endorsing journals). Data extraction Potentially eligible evaluations of included guidelines were screened initially by title and abstract and then as full text reports. If eligibility was unclear, authors of evaluations were contacted; journals’ websites were consulted for endorsement information where needed. The completeness of reporting of reporting guidelines was analyzed in relation to endorsement by item and, where consistent with the authors’ analysis, a mean summed score. Results 101 reporting guidelines were included. Of 15 249 records retrieved from the search for evaluations, 26 evaluations that assessed completeness of reporting in relation to endorsement for nine reporting guidelines were identified. 
Of those, 13 evaluations assessing seven reporting guidelines (BMJ economic checklist, CONSORT for harms, PRISMA, QUOROM, STARD, STRICTA, and STROBE) could be analyzed. Reporting guideline items were assessed by few evaluations. Conclusions The completeness of reporting of only nine of 101 health research reporting guidelines (excluding CONSORT) has been evaluated in relation to journals’ endorsement. Items from seven reporting guidelines were quantitatively analyzed, by few evaluations each. Insufficient evidence exists to determine the relation between journals’ endorsement of reporting guidelines and the completeness of reporting of published health research reports. Journal editors and researchers should consider collaborative prospectively designed, controlled studies to provide more robust evidence. Systematic review registration Not registered; no known register currently accepts protocols for methodology systematic reviews.


Systematic Reviews | 2012

Effectiveness of brief interventions as part of the screening, brief intervention and referral to treatment (SBIRT) model for reducing the non-medical use of psychoactive substances: a systematic review protocol

Matthew M. Young; Adrienne Stevens; Amy J. Porath-Waller; Tyler Pirie; Chantelle Garritty; Becky Skidmore; Lucy Turner; Cheryl Arratoon; Nancy Haley; Karen Leslie; Rhoda Reardon; Beth Sproule; Jeremy Grimshaw; David Moher

Background: There is a significant public health burden associated with substance use in Canada. The early detection and/or treatment of risky substance use has the potential to dramatically improve outcomes for those who experience harms from the non-medical use of psychoactive substances, particularly adolescents whose brains are still undergoing development. The Screening, Brief Intervention, and Referral to Treatment model is a comprehensive, integrated approach for the delivery of early intervention and treatment services for individuals experiencing substance use-related harms, as well as those who are at risk of experiencing such harm.

Methods: This article describes the protocol for a systematic review of the effectiveness of brief interventions as part of the Screening, Brief Intervention, and Referral to Treatment model for reducing the non-medical use of psychoactive substances. Studies will be selected in which brief interventions target non-medical psychoactive substance use (excluding alcohol, nicotine, or caffeine) among those 12 years and older who are opportunistically screened and deemed at risk of harms related to psychoactive substance use. We will include one-on-one verbal interventions and exclude non-verbal brief interventions (for example, the provision of information such as a pamphlet or online interventions) and group interventions. Primary, secondary and adverse outcomes of interest are prespecified. Randomized controlled trials will be included; non-randomized controlled trials, controlled before-after studies and interrupted time series designs will be considered in the absence of randomized controlled trials. We will search several bibliographic databases (for example, MEDLINE, EMBASE, CINAHL, PsycINFO, CORK) and search sources for grey literature. We will meta-analyze studies where possible. We will conduct subgroup analyses, if possible, according to drug class and intervention setting.

Discussion: This review will provide evidence on the effectiveness of brief interventions as part of the Screening, Brief Intervention, and Referral to Treatment protocol aimed at the non-medical use of psychoactive substances and may provide guidance as to where future research might be most beneficial.


Systematic Reviews | 2015

Rapid review programs to support health care and policy decision making: a descriptive analysis of processes and methods

Julie Polisena; Chantelle Garritty; Chris Kamel; Adrienne Stevens; Ahmed M Abou-Setta

Background: Health care decision makers often need to make decisions in limited timeframes and cannot await the completion of a full evidence review. Rapid reviews (RRs), utilizing streamlined systematic review methods, are increasingly being used to synthesize the evidence with a shorter turnaround time. Our primary objective was to describe the processes and methods used internationally to produce RRs. In addition, we sought to understand the underlying themes associated with these programs.

Methods: We contacted representatives of international RR programs from a broad range of health care fields to gather information about the methods and processes used to produce RRs. The responses were summarized narratively to understand the characteristics associated with their processes and methods. The summaries were compared and contrasted to highlight potential themes and trends related to the different RR programs.

Results: Twenty-nine international RR programs were included in our sample, with broad organizational representation from academia, government, research institutions, and not-for-profit organizations. Responses revealed that the main objectives for RRs were to inform decision making with regard to funding health care technologies, services and policy, and program development. Central themes influencing the methods used by RR programs, as well as report type and dissemination, were the imposed turnaround time to complete a report, the resources available, the complexity and sensitivity of the research topics, and permission from the requestor.

Conclusions: Our study confirmed that there is no standard approach to conducting RRs. Differences in processes and methods across programs may be the result of the novelty of RR methods versus other types of evidence syntheses, customization of RRs for various decision makers, and each organization's definition of 'rapid', since that definition impacts both the timelines and the evidence synthesis methods. Future research should investigate the impact of current RR methods and reporting to support informed health care decision making, the effects of potential biases that may be introduced with streamlined methods, and the effectiveness of RR reporting guidelines on transparency.


JAMA | 2018

Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement

Matthew D. F. McInnes; David Moher; Brett D. Thombs; Trevor A. McGrath; Patrick M. Bossuyt; Tammy Clifford; Jérémie F. Cohen; Jonathan J Deeks; Constantine Gatsonis; Lotty Hooft; Harriet Hunt; Chris Hyde; Daniël A. Korevaar; Mariska M.G. Leeflang; Petra Macaskill; Johannes B. Reitsma; Rachel Rodin; Anne Ws Rutjes; Jean Paul Salameh; Adrienne Stevens; Yemisi Takwoingi; Marcello Tonelli; Laura Weeks; Penny F Whiting; Brian H. Willis

Importance Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. Objective To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Design Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. Findings The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. 
To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. Conclusions and Relevance The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.


Academic Emergency Medicine | 2015

Effectiveness and Safety of Short‐stay Units in the Emergency Department: A Systematic Review

James Galipeau; Kusala Pussegoda; Adrienne Stevens; Jamie C. Brehaut; Janet Curran; Alan J. Forster; Michael Tierney; Edmund S.H. Kwok; James Worthington; Samuel G. Campbell; David Moher

OBJECTIVES Overcrowding is a serious and ongoing challenge in Canadian hospital emergency departments (EDs) that has been shown to have negative consequences for patient outcomes. The American College of Emergency Physicians recommends observation/short-stay units as a possible solution to alleviate this problem. However, the most recent systematic review assessing short-stay units shows that there is limited synthesized evidence to support this recommendation; it is over a decade old and has important methodologic limitations. The aim of this study was to conduct a more methodologically rigorous systematic review to update the evidence on the effectiveness and safety of short-stay units, compared with usual care, on hospital and patient outcomes. METHODS A literature search was conducted using MEDLINE, the Cochrane Library, Embase, ABI/INFORM, and EconLit databases and gray literature sources. Randomized controlled trials in which ED short-stay units (stays of 72 hours or less) were compared with usual care (i.e., care not provided in a short-stay unit) for adult patients were included. Risk-of-bias assessments were conducted. Important decision-making (gradable) outcomes were patient outcomes, quality of care, utilization of and access to services, resource use, health system-related outcomes, economic outcomes, and adverse events. RESULTS Ten reports of five studies were included, all of which compared short-stay units with inpatient care. Studies had small sample sizes and were collectively at a moderate risk of bias. Most outcomes were reported by only one study; the remaining outcomes were reported by two to four studies. No deaths were reported. Three of the four included studies reporting length of stay found a significant reduction among short-stay unit patients, and one of the two studies reporting readmission rates found a significantly lower rate for short-stay unit patients.
All four economic evaluations indicated that short-stay units were a cost-saving intervention compared to inpatient care from both hospital and health care system perspectives. Results were mixed for outcomes related to quality of care and patient satisfaction. CONCLUSIONS Insufficient evidence exists to make conclusions regarding the effectiveness and safety of short-stay units, compared with inpatient care.


PLOS ONE | 2015

Effectiveness of Personal Protective Equipment for Healthcare Workers Caring for Patients with Filovirus Disease: A Rapid Review.

Mona Hersi; Adrienne Stevens; Pauline Quach; Candyce Hamel; Kednapa Thavorn; Chantelle Garritty; Becky Skidmore; Constanza Vallenas; Susan L. Norris; Matthias Egger; Sergey Eremin; Mauricio Ferri; Nahoko Shindo; David Moher

Background A rapid review, guided by a protocol, was conducted to inform development of the World Health Organization’s guideline on personal protective equipment (PPE) in the context of the ongoing (2013–present) Western African filovirus disease outbreak, with a focus on health care workers directly caring for patients with Ebola or Marburg virus diseases. Methods Electronic databases and grey literature sources were searched. Eligibility criteria initially included comparative studies on Ebola and Marburg virus diseases reported in English or French, but the criteria were expanded to studies on other viral hemorrhagic fevers and to non-comparative designs because of the paucity of studies. After title and abstract screening (two reviewers required to exclude), full-text reports of potentially relevant articles were assessed in duplicate. Fifty-seven percent of extracted information was verified. The Grading of Recommendations Assessment, Development and Evaluation framework was used to inform the quality of evidence assessments. Results Thirty non-comparative studies (8 related to Ebola virus disease) were located, and 27 provided data on viral transmission. Reporting of personal protective equipment components and infection prevention and control protocols was generally poor. Conclusions Insufficient evidence exists to draw conclusions regarding the comparative effectiveness of various types of personal protective equipment. Additional research is urgently needed to determine the optimal PPE for health care workers caring for patients with filovirus disease.


Nature | 2017

Stop this waste of people, animals and money

David Moher; Larissa Shamseer; Kelly D. Cobey; Manoj M. Lalu; James Galipeau; Marc T. Avey; Nadera Ahmadzai; Mostafa Alabousi; Pauline Barbeau; Andrew Beck; Raymond Daniel; Robert Frank; Mona Ghannad; Candyce Hamel; Mona Hersi; Brian Hutton; Inga Isupov; Trevor A. McGrath; Matthew D. F. McInnes; Matthew J. Page; Misty Pratt; Kusala Pussegoda; Beverley Shea; Anubhav Srivastava; Adrienne Stevens; Kednapa Thavorn; Sasha van Katwyk; Roxanne Ward; Dianna Wolfe; Fatemeh Yazdi

Predatory journals are easy to please. They seem to accept papers with little regard for quality, at a fraction of the cost charged by mainstream open-access journals. These supposedly scholarly publishing entities are murky operations, making money by collecting fees while failing to deliver on their claims of being open access and failing to provide services such as peer review and archiving. Despite abundant evidence that the bar is low, not much is known about who publishes in this shady realm, and what the papers are like. Common wisdom assumes that the hazard of predatory publishing is restricted mainly to the developing world. In one famous sting, a journalist for Science sent a purposely flawed paper to 140 presumed predatory titles (and to a roughly equal number of other open-access titles), pretending to be a biologist based in African capital cities. At least two earlier, smaller surveys found that most authors were in India or elsewhere in Asia. A campaign to warn scholars about predatory journals has concentrated its efforts in Africa, China, India, the Middle East and Russia. Frequent, aggressive solicitations from predatory publishers are generally considered merely a nuisance for scientists from rich countries, not a threat to scholarly integrity.

Our evidence disputes this view. We spent 12 months rigorously characterizing nearly 2,000 biomedical articles from more than 200 journals thought likely to be predatory. More than half of the corresponding authors hailed from high- and upper-middle-income countries as defined by the World Bank. Of the 17% of sampled articles that reported a funding source, the most frequently named funder was the US National Institutes of Health (NIH). The United States produced more articles in our sample than all other countries save India. Harvard University (with 9 articles) in Cambridge, Massachusetts, and the University of Texas (with …


Systematic Reviews | 2012

Does journal endorsement of reporting guidelines influence the completeness of reporting of health research? A systematic review protocol

Larissa Shamseer; Adrienne Stevens; Becky Skidmore; Lucy Turner; Douglas G. Altman; Allison Hirst; John Hoey; Anita Palepu; Iveta Simera; Kenneth F. Schulz; David Moher

Background: Reporting of health research is often inadequate and incomplete. Complete and transparent reporting is imperative to enable readers to assess the validity of research findings for use in healthcare and policy decision-making. To this end, many guidelines, aimed at improving the quality of health research reports, have been developed for reporting a variety of research types. Despite these efforts, many reporting guidelines are underused. In order to increase their uptake, evidence of their effectiveness is important and will provide authors, peer reviewers and editors with an important resource for use and implementation of pertinent guidance. The objective of this study was to assess whether endorsement of reporting guidelines by journals influences the completeness of reporting of health studies.

Methods: Guidelines providing a minimum set of items to guide authors in reporting a specific type of research, developed with explicit methodology, and using a consensus process will be identified from an earlier systematic review and from the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network’s reporting guidelines library. MEDLINE, EMBASE, the Cochrane Methodology Register and Scopus will be searched for evaluations of those reporting guidelines; relevant evaluations from the recently conducted CONSORT systematic review will also be included. Single data extraction with 10% verification of study characteristics, 20% verification of outcomes and complete verification of aspects of study validity will be carried out. We will include evaluations of reporting guidelines that assess the completeness of reporting: (1) before and after journal endorsement, and/or (2) between endorsing and non-endorsing journals. For a given guideline, analyses will be conducted for individual items and for the total sum of items. When possible, standard, pooled effects with 99% confidence intervals using random effects models will be calculated.

Discussion: Evidence on which guidelines have been evaluated and which are associated with improved completeness of reporting is important for various stakeholders, including editors who consider which guidelines to endorse in their journal editorial policies.
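The pooling step planned in this protocol (random-effects models with 99% confidence intervals) can be sketched with the standard DerSimonian-Laird estimator. This is a generic illustration of that technique, not the authors' actual analysis code; the function name and the example inputs are hypothetical.

```python
import math

def random_effects_pool(effects, variances, z=2.5758):
    """DerSimonian-Laird random-effects meta-analysis.

    effects: per-study effect estimates (e.g. log odds ratios, hypothetical here)
    variances: per-study sampling variances
    z: normal quantile; 2.5758 gives a 99% confidence interval
    Returns (pooled effect, CI lower, CI upper, tau-squared).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    # Between-study variance, truncated at zero (needs at least two studies)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - z * se, pooled + z * se, tau2
```

If the effects were on the log scale (e.g. log odds ratios), the pooled value and CI bounds would be exponentiated before reporting.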


Systematic Reviews | 2017

Systematic review adherence to methodological or reporting quality

Kusala Pussegoda; Lucy Turner; Chantelle Garritty; Alain Mayhew; Becky Skidmore; Adrienne Stevens; Isabelle Boutron; Rafael Sarkis-Onofre; Lise M. Bjerre; Asbjørn Hróbjartsson; Douglas G. Altman; David Moher

Background: Guidelines for assessing the methodological and reporting quality of systematic reviews (SRs) were developed to contribute to implementing evidence-based health care and reducing research waste. As SRs assessing cohorts of SRs become more prevalent in the literature, and with the increased uptake of SR evidence for decision-making, the methodological quality and standard of reporting of SRs are of interest. The objective of this study was to evaluate SR adherence to the Quality of Reporting of Meta-analyses (QUOROM) and PRISMA reporting guidelines and to the A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Overview Quality Assessment Questionnaire (OQAQ) quality assessment tools, as evaluated in methodological overviews.

Methods: The Cochrane Library, MEDLINE®, and EMBASE® databases were searched from January 1990 to October 2014. Title and abstract screening and full-text screening were conducted independently by two reviewers. Reports assessing the quality or reporting of a cohort of SRs of interventions using PRISMA, QUOROM, OQAQ, or AMSTAR were included. All results are reported as frequencies and percentages of reports and SRs, respectively.

Results: Of the 20,765 independent records retrieved from electronic searching, 1189 reports were reviewed for eligibility at full text, of which 56 reports (5371 SRs in total) evaluating the PRISMA, QUOROM, AMSTAR, and/or OQAQ tools were included. Notable items include the following: of the SRs using PRISMA, over 85% (1532/1741) provided a rationale for the review and less than 6% (102/1741) provided protocol information. For reports using QUOROM, only 9% (40/449) of SRs provided a trial flow diagram; however, 90% (402/449) described the explicit clinical problem and review rationale in the introduction section. Of reports using AMSTAR, 30% (534/1794) used duplicate study selection and data extraction. Conversely, 80% (1439/1794) of SRs provided study characteristics of included studies. In terms of OQAQ, 37% (499/1367) of the SRs assessed risk of bias (validity) in the included studies, while 80% (1112/1387) reported the criteria for study selection.

Conclusions: Although reporting guidelines and quality assessment tools exist, the reporting and methodological quality of SRs are inconsistent. Mechanisms to improve adherence to established reporting guidelines and methodological assessment tools are needed to improve the quality of SRs.


Systematic Reviews | 2017

Identifying approaches for assessing methodological and reporting quality of systematic reviews: a descriptive study

Kusala Pussegoda; Lucy Turner; Chantelle Garritty; Alain Mayhew; Becky Skidmore; Adrienne Stevens; Isabelle Boutron; Rafael Sarkis-Onofre; Lise M. Bjerre; Asbjørn Hróbjartsson; Douglas G. Altman; David Moher

Background: The methodological quality and completeness of reporting of systematic reviews (SRs) are fundamental to the optimal implementation of evidence-based health care and the reduction of research waste. Methods exist to appraise SRs, yet little is known about how they are used in SRs or where there are potential gaps in research best-practice guidance materials. The aims of this study were to identify reports assessing the methodological quality (MQ) and/or reporting quality (RQ) of a cohort of SRs and to assess their number, general characteristics, and approaches to ‘quality’ assessment over time.

Methods: The Cochrane Library, MEDLINE®, and EMBASE® were searched from January 1990 to October 16, 2014, for reports assessing the MQ and/or RQ of SRs. Title, abstract, and full-text screening of all reports were conducted independently by two reviewers. Reports assessing the MQ and/or RQ of a cohort of ten or more SRs of interventions were included. All results are reported as frequencies and percentages of reports.

Results: Of 20,765 unique records retrieved, 1189 were reviewed at full text, of which 76 reports were included. Eight previously published approaches to assessing MQ, or reporting guidelines used as a proxy to assess RQ, were used in 80% (61/76) of identified reports. These included two reporting guidelines (PRISMA and QUOROM), five quality assessment tools (AMSTAR, R-AMSTAR, OQAQ, Mulrow, Sacks), and the GRADE criteria. The remaining 24% (18/76) of reports developed their own criteria. PRISMA, OQAQ, and AMSTAR were the most commonly used published tools to assess MQ or RQ. In conjunction with other approaches, published tools were used in 29% (22/76) of reports, with 36% (8/22) assessing adherence to both PRISMA and AMSTAR criteria and 26% (6/22) using QUOROM and OQAQ.

Conclusions: The methods used to assess the quality of SRs are diverse, and none has become universally accepted. The most commonly used quality assessment tools are AMSTAR, OQAQ, and PRISMA. As new tools and guidelines are developed to improve both the MQ and RQ of SRs, authors of methodological studies are encouraged to put thoughtful consideration into the use of appropriate tools to assess quality and reporting.

Collaboration

Top co-authors of Adrienne Stevens, with their primary affiliations:

David Moher, Ottawa Hospital Research Institute
Chantelle Garritty, Ottawa Hospital Research Institute
Brian Hutton, Ottawa Hospital Research Institute
Kusala Pussegoda, Ottawa Hospital Research Institute
Mohammed T Ansari, Ottawa Hospital Research Institute
Susan L. Norris, World Health Organization
Candyce Hamel, Ottawa Hospital Research Institute