
Publications


Featured research published by Drummond Rennie.


The Lancet | 1999

Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement

David Moher; Deborah J. Cook; Susan Eastwood; Ingram Olkin; Drummond Rennie; Donna F. Stroup

BACKGROUND: The Quality of Reporting of Meta-analyses (QUOROM) conference was convened to address standards for improving the quality of reporting of meta-analyses of clinical randomised controlled trials (RCTs). METHODS: The QUOROM group consisted of 30 clinical epidemiologists, clinicians, statisticians, editors, and researchers. In conference, the group was asked to identify items they thought should be included in a checklist of standards. Whenever possible, checklist items were guided by research evidence suggesting that failure to adhere to the item proposed could lead to biased results. A modified Delphi technique was used in assessing candidate items. FINDINGS: The conference resulted in the QUOROM statement, a checklist, and a flow diagram. The checklist describes our preferred way to present the abstract, introduction, methods, results, and discussion sections of a report of a meta-analysis. It is organised into 21 headings and subheadings regarding searches, selection, validity assessment, data abstraction, study characteristics, and quantitative data synthesis, and in the results with "trial flow", study characteristics, and quantitative data synthesis; research documentation was identified for eight of the 18 items. The flow diagram provides information about both the numbers of RCTs identified, included, and excluded and the reasons for exclusion of trials. INTERPRETATION: We hope this report will generate further thought about ways to improve the quality of reports of meta-analyses of RCTs and that interested readers, reviewers, researchers, and editors will use the QUOROM statement and generate ideas for its improvement.
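The accounting a QUOROM-style flow diagram conveys can be made concrete with a minimal sketch; all counts and exclusion reasons below are invented for illustration and are not drawn from any study listed here:

```python
# Hypothetical sketch of the bookkeeping a QUOROM-style flow diagram
# reports: RCTs identified by the search, excluded (with reasons),
# and the remainder included in the meta-analysis. Counts are invented.
from collections import Counter

identified = 124  # reports retrieved by the literature search
exclusions = Counter({
    "not randomised": 41,
    "duplicate report": 18,
    "relevant outcome not reported": 22,
})
included = identified - sum(exclusions.values())  # 43 trials remain

print(f"RCTs identified: {identified}")
for reason, n in exclusions.most_common():
    print(f"  excluded ({reason}): {n}")
print(f"RCTs included in meta-analysis: {included}")
```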


Clinical Chemistry | 2003

The STARD Statement for Reporting Studies of Diagnostic Accuracy: Explanation and Elaboration

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; David Moher; Drummond Rennie; Henrica C.W. de Vet; Jeroen G. Lijmer

The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in the study and to evaluate the generalizability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding, and dissemination of the checklist. The document contains a clarification of the meaning, rationale, and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart, and this explanation and elaboration document should be useful resources to improve reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in health care.


Annals of Internal Medicine | 2003

The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration.

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; David Moher; Drummond Rennie; Henrica C.W. de Vet; Jeroen G. Lijmer

Introduction

In studies of diagnostic accuracy, results from one or more tests are compared with the results obtained with the reference standard on the same subjects. Such accuracy studies are a vital step in the evaluation of new and existing diagnostic technologies (1, 2). Several factors threaten the internal and external validity of a study of diagnostic accuracy (3-8). Some of these factors have to do with the design of such studies, others with the selection of patients, the execution of the tests, or the analysis of the data. In a study involving several meta-analyses, a number of design deficiencies were shown to be related to overly optimistic estimates of diagnostic accuracy (9). Exaggerated results from poorly designed studies can trigger premature adoption of diagnostic tests and can mislead physicians into making incorrect decisions about the care of individual patients. Reviewers and other readers of diagnostic studies must therefore be aware of the potential for bias and a possible lack of applicability. A survey of studies of diagnostic accuracy published in four major medical journals between 1978 and 1993 revealed that the methodological quality was mediocre at best (8). Furthermore, this review showed that information on key elements of design, conduct, and analysis of diagnostic studies was often not reported (8).

To improve the quality of reporting of studies of diagnostic accuracy, the Standards for Reporting of Diagnostic Accuracy (STARD) initiative was started. The objective of the STARD initiative is to improve the quality of reporting of studies of diagnostic accuracy. Complete and accurate reporting allows the reader to detect the potential for bias in the study and to judge the generalizability and applicability of the results. For this purpose, the STARD project group has developed a single-page checklist. Where possible, the decision to include items in the checklist was based on evidence linking these items to bias, variability in results, or limitations of the applicability of results to other settings. The checklist can be used to verify that all essential elements are included in the report of a study. This explanatory document aims to facilitate the use, understanding, and dissemination of the checklist. The document contains a clarification of the meaning, rationale, and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The first part of this document contains a summary of the design and terminology of diagnostic accuracy studies. The second part contains an item-by-item discussion with examples.

Studies of Diagnostic Accuracy

Studies of diagnostic accuracy have a common basic structure (10). One or more tests are evaluated, with the purpose of detecting or predicting a target condition. The target condition can refer to a particular disease, a disease stage, a health status, or any other identifiable condition within a patient, such as staging a disease already known to be present, or a health condition that should prompt clinical action, such as the initiation, modification, or termination of treatment. Here, "test" refers to any method for obtaining additional information on a patient's health status. This includes laboratory tests, imaging tests, function tests, pathology, history, and physical examination. In a diagnostic accuracy study, the test under evaluation, referred to here as the index test, is applied to a series of subjects.
The results obtained with the index test are compared with the results of the reference standard, obtained in the same subjects. In this framework, the reference standard is the best available method for establishing the presence or absence of the target condition. The reference standard can be a single test, or a combination of methods and techniques, including clinical follow-up of tested subjects. The term accuracy refers to the amount of agreement between the results from the index test and those from the reference standard. Diagnostic accuracy can be expressed in a number of ways, including sensitivity-specificity pairs, likelihood ratios, diagnostic odds ratios, and areas under ROC (receiver-operating characteristic) curves (11, 12).
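To make these measures concrete, here is a minimal sketch (not part of the STARD statement; the function name and counts are invented for illustration) computing the common summary measures from a standard 2x2 cross-tabulation of index-test results against the reference standard:

```python
# Minimal sketch: common diagnostic-accuracy measures from a 2x2 table
# cross-tabulating index-test results against the reference standard.
# Function name and counts are invented for illustration.

def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp/fn: target condition present, index test positive/negative;
    fp/tn: target condition absent, index test positive/negative."""
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "LR+": lr_pos,
        "LR-": lr_neg,
        "diagnostic_odds_ratio": lr_pos / lr_neg,
    }

# Invented counts: 90 TP, 30 FP, 10 FN, 170 TN.
print(accuracy_measures(tp=90, fp=30, fn=10, tn=170))
# sensitivity 0.90, specificity 0.85, LR+ 6.0, LR- ~0.12, DOR 51.0
```

The diagnostic odds ratio can equivalently be computed as (tp x tn) / (fp x fn), which also gives 51 for these counts.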
Study Question, Design, and Potential for Bias

Early in the evaluation of a test, the author may simply want to know if the test is able to discriminate. The appropriate early question may be: "Do the test results in patients with the target condition differ from the results in healthy people?" If preliminary studies answer this question affirmatively, the next study question is: "Are patients with specific test results more likely to have the target disorder than similar patients with other test results?" The usual study design to answer this is to apply the index test and the reference standard to a number of patients who are suspected of having the target condition. Some study designs are more prone to bias and have more limited applicability than others. In this article, the term bias refers to the difference between the observed measures of test performance and the true measures. No single design is guaranteed to be both feasible and able to provide valid, informative, and relevant answers with optimal precision to all study questions. For each study, the reader must judge the relevance, the potential for bias, and the limitations to applicability, making full and transparent reporting critical. For this reason, checklist items refer to the research question that prompted the study of diagnostic accuracy and ask for an explicit and complete description of the study design and results.

Variability

Measures of test accuracy may vary from study to study. Variability may reflect differences in patient groups, differences in setting, differences in the definition of the target condition, and differences in test protocols or in criteria for test positivity (13). For example, bias may occur if a test is evaluated under circumstances that do not correspond to those of the research question. Examples are evaluating a screening test for early disease in patients with advanced stages of the disease and evaluating a physician's office test device in the specialty department of a university hospital. The checklist contains a number of items to make sure that a study report contains a clear description of the inclusion criteria for patients, the testing protocols, and the criteria for positivity, as well as an adequate account of the subjects included in the study and their results. These items will enable readers to judge whether the study results apply to their circumstances.

Items in the Checklist

The next section contains a point-by-point discussion of the items on the checklist. The order of the items corresponds to the sequence used in many publications of diagnostic accuracy studies. Specific requirements made by journals could lead to a different order.

Item 1. Identify the Article as a Study of Diagnostic Accuracy (Recommend MeSH Heading "Sensitivity and Specificity")

Example (an excerpt from a structured abstract): "Purpose: To determine the sensitivity and specificity of computed tomographic colonography for colorectal polyp and cancer detection by using colonoscopy as the reference standard" (14).

Electronic databases have become indispensable tools for identifying studies. To facilitate retrieval of their study, authors should explicitly identify it as a report of a study of diagnostic accuracy. We recommend the use of the term "diagnostic accuracy" in the title or abstract of a report that compares the results of one or more index tests with the results of a reference standard. In 1991, the National Library of Medicine's MEDLINE database introduced a specific keyword (MeSH heading) for diagnostic studies: "Sensitivity and Specificity". Using this keyword to search for studies of diagnostic accuracy remains problematic (15-19). In a selected set of MEDLINE journals covering publications from 1992 through 1995, the use of the MeSH heading "Sensitivity and Specificity" identified only 51% of all studies of diagnostic accuracy and incorrectly identified many articles that were not reports of studies of diagnostic accuracy (18). In the example, the authors used the more general term "Performance Characteristics of CT Colonography" in the title. The purpose section of the structured abstract explicitly mentions sensitivity and specificity. The MEDLINE record for this paper contains the MeSH heading "Sensitivity and Specificity".

Item 2. State the Research Questions or Study Aims, Such as Estimating Diagnostic Accuracy or Comparing Accuracy between Tests or across Participant Groups

Example: "Invasive x-ray coronary angiography remains the gold standard for the identification of clinically significant coronary artery disease ... A noninvasive test would be desirable. Coronary magnetic resonance angiography performed while the patient is breathing freely has reached sufficient technical maturity to allow more widespread application with a standardized protocol. Therefore, we conducted a study to determine the [accuracy] of coronary magnetic resonance angiography in the diagnosis of native-vessel coronary artery disease" (20).

The Helsinki Declaration states that biomedical research involving people should be based on a thorough knowledge of the scientific literature (21). In the introduction of scientific reports, authors describe the scientific background, previous work on the subject, the remaining uncertainty, and, hence, the rationale for their study. Clearly specified research questions help readers to judge the appropriateness of the study design and data analysis. A single general description, such as "diagnostic value" or "clinical usefulness", is usually not very helpful to readers. In the example, the authors use the introduction section of their paper to describe the potential of coronary magnetic resonance angiography as a non-invasive alternative to conventional x-ray angiography in the diagnosis of coronary artery disease.


Onkologie | 2000

Improving the Quality of Reports of Meta-Analyses of Randomised Controlled Trials: The QUOROM Statement

David Moher; Deborah J. Cook; S. Eastwood; Ingram Olkin; Drummond Rennie; Donna F. Stroup

Background: The Quality of Reporting of Meta-analyses (QUOROM) conference was convened to address standards for improving the quality of reporting of meta-analyses of clinical randomised controlled trials (RCTs). Methods: The QUOROM group consisted of 30 clinical epidemiologists, clinicians, statisticians, editors, and researchers. In conference, the group was asked to identify items they thought should be included in a checklist of standards. Whenever possible, checklist items were guided by research evidence suggesting that failure to adhere to the item proposed could lead to biased results. A modified Delphi technique was used in assessing candidate items. Findings: The conference resulted in the QUOROM statement, a checklist, and a flow diagram. The checklist describes our preferred way to present the abstract, introduction, methods, results, and discussion sections of a report of a meta-analysis. It is organised into 21 headings and subheadings regarding searches, selection, validity assessment, data abstraction, study characteristics, and quantitative data synthesis, and in the results with «trial flow», study characteristics, and quantitative data synthesis; research documentation was identified for eight of the 18 items. The flow diagram provides information about both the numbers of RCTs identified, included, and excluded and the reasons for exclusion of trials. Interpretation: We hope this report will generate further thought about ways to improve the quality of reports of meta-analyses of RCTs and that interested readers, reviewers, researchers, and editors will use the QUOROM statement and generate ideas for its improvement.


Clinical Chemistry and Laboratory Medicine | 2003

Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative.

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; Jeroen G. Lijmer; David Moher; Drummond Rennie; Henrica C.W. de Vet

OBJECTIVE: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy, to allow readers to assess the potential for bias in the study and to evaluate its generalisability. METHODS: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a two-day consensus meeting, with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. RESULTS: The search for published guidelines regarding diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. CONCLUSIONS: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.


Academic Radiology | 2003

Toward Complete and Accurate Reporting of Studies of Diagnostic Accuracy: The STARD Initiative

Patrick M. Bossuyt; Johannes B. Reitsma; David E. Bruns; Constantine Gatsonis; Paul Glasziou; Les Irwig; Jeroen G. Lijmer; David Moher; Drummond Rennie; Henrica C.W. de Vet

RATIONALE AND OBJECTIVES: To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, and analysis of such studies. The authors sought to develop guidelines for improving the accuracy and completeness of reporting of studies of diagnostic accuracy, in order to allow readers to better assess the validity and generalizability of study results. MATERIALS AND METHODS: The Standards for Reporting of Diagnostic Accuracy group steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and to extract potential guidelines for authors and editors. An extensive list of items was prepared. Members of the steering committee then met for 2 days with other researchers, editors, methodologists, statisticians, and members of professional organizations to develop a checklist and a prototypical flowchart to guide authors and editors of studies of diagnostic accuracy. RESULTS: The search for published guidelines on diagnostic research yielded 33 previously published checklists, from which the group produced an initial list of 75 items. This list was honed to 25 key items by group consensus and on the basis of published research on bias. A prototypical flowchart was developed as a tool for conveying information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference test, or both. Potential users reviewed the conference version of the checklist and flowchart and provided additional suggestions, which were then incorporated. CONCLUSION: Use of these carefully developed, consensus-based guidelines should enable clearer and more complete reporting of studies of diagnostic accuracy, as well as better reader understanding of the validity and generalizability of study results.


JAMA | 2000

Meta-analysis of Observational Studies in Epidemiology: A Proposal for Reporting

Donna F. Stroup; Jesse A. Berlin; Sally C. Morton; Ingram Olkin; G. David Williamson; Drummond Rennie; David Moher; Betsy Jane Becker; Theresa Ann Sipe; Stephen B. Thacker


Archive | 2000

Meta-analysis of Observational Studies in Epidemiology

Donna F. Stroup; Jesse A. Berlin; Sally C. Morton; Ingram Olkin; G. David Williamson; Drummond Rennie; David Moher; Betsy Jane Becker; Theresa Ann Sipe; Stephen B. Thacker


JAMA | 1997

When Authorship Fails: A Proposal to Make Contributors Accountable

Drummond Rennie; Veronica Yank; Linda L. Emanuel


JAMA | 2003

Registering Clinical Trials

Kay Dickersin; Drummond Rennie

Collaboration


Dive into Drummond Rennie's collaboration.

Top Co-Authors


David Moher

Ottawa Hospital Research Institute

Donna F. Stroup

Centers for Disease Control and Prevention
