
Publication


Featured research published by Trevor A. McGrath.


JAMA | 2018

Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement

Matthew D. F. McInnes; David Moher; Brett D. Thombs; Trevor A. McGrath; Patrick M. Bossuyt; Tammy Clifford; Jérémie F. Cohen; Jonathan J Deeks; Constantine Gatsonis; Lotty Hooft; Harriet Hunt; Chris Hyde; Daniël A. Korevaar; Mariska M.G. Leeflang; Petra Macaskill; Johannes B. Reitsma; Rachel Rodin; Anne Ws Rutjes; Jean Paul Salameh; Adrienne Stevens; Yemisi Takwoingi; Marcello Tonelli; Laura Weeks; Penny F Whiting; Brian H. Willis

Importance: Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy.

Objective: To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews.

Design: Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group.

Findings: The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted.

Conclusions and Relevance: The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.


Radiology | 2016

Meta-Analyses of Diagnostic Accuracy in Imaging Journals: Analysis of Pooling Techniques and Their Effect on Summary Estimates of Diagnostic Accuracy.

Trevor A. McGrath; Matthew D. F. McInnes; Daniël A. Korevaar; Patrick M. Bossuyt

Purpose: To determine whether authors of systematic reviews of diagnostic accuracy studies published in imaging journals used recommended methods for meta-analysis, and to evaluate the effect of traditional methods on summary estimates of sensitivity and specificity.

Materials and Methods: Medline was searched for published systematic reviews that included meta-analysis of test accuracy data, limited to imaging journals, published from January 2005 to May 2015. Two reviewers independently extracted study data and classified methods for meta-analysis as traditional (univariate fixed- or random-effects pooling or summary receiver operating characteristic curve) or recommended (bivariate model or hierarchic summary receiver operating characteristic curve). Use of methods was analyzed for variation with time, geographical location, subspecialty, and journal. Results from reviews in which study authors used traditional univariate pooling methods were recalculated with a bivariate model.

Results: Three hundred reviews met the inclusion criteria, and in 118 (39%) of those, authors used recommended meta-analysis methods. No change in the method used was observed with time (r = 0.54, P = .09); however, there was geographic (χ² = 15.7, P = .001), subspecialty (χ² = 46.7, P < .001), and journal (χ² = 27.6, P < .001) heterogeneity. Fifty-one univariate random-effects meta-analyses were reanalyzed with the bivariate model; the average change in the summary estimate was −1.4% (P < .001) for sensitivity and −2.5% (P < .001) for specificity. The average change in the width of the confidence interval was 7.7% (P < .001) for sensitivity and 9.9% (P ≤ .001) for specificity.

Conclusion: Recommended methods for meta-analysis of diagnostic accuracy in imaging journals are used in a minority of reviews; this has not changed significantly with time. Traditional (univariate) methods allow overestimation of diagnostic accuracy and provide narrower confidence intervals than do recommended (bivariate) methods. © RSNA, 2016. Online supplemental material is available for this article.
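
For readers unfamiliar with the distinction the study draws, the sketch below implements the "traditional" univariate random-effects pooling (DerSimonian-Laird on the logit scale) that the authors recalculated. It is a minimal illustration with hypothetical study counts, not the paper's code; the recommended bivariate model additionally estimates the correlation between logit-sensitivity and logit-specificity and is usually fit with dedicated meta-analysis software.

```python
import numpy as np

def dersimonian_laird_pool(tp, fn):
    """Univariate random-effects pooling of sensitivity on the logit
    scale (DerSimonian-Laird). Illustrative only: the paper recommends
    a *bivariate* model pooling sensitivity and specificity jointly."""
    tp = np.asarray(tp, float)
    fn = np.asarray(fn, float)
    # Per-study logit-sensitivity and within-study variance
    # (0.5 continuity correction guards against zero cells).
    sens = (tp + 0.5) / (tp + fn + 1.0)
    y = np.log(sens / (1 - sens))
    v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)

    # Fixed-effect weights and Cochran's Q statistic.
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)

    # DerSimonian-Laird estimate of between-study variance tau^2.
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # Random-effects pooled logit-sensitivity and its standard error.
    w_re = 1.0 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))

    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))

# Hypothetical true-positive / false-negative counts from five studies.
pooled, ci = dersimonian_laird_pool(tp=[45, 30, 88, 12, 60], fn=[5, 10, 12, 3, 15])
print(f"pooled sensitivity {pooled:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```

The same machinery applied to specificity separately is exactly the univariate approach the study found to narrow confidence intervals relative to joint bivariate modeling.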


Nature | 2017

Stop this waste of people, animals and money

David Moher; Larissa Shamseer; Kelly D. Cobey; Manoj M. Lalu; James Galipeau; Marc T. Avey; Nadera Ahmadzai; Mostafa Alabousi; Pauline Barbeau; Andrew Beck; Raymond Daniel; Robert Frank; Mona Ghannad; Candyce Hamel; Mona Hersi; Brian Hutton; Inga Isupov; Trevor A. McGrath; Matthew D. F. McInnes; Matthew J. Page; Misty Pratt; Kusala Pussegoda; Beverley Shea; Anubhav Srivastava; Adrienne Stevens; Kednapa Thavorn; Sasha van Katwyk; Roxanne Ward; Dianna Wolfe; Fatemeh Yazdi

Predatory journals are easy to please. They seem to accept papers with little regard for quality, at a fraction of the cost charged by mainstream open-access journals. These supposedly scholarly publishing entities are murky operations, making money by collecting fees while failing to deliver on their claims of being open access and failing to provide services such as peer review and archiving. Despite abundant evidence that the bar is low, not much is known about who publishes in this shady realm, and what the papers are like.

Common wisdom assumes that the hazard of predatory publishing is restricted mainly to the developing world. In one famous sting, a journalist for Science sent a purposely flawed paper to 140 presumed predatory titles (and to a roughly equal number of other open-access titles), pretending to be a biologist based in African capital cities. At least two earlier, smaller surveys found that most authors were in India or elsewhere in Asia. A campaign to warn scholars about predatory journals has concentrated its efforts in Africa, China, India, the Middle East and Russia. Frequent, aggressive solicitations from predatory publishers are generally considered merely a nuisance for scientists from rich countries, not a threat to scholarly integrity.

Our evidence disputes this view. We spent 12 months rigorously characterizing nearly 2,000 biomedical articles from more than 200 journals thought likely to be predatory. More than half of the corresponding authors hailed from high- and upper-middle-income countries as defined by the World Bank. Of the 17% of sampled articles that reported a funding source, the most frequently named funder was the US National Institutes of Health (NIH). The United States produced more articles in our sample than all other countries save India. Harvard University (with 9 articles) in Cambridge, Massachusetts, and the University of Texas (with …


Systematic Reviews | 2017

Recommendations for reporting of systematic reviews and meta-analyses of diagnostic test accuracy: a systematic review

Trevor A. McGrath; Mostafa Alabousi; Becky Skidmore; Daniël A. Korevaar; Patrick M. Bossuyt; David Moher; Brett D. Thombs; Matthew D. F. McInnes

Background: The purpose of this study was to perform a systematic review of existing guidance on quality of reporting and methodology for systematic reviews of diagnostic test accuracy (DTA), in order to compile a list of potential items that might be included in a reporting guideline for such reviews: Preferred Reporting Items for Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy (PRISMA-DTA).

Methods: The study protocol was published on the EQUATOR website. Articles in full-text or abstract form that reported on any aspect of reporting systematic reviews of diagnostic test accuracy were eligible for inclusion. We used the Ovid platform to search Ovid MEDLINE®, Ovid MEDLINE® In-Process & Other Non-Indexed Citations, and Embase Classic+Embase through May 5, 2016. The Cochrane Methodology Register in the Cochrane Library (Wiley version) was also searched. Title and abstract screening, followed by full-text screening of all search results, was performed independently by two investigators. Guideline organization websites, published guidance statements, and the Cochrane Handbook for Diagnostic Test Accuracy were also searched. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Standards for Reporting Diagnostic Accuracy (STARD) were assessed independently by two investigators for relevant items.

Results: The literature search yielded 6967 results; 386 were included after title and abstract screening and 203 after full-text screening. After reviewing the existing literature and guidance documents, a preliminary list of 64 items was compiled into the following categories: title (three items), introduction (two items), methods (35 items), results (13 items), discussion (nine items), and disclosure (two items).

Conclusion: The items on the methods and reporting of DTA systematic reviews identified in the present systematic review will provide a basis for generating a PRISMA extension for DTA systematic reviews.


BJUI | 2018

Diagnostic accuracy of magnetic resonance imaging for tumour staging of bladder cancer: systematic review and meta-analysis

Niket Gandhi; Satheesh Krishna; Christopher M. Booth; Rodney H. Breau; Trevor A. Flood; Scott C. Morgan; Nicola Schieda; Jean-Paul Salameh; Trevor A. McGrath; Matthew D. F. McInnes

The purpose of this study was to evaluate the accuracy of magnetic resonance imaging (MRI) for local staging of bladder cancer across four clinical scenarios (T-stage thresholds), considered against current standards for clinical staging, and secondarily to identify sources of variability in accuracy. We performed a systematic review of patients with bladder cancer undergoing T-staging MRI and evaluated diagnostic accuracy using bivariate random-effects meta-analysis. Subgroup analysis was performed to explore variability; risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 tool. The search identified 30 studies (5156 patients). Pooled accuracy at multiple T-stage thresholds was as follows:
• ≤T1 vs ≥T2: sensitivity 87% (95% confidence interval [CI] 82–91), specificity 79% (95% CI 72–85);
• T-any vs T0: sensitivity 65% (95% CI 23–92), specificity 90% (95% CI 83–94);
• ≤T2 vs ≥T3: sensitivity 83% (95% CI 75–88), specificity 87% (95% CI 78–93); and …


Journal of Magnetic Resonance Imaging | 2018

Reporting of imaging diagnostic accuracy studies with focus on MRI subgroup: Adherence to STARD 2015

Patrick Jiho Hong; Daniël A. Korevaar; Trevor A. McGrath; Hedyeh Ziai; Robert Frank; Mostafa Alabousi; Patrick M. Bossuyt; Matthew D. F. McInnes

The purpose of this study was to evaluate the adherence of diagnostic accuracy studies in imaging journals to the STAndards for Reporting of Diagnostic accuracy studies (STARD) 2015. The secondary objective was to identify differences in reporting for magnetic resonance imaging (MRI) studies.


Radiology | 2017

Are Study and Journal Characteristics Reliable Indicators of "Truth" in Imaging Research?

Robert Frank; Matthew D. F. McInnes; Deborah Levine; Herbert Y. Kressel; Julia S. Jesurum; William Petrcich; Trevor A. McGrath; Patrick M. Bossuyt

Purpose: To evaluate whether journal-level variables (impact factor, cited half-life, and Standards for Reporting of Diagnostic Accuracy Studies [STARD] endorsement) and study-level variables (citation rate, timing of publication, and order of publication) are associated with the distance between primary study results and summary estimates from meta-analyses.

Materials and Methods: MEDLINE was searched for meta-analyses of imaging diagnostic accuracy studies published from January 2005 to April 2016. Data on journal-level and primary-study variables were extracted for each meta-analysis. Primary studies were dichotomized by variable as first versus subsequent publication, publication before versus after STARD introduction, STARD endorsement, or by median split. The mean absolute deviation of primary study estimates from the corresponding summary estimates for sensitivity and specificity was compared between groups. Means and confidence intervals were obtained by using bootstrap resampling; P values were calculated by using a t test.

Results: Ninety-eight meta-analyses summarizing 1458 primary studies met the inclusion criteria. There was substantial variability, but no significant differences, in deviations from the summary estimate between paired groups (P > .0041 in all comparisons). The largest difference found was in mean deviation for sensitivity, observed for publication timing: studies published first on a topic demonstrated a mean deviation 2.5 percentage points smaller than that of subsequently published studies (P = .005). For journal-level factors, the greatest difference found (1.8 percentage points; P = .088) was in mean deviation for sensitivity in journals with impact factors above the median compared with those below the median.

Conclusion: Journal- and study-level variables considered important when evaluating diagnostic accuracy information to guide clinical decisions are not systematically associated with distance from the truth; critical appraisal of individual articles is recommended.
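
A minimal sketch of the resampling procedure the abstract describes (bootstrap means and confidence intervals for the absolute deviation of primary-study estimates from a summary estimate); the deviations below are hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mean_ci(deviations, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap CI for the mean absolute deviation of
    primary-study estimates from the meta-analytic summary estimate."""
    x = np.abs(np.asarray(deviations, float))
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return x.mean(), (lo, hi)

# Hypothetical sensitivity deviations (percentage points) for one group.
devs = [2.1, -4.0, 7.5, -1.2, 3.3, -6.8, 0.9, 5.4]
mean, (lo, hi) = bootstrap_mean_ci(devs)
print(f"mean |deviation| = {mean:.2f} pp, 95% CI ({lo:.2f}, {hi:.2f})")
```

Running this per group and comparing the group means is the shape of the comparison reported; the study's t-test between groups would operate on those group-level quantities.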


Journal of Magnetic Resonance Imaging | 2018

Best practices for MRI systematic reviews and meta-analyses

Trevor A. McGrath; Patrick M. Bossuyt; Paul Cronin; Jean-Paul Salameh; Noémie Kraaijpoel; Nicola Schieda; Matthew D. F. McInnes

As defined by the Cochrane Collaboration, a systematic review is a review of evidence with a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant primary research, and to extract and analyze data from the studies that are included in the review. Meta‐analysis is a statistical method to combine the results from primary studies that accounts for sample size and variability to provide a summary measure of the studied outcome. Systematic reviews of diagnostic test accuracy present unique methodological and reporting challenges not present in systematic reviews of interventions. This review provides guidance and further resources highlighting current best practices in methodology and reporting of systematic reviews of diagnostic test accuracy, with a specific focus on challenges and opportunities for MRI.
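
The phrase "accounts for sample size and variability" corresponds, in the standard inverse-variance random-effects formulation (a conventional rendering, not quoted from the article itself), to weighting each study by the reciprocal of its total variance:

```latex
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \, \hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{s_i^{2} + \hat{\tau}^{2}}
```

where θ̂ᵢ is the effect estimate from study i, sᵢ² its within-study variance (which shrinks as sample size grows), and τ̂² the estimated between-study heterogeneity.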


European Radiology | 2018

Epidemiology of systematic reviews in imaging journals: evaluation of publication trends and sustainability?

Mostafa Alabousi; A. Alabousi; Trevor A. McGrath; Kelly D. Cobey; B. Budhram; Robert Frank; F. Nguyen; J. P. Salameh; A. Dehmoobad Sharifabadi; Matthew D. F. McInnes

Purpose: To evaluate the epidemiology of systematic reviews (SRs) published in imaging journals.

Methods: A MEDLINE search identified SRs published in imaging journals from 1 January 2000 to 31 December 2016. Retrieved articles were screened against inclusion criteria. Demographic and methodological characteristics were extracted from included studies. Temporal trends were evaluated using linear regression and Pearson's correlation coefficients.

Results: 921 SRs were included; these reported on 27,435 primary studies and 85,276,484 patients, and were cited 26,961 times. The SR publication rate increased 23-fold (r = 0.92, p < 0.001), while the proportion of SRs to non-SRs increased 13-fold (r = 0.94, p < 0.001) from 2000 (0.10%) to 2016 (1.33%). Diagnostic test accuracy (DTA) SRs were most frequent (46.5%), followed by therapeutic SRs (16.6%). Most SRs did not report funding status (54.2%). The median author team size was five; this increased over time (r = 0.20, p < 0.001). Of the included SRs, 67.3% had an imaging specialist co-author; this proportion decreased over time (r = −0.57, p = 0.017). Most SRs included a meta-analysis (69.6%). Journal impact factor was positively correlated with SR publication rates (r = 0.54, p < 0.001). Magnetic resonance imaging (MRI) and vascular and interventional radiology were the most frequently studied imaging modality and subspecialty, respectively. The USA, UK, China, the Netherlands and Canada were the top five publishing countries.

Conclusions: The SR publication rate is increasing rapidly compared with the rate of growth of non-SRs; however, SRs still make up just over 1% of all studies. Authors, reviewers and editors should be aware of methodological and reporting standards specific to imaging systematic reviews, including those for DTA and individual patient data.

Key Points:
• The systematic review publication rate increased 23-fold from 2000 to 2016.
• The proportion of systematic reviews to non-systematic reviews increased 13-fold.
• The USA, UK and China are the most frequent publishing countries; output from the USA and China is increasing most rapidly.
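
As a toy illustration of the trend analysis described (linear regression and Pearson correlation of annual counts against year); the counts below are invented, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical annual counts of SRs in imaging journals; the study
# itself reports a 23-fold rise from 2000 to 2016 with r = 0.92.
years = np.arange(2000, 2017)
counts = np.array([6, 7, 9, 12, 15, 19, 24, 30, 38, 47,
                   58, 71, 86, 103, 118, 130, 138])

r, p = stats.pearsonr(years, counts)               # temporal trend
slope, intercept, *_ = stats.linregress(years, counts)
print(f"Pearson r = {r:.2f} (p = {p:.3g}); ~{slope:.1f} more SRs/year")
```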


European Radiology | 2018

Reporting bias in imaging: higher accuracy is linked to faster publication

A. Dehmoobad Sharifabadi; D. A. Korevaar; Trevor A. McGrath; N. van Es; Robert Frank; L. Cherpak; W. Dang; J. P. Salameh; F. Nguyen; C. Stanley; Matthew D. F. McInnes

Objectives: The objective of this study was to evaluate whether higher reported accuracy estimates are associated with shorter time to publication among imaging diagnostic accuracy studies.

Methods: We included primary imaging diagnostic accuracy studies included in meta-analyses from systematic reviews published in 2015. For each primary study, we extracted accuracy estimates, participant recruitment periods and publication dates. Our primary outcome was the association between Youden's index (sensitivity + specificity − 1, a single measure of diagnostic accuracy) and time to publication.

Results: We included 55 systematic reviews and 781 primary studies. Study completion dates were missing for 238 (30%) studies. The median time from completion to publication in the remaining 543 studies was 20 months (IQR 14–29). Youden's index was negatively correlated with time from completion to publication (rho = −0.11, p = 0.009). This association remained significant in multivariable Cox regression analyses after adjusting for seven study characteristics: the hazard ratio of publication was 1.09 (95% CI 1.03–1.16, p = 0.004) per unit increase in logit-transformed estimates of Youden's index. When Youden's index was dichotomized by a median split, time from completion to publication was 20 months (IQR 13–33) for studies with a Youden's index below the median and 19 months (IQR 14–27) for studies with a Youden's index above the median (p = 0.104).

Conclusion: Higher accuracy estimates were weakly associated with shorter time to publication among imaging diagnostic accuracy studies.

Key points:
• Higher accuracy estimates are weakly associated with shorter time to publication.
• The lag in time to publication remained significant in multivariable Cox regression analyses.
• No correlation between accuracy and time from submission to publication was identified.
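
A minimal sketch of the primary outcome's ingredients, with hypothetical studies: Youden's index per study and its rank correlation with time to publication (the abstract's "rho" suggests a rank correlation; the full multivariable Cox model is beyond this sketch).

```python
import numpy as np
from scipy import stats

def youden_index(sensitivity, specificity):
    """Youden's J = sensitivity + specificity - 1, the single accuracy
    measure the study correlated with time to publication."""
    return np.asarray(sensitivity) + np.asarray(specificity) - 1.0

# Hypothetical primary studies: accuracy estimates and months from
# study completion to publication.
sens = np.array([0.92, 0.85, 0.78, 0.95, 0.70, 0.88])
spec = np.array([0.90, 0.80, 0.75, 0.93, 0.72, 0.84])
months = np.array([12, 22, 30, 10, 34, 18])

j = youden_index(sens, spec)
rho, p = stats.spearmanr(j, months)  # rank correlation with time lag
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

A negative rho, as the study reports, means studies with higher combined accuracy tended to reach publication sooner.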

Collaboration


Dive into Trevor A. McGrath's collaborations.

Top Co-Authors

David Moher

Ottawa Hospital Research Institute


Adrienne Stevens

Ottawa Hospital Research Institute
