Cole Wayant
Oklahoma State University Center for Health Sciences
Publications
Featured research published by Cole Wayant.
PLOS ONE | 2017
Cole Wayant; Caleb Scheckel; Chandler Hicks; Timothy Nissen; Linda Leduc; Mousumi Som; Matt Vassar
Introduction: Selective reporting bias occurs when chance or selective outcome reporting, rather than the intervention, contributes to group differences. The prevailing concern about selective reporting bias is the possibility of results being modified towards specific conclusions. In this study, we evaluate randomized controlled trials (RCTs) published in hematology journals, a group in which selective outcome reporting has not yet been explored.
Methods: Our primary goal was to examine discrepancies between the reported primary and secondary outcomes in registered and published RCTs concerning hematological malignancies reported in hematology journals with a high impact factor. The secondary goals were to address whether outcome reporting discrepancies favored statistically significant outcomes, whether a pattern existed between the funding source and the likelihood of outcome reporting bias, and whether temporal trends were present in outcome reporting bias. For trials with major outcome discrepancies, we contacted trialists to determine reasons for these discrepancies. Trials published between January 1, 2010 and December 31, 2015 in Blood; British Journal of Haematology; American Journal of Hematology; Leukemia; and Haematologica were included.
Results: Of 499 RCTs screened, 109 RCTs were included. Our analysis revealed 118 major discrepancies and 629 total discrepancies. Among the 118 major discrepancies, 30 (25.4%) primary outcomes were demoted, 47 (39.8%) primary outcomes were omitted, and 30 (25.4%) primary outcomes were added. Three (2.5%) secondary outcomes were upgraded to a primary outcome. The timing of assessment for a primary outcome changed eight (6.8%) times. Thirty-one major discrepancies were published with a P-value, and twenty-five (80.6%) of these favored statistical significance.
Conclusion: Our results suggest that outcome changes occur frequently in hematology trials. Because RCTs ultimately underpin clinical judgment and guide policy implementation, selective reporting could pose a threat to medical decision making.
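The percentages reported above are simple category counts over the 118 major discrepancies. A minimal Python sketch of that kind of tally is shown below; the category labels and counts are taken from the abstract, but the data structure itself is hypothetical and not the study's actual dataset.

```python
from collections import Counter

# Hypothetical per-discrepancy classification: in the study, each of the 118
# major discrepancies fell into one category such as these.
major_discrepancies = (
    ["primary outcome demoted"] * 30
    + ["primary outcome omitted"] * 47
    + ["primary outcome added"] * 30
    + ["secondary outcome upgraded to primary"] * 3
    + ["primary outcome timing changed"] * 8
)

counts = Counter(major_discrepancies)
total = len(major_discrepancies)  # 118

# Report each category as a count and a share of all major discrepancies.
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / total:.1%})")
```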
Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine | 2016
Matthew T. Sims; Nolan Michael Henning; Cole Wayant; Matt Vassar
Background: The aim of this study was to evaluate the current state of two publication practices, reporting guideline requirements and clinical trial registration requirements, by analyzing the “Instructions for Authors” of emergency medicine journals.
Methods: We performed a web-based data abstraction from the “Instructions for Authors” of the 27 emergency medicine journals catalogued in the Expanded Science Citation Index of the 2014 Journal Citation Reports and Google Scholar Metrics h5-index to identify whether each journal required, recommended, or made no mention of the following reporting guidelines: EQUATOR Network, ICMJE, ARRIVE, CARE, CONSORT, STARD, TRIPOD, CHEERS, MOOSE, STROBE, COREQ, SRQR, SQUIRE, PRISMA-P, SPIRIT, PRISMA, and QUOROM. We also extracted whether journals required or recommended trial registration. Authors were blinded to one another’s ratings until completion of the data validation. Cross-tabulations and descriptive statistics were calculated using IBM SPSS 22.
Results: Of the 27 emergency medicine journals, 11 (11/27, 40.7%) did not mention a single guideline within their “Instructions for Authors,” while the remaining 16 (16/27, 59.3%) mentioned one or more guidelines. The QUOROM statement and SRQR were not mentioned by any journal, whereas the ICMJE guidelines (18/27, 66.7%) and CONSORT statement (15/27, 55.6%) were mentioned most often. Of the 27 emergency medicine journals, 15 (15/27, 55.6%) did not mention trial or review registration, while the remaining 12 (12/27, 44.4%) mentioned at least one of the two. Trial registration through ClinicalTrials.gov was mentioned by seven (7/27, 25.9%) journals, while the WHO registry was mentioned by four (4/27, 14.8%). Twelve (12/27, 44.4%) journals mentioned trial registration through any registry platform.
Discussion: The aim of this study was to evaluate the current state of two publication practices, reporting guideline requirements and clinical trial registration requirements, by analyzing the “Instructions for Authors” of emergency medicine journals. In this study, no single reporting guideline was mentioned by more than half of the journals. This undermines efforts of other journals to improve the completeness and transparency of research reporting.
Conclusions: Reporting guidelines are infrequently required or recommended by emergency medicine journals. Furthermore, few require clinical trial registration. These two mechanisms may limit bias and should be considered for adoption by journal editors in emergency medicine.
Trial registration: UMIN000022486
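The journal-level results above are cross-tabulations of policy status (required, recommended, or not mentioned) per guideline across the 27 journals. The study performed this analysis in IBM SPSS 22; the pandas sketch below is only an analogue of that descriptive step, and every journal name and status in it is an invented placeholder rather than study data.

```python
import pandas as pd

# Hypothetical extraction records: one row per (journal, guideline) pair with
# the status found in that journal's "Instructions for Authors".
records = pd.DataFrame(
    [
        {"journal": "Journal A", "guideline": "CONSORT", "status": "required"},
        {"journal": "Journal A", "guideline": "PRISMA", "status": "recommended"},
        {"journal": "Journal B", "guideline": "CONSORT", "status": "not mentioned"},
        {"journal": "Journal B", "guideline": "PRISMA", "status": "not mentioned"},
        {"journal": "Journal C", "guideline": "CONSORT", "status": "recommended"},
        {"journal": "Journal C", "guideline": "PRISMA", "status": "required"},
    ]
)

# Cross-tabulation of guideline by status, analogous to the SPSS analysis.
print(pd.crosstab(records["guideline"], records["status"]))

# Per-guideline share of journals mentioning it at all (required or recommended),
# which is how figures like "15/27, 55.6%" are derived.
n_journals = records["journal"].nunique()
mentioned = (
    records[records["status"] != "not mentioned"]
    .groupby("guideline")["journal"]
    .nunique()
)
print((mentioned / n_journals).map("{:.1%}".format))
```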
Journal of Thrombosis and Haemostasis | 2017
Cole Wayant; Caleb Smith; Matt Thomas Sims; Matt Vassar
Essentials: Reporting guidelines and trial/review registration aim to limit bias in research. We systematically reviewed hematology journals to examine the use of these policies. Forty-eight percent of journals made no use of these policies. Improving the use of reporting guidelines will improve research for all stakeholders.
Clinical obesity | 2017
T. Nissen; Cole Wayant; A. Wahlstrom; P. Sinnett; C. Fugate; J. Herrington; Matt Vassar
Paediatric obesity rates remain high despite extensive efforts to prevent and treat obesity in children. We investigated the quality of the methodology and reporting within systematic reviews (SRs) underpinning paediatric content in US clinical practice guidelines (CPGs). In June 2016 we searched guideline clearinghouses and professional organization websites for guidelines published by national or professional organizations in the United States from January 2007 onwards. In our primary, a priori analysis, we used PRISMA (Preferred Reporting Items for Systematic Reviews and Meta‐Analyses) and AMSTAR (A Measurement Tool to Assess Systematic Reviews) instruments to score SRs and meta‐analyses that included paediatric populations and were cited by included CPGs. In a secondary, post hoc analysis, we determined the extent to which US CPGs use available, relevant SRs and meta‐analyses compared with non‐US CPGs. Eight US‐based CPGs with 27 references to 22 unique SRs were found. AMSTAR and PRISMA scores were low overall, with only three SRs having ‘high’ methodological quality. Items dealing with bias assessments and search strategies had especially low scores. US CPGs were also older on average and cited fewer SRs than their international counterparts. Low quality scores and dated guidelines should be a cause for concern among practicing clinicians and a call to action for future guideline developers, publishers and research institutions.
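Operationally, the AMSTAR appraisal described above reduces to counting affirmative responses across a fixed checklist (the original AMSTAR instrument has 11 items). Below is a minimal sketch under that assumption: the item labels are paraphrased, the responses are invented, and the quality banding used (8-11 high, 4-7 moderate, 0-3 low) is one convention seen in appraisal studies, not necessarily the cut-offs applied in this particular review.

```python
# Hypothetical AMSTAR-style appraisal for one systematic review: 11 items,
# each answered "yes", "no", "cant_answer", or "not_applicable".
amstar_responses = {
    "a_priori_design": "yes",
    "duplicate_selection_and_extraction": "no",
    "comprehensive_literature_search": "yes",
    "grey_literature_included": "cant_answer",
    "included_excluded_studies_listed": "no",
    "study_characteristics_provided": "yes",
    "quality_assessed_and_documented": "yes",
    "quality_used_in_conclusions": "no",
    "appropriate_synthesis_methods": "yes",
    "publication_bias_assessed": "no",
    "conflicts_of_interest_stated": "no",
}

# Score as the count of "yes" responses, a common scoring convention.
score = sum(1 for answer in amstar_responses.values() if answer == "yes")
print(f"AMSTAR score: {score}/11")

# Assumed banding for illustration only: 8-11 high, 4-7 moderate, 0-3 low.
quality = "high" if score >= 8 else "moderate" if score >= 4 else "low"
print(f"Methodological quality: {quality}")
```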
European Urology | 2018
Austin Carlisle; Aaron Bowers; Cole Wayant; Chase Meyer; Matt Vassar
BACKGROUND: Recent studies have highlighted the presence of disclosed and undisclosed financial conflicts of interest among authors of clinical practice guidelines.
OBJECTIVE: We sought to determine to what extent urology guideline authors receive and report industry payments in accordance with the Physician Payment Sunshine Act.
DESIGN, SETTING, AND PARTICIPANTS: We selected the 13 urology guidelines that were published by the American Urological Association (AUA) after disclosure was mandated by the Physician Payment Sunshine Act. Payments received by guideline authors were searched independently by two investigators using the Open Payments database.
OUTCOME MEASURES AND STATISTICAL ANALYSIS: Our primary outcome measure was the number of authors receiving payments from industry, stratified by amount thresholds. Our secondary outcome measure was the number of authors with accurate conflict of interest disclosure statements.
RESULTS AND LIMITATIONS: We identified a total of 54 author disclosures. Thirty-two authors (59.3%) received at least one payment from industry. Twenty (37.0%) received >$10 000 and six (11.1%) received >$50 000. Median total payments were $578 (interquartile range $0-19 228). Twenty (37.0%) disclosure statements were inaccurate. Via Dollars for Docs, we identified $74 195.13 paid for drugs and devices directly related to guideline recommendations. We were limited in our ability to determine when authors began working on guideline panels, as this information was not provided, and by the lack of specificity in Dollars for Docs.
CONCLUSIONS: Many of the AUA guideline authors received payments from industry, some in excess of
BMJ Evidence-Based Medicine | 2018
Matt Vassar; Michael Bibens; Cole Wayant
bioRxiv | 2018
Cole Wayant; Craig M. Cooper; D'Arcy Turner; Matt Vassar
PLOS ONE | 2018
Chase Meyer; Aaron Bowers; Cole Wayant; Jake X. Checketts; Jared Scott; Sanjeev Musuvathy; Matt Vassar
Journal of Clinical Gastroenterology and Hepatology | 2018
Chris Chapman; Benjamin M. Howard; Cole Wayant; Matt Vassar
Chest | 2018
Elizabeth Edwards; Cole Wayant; Jonathan Besas; Justin Chronister; Matt Vassar