Jared Scott
Oklahoma State University Center for Health Sciences
Publications
Featured research published by Jared Scott.
PLOS ONE | 2017
Benjamin M. Howard; Jared Scott; Mark Blubaugh; Brie Roepke; Caleb Scheckel; Matt Vassar
Background: Selective outcome reporting is a significant methodological concern. Comparisons between the outcomes reported in clinical trial registrations and those later published allow investigators to understand the extent of selection bias among trialists. We examined the possibility of selective outcome reporting in randomized controlled trials (RCTs) published in neurology journals.
Methods: We searched PubMed for randomized controlled trials published from Jan 1, 2010 to Dec 31, 2015 in the top three impact factor neurology journals. These articles were screened according to specific inclusion criteria. Each author individually extracted data from trials following a standardized protocol. A second author verified each extracted element, and discrepancies were resolved. Consistency between registered and published outcomes was evaluated, and correlations between discrepancies and funding, journal, and temporal trends were examined.
Results: 180 trials were included for analysis. 10 (6%) primary outcomes were demoted, 38 (21%) primary outcomes were omitted from the publication, and 61 (34%) unregistered primary outcomes were added to the published report. There were 18 (10%) cases of secondary outcomes being upgraded to primary outcomes in the publication, and there were 53 (29%) changes in timing of assessment. Of 82 (46%) major discrepancies with reported p-values, 54 (66%) favored publication of statistically significant results.
Conclusion: Across trials, we found 180 major discrepancies. 66% of major discrepancies with a reported p-value (n = 82) favored statistically significant results. These results suggest a need within neurology for more consistent and timely registration of outcomes.
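As a hedged illustration of the comparison step described above (not the authors' actual pipeline), the sketch below tallies discrepancy types between one trial's registered and published primary outcomes; the outcome names are invented.

```python
# Minimal sketch (hypothetical data): label the discrepancies that arise when a
# trial's registered and published primary outcomes are compared.
from collections import Counter

def primary_outcome_discrepancies(registered: set[str], published: set[str]) -> list[str]:
    """Label each omitted or newly added primary outcome for one trial."""
    labels = ["primary outcome omitted" for _ in registered - published]
    labels += ["unregistered primary outcome added" for _ in published - registered]
    return labels

# Toy trial: the registered outcome vanished and an unregistered one appeared.
labels = primary_outcome_discrepancies(
    registered={"relapse rate at 24 months"},
    published={"disability score at 24 months"},
)
print(Counter(labels))
# Counter({'primary outcome omitted': 1, 'unregistered primary outcome added': 1})
```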
American Journal of Emergency Medicine | 2017
Jared Scott; Benjamin McKinnley Howard; Philip Marcus Sinnett; Michael Schiesel; Jana Baker; Patrick Henderson; Matt Vassar
Background: The objective of this study was to assess the methodological quality and clarity of reporting of the systematic reviews (SRs) supporting clinical practice guideline (CPG) recommendations for the management of ST-elevation myocardial infarction (STEMI) across international CPGs.
Methods: We searched 13 guideline clearinghouses, including the National Guideline Clearinghouse and the Guidelines International Network (GIN). To meet inclusion criteria, CPGs had to be pertinent to the management of STEMI, endorsed by a governing body or national organization, and written in English. We retrieved SRs from the reference sections using a combination of keywords and hand searching. Two investigators scored eligible SRs using AMSTAR and PRISMA.
Results: We included four CPGs and extracted 71 unique SRs. These SRs received AMSTAR scores ranging from 1 (low) to 9 (high) on an 11-point scale. Across all CPGs, the cited SRs consistently underperformed on AMSTAR items including disclosure of funding sources, risk of bias, and publication bias. PRISMA checklist completeness ranged from 44% to 96%. The PRISMA scores indicated that SRs did not provide a full search strategy, a study protocol and registration, an assessment of publication bias, or a report of funding sources. Only one SR was referenced in all four CPGs. All CPGs omitted a large subset of available SRs cited by other guidelines.
Conclusions: Our study demonstrates the variable quality of the SRs used to establish recommendations within the guidelines in our sample. Although guideline developers have acknowledged this variability, it remains a significant finding that needs to be addressed further.
Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
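For intuition about the checklist arithmetic (e.g., completeness ranging from 44% to 96%), here is a small, assumption-laden sketch; the item names below are invented placeholders, not the real PRISMA checklist.

```python
# Illustrative only: checklist completeness as the share of items reported.
def completeness(items: dict[str, bool]) -> float:
    """Percentage of checklist items marked as reported."""
    return 100 * sum(items.values()) / len(items)

sr_items = {
    "full search strategy": False,
    "protocol and registration": False,
    "publication bias assessed": True,
    "funding sources reported": False,
    "risk of bias in studies": True,
}
print(f"{completeness(sr_items):.0f}% complete")  # 40% complete
```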
PeerJ | 2018
Daniel Tritz; Leomar Bautista; Jared Scott; Matt Vassar
Background: Material presented at conferences provides exposure to ongoing research that could affect medical decision making based on future outcomes. It is therefore important to evaluate the rate at which conference presentations reach publication, since published research has undergone peer review and journal acceptance. The purpose of this study was to evaluate the fate of abstracts presented at the Society of Skeletal Radiology (SSR) Annual Meetings from 2010–2015.
Materials and Methods: Conference abstracts were searched using Google, Google Scholar, and PubMed (which includes Medline) to locate the corresponding published reports. The data recorded for published studies included the date published online, in print, or both; the journal in which the study was published; and the 5-year journal impact factor. When an abstract was not confirmed as published, authors were contacted by email to verify its publication status or, if not published, the reason for nonpublication.
Results: A total of 162 of the 320 abstracts (50.6%) presented at the SSR conferences from 2010 to 2015 were published, with 59.9% (85/142) of publications occurring within two years of the conference date (not counting abstracts published prior to the conference). Mean time to publication was 19 months, calculated by excluding the 20 (12.3%) abstracts that were published prior to the conference date. Median time to publication was 13 months (25th–75th percentile: 6.25–21.75). The top two journals publishing research from this conference were Skeletal Radiology and The American Journal of Roentgenology, which together accepted 72 of the 162 (44.4%) published studies. Of the 14 authors who responded, giving 17 reasons for not publishing, the most common reasons were lack of time (7; 41.2%), results not important enough (4; 23.5%), publication not an aim (3; 17.6%), and lack of resources (3; 17.6%).
Discussion: At least half of the abstracts presented at the annual meeting of the Society of Skeletal Radiology are accepted for publication in a peer-reviewed journal, and the majority (59.9%) of these publications appeared within two years of the conference presentation. The rate at which presentations are published and the journals that accept them can potentially be used to compare the importance and quality of information presented at conferences.
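The publication-rate arithmetic above is straightforward; the sketch below reproduces it under the stated exclusion of pre-conference publications, with hypothetical per-abstract times.

```python
# Sketch of the reported arithmetic: publication rate plus time-to-publication
# summaries. Month values are hypothetical; pre-conference publications
# (negative months) are excluded, mirroring the abstract's method.
from statistics import mean, median

presented, published = 320, 162
print(f"publication rate: {published / presented:.1%}")  # 50.6%

months = [-3, 6, 9, 13, 21, 30]  # hypothetical; -3 = published before conference
post_conference = [m for m in months if m >= 0]
print(f"mean: {mean(post_conference):.1f} months, median: {median(post_conference)} months")
```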
PLOS ONE | 2018
Chase Meyer; Aaron Bowers; Cole Wayant; Jake X. Checketts; Jared Scott; Sanjeev Musuvathy; Matt Vassar
Background: Clinical practice guidelines contain recommendations that help physicians determine the most appropriate care for patients. These guidelines systematically combine scientific evidence and clinical judgment, culminating in recommendations intended to optimize patient care. The recommendations in CPGs are supported by evidence that varies in quality. We aimed to survey the clinical practice guidelines created by the American College of Gastroenterology, report the level of evidence supporting their recommendations, and identify areas where the evidence can be improved with additional research.
Methods: We extracted 1328 recommendations from 39 clinical practice guidelines published by the American College of Gastroenterology. Because several of the guidelines used differing classifications of evidence for their recommendations, we devised a uniform evidence classification to standardize our results.
Results: A total of 39 clinical practice guidelines, together accounting for 1328 recommendations, were surveyed in our study. 693 (52.2%) of the recommendations were based on low-level evidence, indicating poor-quality evidence or expert opinion. Among individual guidelines, 13/39 (33.3%) had no recommendations based on high-level evidence.
Conclusion: Very few recommendations made by the American College of Gastroenterology are supported by high levels of evidence. More than half of all its recommendations are based on low-quality evidence or expert opinion.
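The key methodological move here is mapping each guideline's own grading scheme onto a uniform one before tallying. A hedged sketch follows; the mapping is invented, not the authors' actual scheme.

```python
# Invented mapping: normalize heterogeneous evidence grades, then tally the
# share of recommendations resting on low-level evidence.
UNIFORM = {
    "high": "high", "1a": "high", "moderate": "moderate", "2b": "moderate",
    "low": "low", "very low": "low", "expert opinion": "low",
}

recommendations = ["1a", "Low", "Expert opinion", "very low", "Moderate"]
standardized = [UNIFORM[grade.lower()] for grade in recommendations]
low_share = standardized.count("low") / len(standardized)
print(f"{low_share:.1%} of recommendations rest on low-level evidence")  # 60.0%
```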
International Orthopaedics | 2018
Jared Scott; Jake Xavier Checketts; Jarryd Horn; Craig M. Cooper; Matt Vassar
Purpose: An estimated 85% of research is of limited value or wasted because the wrong research questions are addressed. We sought to identify research gaps using the American Academy of Orthopaedic Surgeons (AAOS) clinical practice guidelines "Treatment of Osteoarthritis of the Knee" and "Surgical Management of Osteoarthritis of the Knee". Using these recommendations, we conducted searches of ClinicalTrials.gov to discover the extent to which new and ongoing research addresses areas of deficiency.
Methods: For each recommendation in the AAOS guidelines, we created PICO (participants, intervention, comparator, outcomes) questions and search strings using a systematic process. We then searched ClinicalTrials.gov to locate relevant studies.
Results: Our searches of ClinicalTrials.gov returned 945 studies for surgical and 1416 for non-surgical management of osteoarthritis. Of the 945 studies returned by our search string for surgical trials, 186 (20%) were relevant to 30 (79%) of the 38 recommendations made within the surgical management guideline. Of the 1416 studies returned by our search for non-surgical trials, 360 (25%) were relevant to 16 (89%) of the 18 recommendations made within the conservative management guideline.
Conclusions: The development of clinical practice guidelines is a unique opportunity to simultaneously redefine day-to-day decision making and provide a critical analysis of the status of the literature. In searching the literature published since the guidelines' introduction, we found that some inconclusive areas have received more attention than others. Our results should guide researchers toward conducting research on the topics most in need and, in doing so, strengthen the clinical practice guideline recommendations.
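As a rough sketch of the search-string step (all terms hypothetical; the authors' actual strings were built through a systematic process), a PICO question can be concatenated into a registry query and the relevance proportion computed from the results.

```python
# Hypothetical terms: assemble a registry search string from PICO elements and
# compute the share of returned records judged relevant.
def build_search_string(population, intervention, comparator, outcome):
    parts = [population, intervention, comparator, outcome]
    return " AND ".join(f'"{p}"' for p in parts if p)

query = build_search_string(
    population="knee osteoarthritis",
    intervention="tranexamic acid",
    comparator=None,  # comparator may be omitted when unrestricted
    outcome="blood loss",
)
print(query)  # "knee osteoarthritis" AND "tranexamic acid" AND "blood loss"

returned, relevant = 945, 186
print(f"{relevant / returned:.0%} of returned studies were relevant")  # 20%
```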
Gynecologic Oncology Reports | 2018
Saba Imani; Gretchan Moore; Nathan Nelson; Jared Scott; Matt Vassar
Objective: This study aimed to determine the publication rate of oral and poster abstracts presented at the 2010 and 2011 Society of Gynecologic Oncology (SGO) conferences, as well as the journals that most commonly published these studies, their 5-year impact factors, the time to publication, and the reasons for nonpublication.
Methods: Abstracts presented at the 2010–2011 SGO conferences were included in this study. We searched Google, Google Scholar, and PubMed to locate published reports of these abstracts. If an abstract's full-text manuscript could not be located, an author of the conference abstract was contacted via email to ask whether the research was published. If the research was unpublished, the authors were asked to provide the reason for nonpublication. The time to publication, journal, and journal impact factor were noted for abstracts that reached full-text publication.
Results: A total of 725 abstracts were identified, of which 386 (53%) reached publication in a peer-reviewed journal. Oral presentations were published at a higher rate than poster presentations. Most (70%) reached publication within two years of abstract presentation. Abstracts were published in 89 journals, with the largest share (39%) appearing in Gynecologic Oncology. The mean time to publication was 15.7 months, with a mean 5-year impact factor of 4.956.
Conclusions: A 53% publication rate indicates that the SGO conference selection process favors research likely to be published and, thus, presumably of high quality. The overall publication rate is higher than that reported for many other biomedical conferences.
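A minimal sketch (invented records) of the journal tally behind the 39% figure: count journals across published abstracts and report the plurality share.

```python
# Invented records: tally publishing journals and report the plurality share,
# mirroring how one journal can account for the largest slice of publications.
from collections import Counter

journals = [
    "Gynecologic Oncology", "Gynecologic Oncology", "Gynecologic Oncology",
    "Obstetrics & Gynecology", "International Journal of Gynecological Cancer",
]
top_journal, n = Counter(journals).most_common(1)[0]
print(f"{top_journal}: {n / len(journals):.0%} of publications")  # 60%
```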
PM&R | 2017
Alan B. Tran; Jared Scott; Anna Mazur-Mosiewicz; Matthew Vassar; Julia H. Crawford
Disclosures: Alan B. Tran, MD: I have no relevant financial relationships to disclose.
Objective: The objective of the study was to explore the integrity of the research pipeline in traumatic brain injury research by evaluating the extent to which research gaps (identified from low and very low quality evidence during guideline development) are being addressed by new and ongoing research cataloged in clinical trial registries.
Design: Clinical practice guidelines were retrieved from the March 2013 Scottish Intercollegiate Guidelines Network guideline on brain injury rehabilitation in adults. Evidence underpinning recommendations (graded low/very low quality or with a high risk of bias) was extracted and screened. Next, we developed evidence-based research questions using the PICO framework (Patient/Problem/Population, Intervention, Comparator, Outcome). Search terms based on these PICO questions were developed in consultation with medical research librarians using a combination of Cochrane systematic reviews, Medline, and Embase. Using these search terms, we searched ClinicalTrials.gov and the World Health Organization's International Clinical Trials Registry Platform to identify new or ongoing studies in each area.
Setting: N/A.
Participants: N/A.
Interventions: N/A.
Main Outcome Measures: Frequency and percentage of new and ongoing studies listed in clinical trial registries identified from practice guidelines.
Results: Clear deficits were noted across most domains. Of the 48 areas identified, little evidence was found of new studies being informed by guideline development.
Conclusions: Our findings suggest a potential inefficiency in resource allocation for new and ongoing studies in traumatic brain injury rehabilitation. Improved connectivity between the guideline development process and study planning may result in a more efficient research enterprise.
Level of Evidence: Level III
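As a hedged sketch of the gap-flagging step (topic names and counts invented), a recommendation area with zero matching registry records would be flagged as an unaddressed gap.

```python
# Hypothetical counts: flag guideline-derived research questions with no new or
# ongoing registry studies as unaddressed gaps.
registry_hits = {
    "cognitive rehabilitation dosing": 4,
    "vocational rehabilitation timing": 0,
    "post-traumatic fatigue management": 0,
}
gaps = sorted(topic for topic, n in registry_hits.items() if n == 0)
print(f"{len(gaps)} of {len(registry_hits)} areas unaddressed: {gaps}")
```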
PLOS ONE | 2017
Matthew T. Sims; Byron N. Detweiler; Jared Scott; Benjamin McKinnley Howard; Grant Detten; Matt Vassar
Introduction: Recent evidence suggests a lack of standardization of shoulder arthroplasty outcomes, which is a limiting factor in systematic reviews. Core outcome set (COS) methodology could address this problem by delineating a minimum set of outcomes for measurement in all shoulder arthroplasty trials.
Methods: A ClinicalTrials.gov search yielded 114 results. Eligible trials were coded on the following characteristics: study status, study type, arthroplasty type, sample size, measured outcomes, outcome measurement device, specific metric of measurement, method of aggregation, outcome classification, and adverse events.
Results: Sixty-six trials underwent data abstraction and synthesis. Following abstraction, 383 shoulder arthroplasty outcomes were organized into 11 outcome domains. The most commonly reported outcomes were shoulder outcome score (n = 58), pain (n = 33), and quality of life (n = 15). The most common measurement devices were the Constant-Murley Shoulder Outcome Score (n = 38) and the American Shoulder and Elbow Surgeons Shoulder Score (n = 33). Temporal patterns of outcome use were also found.
Conclusion: Our study suggests the need for greater standardization of outcomes and instruments. The lack of consistency across trials indicates that developing a core outcome set for shoulder arthroplasty trials would be worthwhile. Such standardization would allow for more effective comparison across studies in systematic reviews while still considering important outcomes that might otherwise be underrepresented. This review of outcomes provides an evidence-based foundation for the development of a COS for shoulder arthroplasty.
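An illustrative sketch of the coding step (domain assignments invented): group extracted outcomes into domains and count how often each measurement instrument appears.

```python
# Invented (instrument, domain) pairs: tally outcome domains and instruments,
# the same frequency counts reported in the abstract above.
from collections import Counter

extracted = [
    ("Constant-Murley score", "shoulder outcome score"),
    ("ASES score", "shoulder outcome score"),
    ("VAS pain", "pain"),
    ("EQ-5D", "quality of life"),
    ("Constant-Murley score", "shoulder outcome score"),
]
domain_counts = Counter(domain for _, domain in extracted)
instrument_counts = Counter(instrument for instrument, _ in extracted)
print(domain_counts.most_common(1))      # [('shoulder outcome score', 3)]
print(instrument_counts.most_common(1))  # [('Constant-Murley score', 2)]
```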
Journal of Arthroplasty | 2017
Aaron Bowers; Jarryd Horn; Jared Scott; Matt Vassar
Transplantation | 2018
Kaleb Fuller; Chase Meyer; Jared Scott; B. Matt Vassar