Matt Vassar
Oklahoma State University–Stillwater
Publications
Featured research published by Matt Vassar.
Journal of Personality Assessment | 2008
Matt Vassar; James W. Crosby
Loneliness is a psychological construct that has been reported in a variety of populations and associated with a range of other negative psychological outcomes. This study examined coefficient alpha for a prominent measure of loneliness: the University of California, Los Angeles (UCLA) Loneliness Scale (Russell, Peplau, & Cutrona, 1980; Russell, 1996). We used reliability generalization to provide an aggregate estimate of the scale's reliability over time and across a variety of populations, and to identify sampling and demographic characteristics associated with variability in coefficient alpha. Of the 213 studies examined, 80 reported alpha estimates, and these were used in the analysis. We discuss conditions associated with variability in coefficient alpha, along with pertinent implications for practice and future research.
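Coefficient alpha itself is straightforward to compute from item-level scores. A minimal stdlib-only Python sketch (the function and any sample data are illustrative, not drawn from the study):

```python
def cronbach_alpha(items):
    """Cronbach's coefficient alpha.

    items: one list of respondent scores per scale item, all the same length.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Reliability generalization then aggregates such alpha estimates across many administrations of the scale and models their variability against sample and study characteristics.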
Journal of Interpersonal Violence | 2009
Matt Vassar; William Hale
Empirical research on anger and hostility has pervaded the academic literature for more than 50 years. Accurate measurement of anger/hostility and sound interpretation of results require instruments with strong psychometric properties. For consistent measurement, reliability estimates must be calculated with each administration, because changes in sample characteristics may alter the scale's ability to generate reliable scores. The present study was therefore designed to address reliability reporting practices for a widely used anger assessment, the Buss-Durkee Hostility Inventory (BDHI). Of the 250 published articles reviewed, 11.2% calculated and presented reliability estimates for the data at hand, 6.8% cited estimates from a previous study, and 77.1% made no mention of score reliability. Mean alpha estimates for BDHI subscale scores generally fell below acceptable standards. Additionally, no detectable pattern was found between reporting practices and publication year or journal prestige. Areas for future research are also discussed.
PLOS ONE | 2017
Benjamin M. Howard; Jared Scott; Mark Blubaugh; Brie Roepke; Caleb Scheckel; Matt Vassar
Background Selective outcome reporting is a significant methodological concern. Comparisons between the outcomes reported in clinical trial registrations and those later published allow investigators to understand the extent of selection bias among trialists. We examined the possibility of selective outcome reporting in randomized controlled trials (RCTs) published in neurology journals. Methods We searched PubMed for randomized controlled trials published from January 1, 2010 to December 31, 2015 in the three highest-impact-factor neurology journals. These articles were screened according to specific inclusion criteria. Each author individually extracted data from trials following a standardized protocol. A second author verified each extracted element, and discrepancies were resolved. Consistency between registered and published outcomes was evaluated, and correlations between discrepancies and funding, journal, and temporal trends were examined. Results 180 trials were included for analysis. 10 (6%) primary outcomes were demoted, 38 (21%) primary outcomes were omitted from the publication, and 61 (34%) unregistered primary outcomes were added to the published report. There were 18 (10%) cases of secondary outcomes being upgraded to primary outcomes in the publication, and 53 (29%) changes in timing of assessment. Of the 82 (46%) major discrepancies with reported p-values, 54 (66.0%) favored publication of statistically significant results. Conclusion Across trials, we found 180 major discrepancies. 66% of major discrepancies with a reported p-value (n = 82) favored statistically significant results. These results suggest a need within neurology for more consistent and timely registration of outcomes.
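The registered-versus-published comparison at the heart of this design amounts to a set comparison over outcome lists. The sketch below is an illustration of that logic, not the authors' actual extraction protocol; the function and category names are assumptions:

```python
def outcome_discrepancies(registered, published):
    """Classify discrepancies between a trial's registered and published
    outcomes. Both arguments map outcome name -> 'primary' or 'secondary'."""
    d = {"omitted_primary": [], "added_primary": [],
         "demoted": [], "upgraded": []}
    for name, level in registered.items():
        if name not in published:
            if level == "primary":
                d["omitted_primary"].append(name)  # registered primary never published
        elif level == "primary" and published[name] == "secondary":
            d["demoted"].append(name)              # primary -> secondary
        elif level == "secondary" and published[name] == "primary":
            d["upgraded"].append(name)             # secondary -> primary
    for name, level in published.items():
        if name not in registered and level == "primary":
            d["added_primary"].append(name)        # unregistered primary added
    return d
```

Applied per trial, counts of each category across the cohort yield the percentages reported above.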
Clinical obesity | 2017
J. Rankin; A. Ross; J. Baker; M. O'Brien; C. Scheckel; Matt Vassar
Selective outcome reporting is a form of bias resulting from discrepancies between the outcomes presented in a trial's registration and those in the published report. We investigated this bias in obesity clinical trials. A PubMed search was conducted to identify randomized controlled trials (RCTs) published in four obesity journals from 2013 to 2015. Primary, secondary and tertiary outcomes were recorded for each trial and compared to the pre-specified outcomes in each trial's registration. Of the 392 identified articles, 142 were included in the final analysis; 22 (15%) RCTs demonstrated major outcome discrepancies between registration and publication: no primary outcomes were demoted to a secondary or tertiary outcome; 14 (36.84%) primary outcomes were omitted; 14 (36.84%) primary outcomes were added; 5 (13.16%) secondary outcomes were upgraded to primary outcomes; and the timing of assessment for a primary outcome changed 5 (13.16%) times. Of the 63 prospectively registered studies, 53 had no discrepancies. A total of 76 studies (29.80%) were unregistered or did not have an associated registration number. Our results suggest that selective outcome reporting may be a concern in obesity clinical trials. Because selective outcome reporting can distort clinical findings and limit the outcomes available to systematic reviews, we encourage trialists and journal editors to work towards solutions to mitigate this issue.
Gastroenterology | 2018
Chase Meyer; Matt Vassar
Dear Editors: The trial by Pinto-Sanchez et al published in a recent issue of Gastroenterology reports a relationship between the probiotic Bifidobacterium longum NCC3001 and a decrease in depression scores in patients with irritable bowel syndrome (IBS). IBS is considered a gut–brain disorder, and many patients with IBS therefore experience depression and anxiety symptoms. The results of this trial could thus be far reaching for people with IBS and lead to increased use of probiotics containing Bifidobacterium longum NCC3001. Using the results from Pinto-Sanchez et al, we calculated the fragility index (FI), a measure of the robustness of trial results, to further evaluate the strength of their findings. This method involves changing the status of patients without an event (in the group with the smaller number of events) to an event, one at a time, until the P-value exceeds .05. Low fragility values indicate fragile trial results. Our analysis resulted in an FI of 0 for the intention-to-treat analysis and an FI of 2 for the per protocol analysis. The result of the intention-to-treat analysis indicates that the outcome is very fragile, because statistical significance is nullified when applying the FI, which is based on Fisher's exact test. The FI of the per protocol analysis suggests that if 2 patients in the control group had had a reduction of 2 points on the Hospital Anxiety and Depression Scale, the outcome would no longer be statistically significant.
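The fragility index procedure described in the letter can be sketched directly. The stdlib-only Python version below, including a self-contained two-sided Fisher's exact test, is an illustration of the method under the usual definitions, not the authors' code:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]
    (event / non-event counts in two arms): sum the probabilities of all
    tables no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_table(x):  # hypergeometric probability of top-left cell = x
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    """Flip non-events to events in the arm with fewer events until the
    result is no longer significant; the number of flips is the FI.
    Returns 0 when the result is non-significant to begin with."""
    fi = 0
    while fisher_exact_p(e1, n1 - e1, e2, n2 - e2) < alpha:
        if e1 <= e2 and e1 < n1:
            e1 += 1
        elif e2 < n2:
            e2 += 1
        else:
            break  # no more patients available to flip
        fi += 1
    return fi
```

An FI of 0, as in the intention-to-treat analysis above, means the result is already non-significant under Fisher's exact test before any patient's status is changed.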
BMJ Evidence-Based Medicine | 2018
Matt Vassar; Michael Bibens; Cole Wayant
The Evidence-Based Medicine Manifesto1 outlines steps to develop more trustworthy evidence. These steps include reducing conflicts of interest and producing better, more usable clinical practice guidelines. Here, we argue that self-disclosure of industry payments by guideline panellists is inadequate and often inaccurate. The international community should come together to require open and transparent reporting of all industry payments made to physicians by drug and device companies. Such an initiative would accomplish many things, including better policies and verification of conflicts of interest for guideline panel members. In the USA, the Open Payments Program—established as part of the Affordable Care Act—catalogues payments made to physicians by pharmaceutical and device companies and classifies payments by type. General payments include consulting fees, honoraria, gifts, food and beverage, and travel; research payments include funds received for basic and applied research or product development; associated …
Rheumatology International | 2018
Samuel S. Jellison; Michael Bibens; Jake X. Checketts; Matt Vassar
To evaluate global public interest in osteoarthritis by assessing changes in Internet search popularity of the disease over a 10-year period. Google Trends was used to obtain search popularity scores for the word “osteoarthritis” (OA) between January 2004 and June 2018. We also analyzed changes in search volume relative to changes in search patterns for all health topics. Search interest in OA was high relative to all other health searches over the given timeframe. Overall, searches for OA steadily decreased between May 2004 and December 2012 and then steadily rose from January 2013 to April 2018. We also found consistent annual fluctuations over the pre-specified time range, with biannual peaks typically correlating with national and global awareness days. Biannual dips occurred with changes in seasonal patterns. Google searches for OA have steadily increased in recent years. Awareness initiatives, like World Arthritis Day, may be a reason for the public to search for information on OA. It may be helpful for physicians to search the Internet themselves for websites that provide accurate and high-quality information to recommend to their patients.
PeerJ | 2018
Chase Meyer; Kaleb Fuller; Jared Scott; Matt Vassar
Background Publication bias is the tendency of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of their findings. In this study, we investigated whether publication bias was present in gastroenterological research by evaluating abstracts presented at Americas Hepato-Pancreato-Biliary Congresses from 2011 to 2013. Methods We searched Google, Google Scholar, and PubMed to locate the published reports of research described in these abstracts. If a publication was not found, a second investigator searched to verify nonpublication. If abstract publication status remained undetermined, authors were contacted regarding reasons for nonpublication. For articles reaching publication, the P value, study design, time to publication, citation count, and journals in which the published report appeared were recorded. Results Our study found that of 569 abstracts presented, 297 (52.2%) reported a P value. Of these, 254 (85.5%) contained P values supporting statistical significance. Abstracts reporting a statistically significant outcome were twice as likely to reach publication as abstracts with no significant findings (OR 2.10, 95% CI [1.06–4.14]). Overall, 243 (42.7%) abstracts reached publication. The mean time to publication was 14 months, and the median was nine months. Conclusion In conclusion, we found evidence of publication bias in gastroenterological research. Abstracts with significant P values had a higher probability of reaching publication. More than half of the abstracts presented from 2011 to 2013 failed to reach publication. Readers should take these findings into consideration when reviewing the medical literature.
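The odds ratio and confidence interval reported here follow the standard 2×2 formulas. A minimal sketch with the usual Wald interval on the log scale (the counts in the test below are illustrative, not the study's data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% Wald CI for a 2x2 table:
    a = significant & published,     b = significant & unpublished,
    c = non-significant & published, d = non-significant & unpublished.
    Assumes all four cells are nonzero."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)
```

An interval that excludes 1, as in the study's OR 2.10 (95% CI 1.06–4.14), indicates a statistically detectable association between significance and publication.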
PLOS ONE | 2018
Chase Meyer; Aaron Bowers; Cole Wayant; Jake X. Checketts; Jared Scott; Sanjeev Musuvathy; Matt Vassar
Background Clinical practice guidelines contain recommendations to help physicians determine the most appropriate care for patients. These guidelines systematically combine scientific evidence and clinical judgment, culminating in recommendations intended to optimize patient care. The recommendations in clinical practice guidelines are supported by evidence that varies in quality. We aimed to survey the clinical practice guidelines created by the American College of Gastroenterology, report the level of evidence supporting their recommendations, and identify areas where the evidence base could be improved with additional research. Methods We extracted 1328 recommendations from 39 clinical practice guidelines published by the American College of Gastroenterology. Several of the guidelines used differing classifications of evidence for their recommendations, so to standardize our results we devised a uniform system for classifying evidence. Results A total of 39 clinical practice guidelines were surveyed, together accounting for 1328 recommendations. 693 (52.2%) of the recommendations were based on low-quality evidence or expert opinion. Among individual guidelines, 13/39 (33.3%) had no recommendations based on high-quality evidence. Conclusion Very few recommendations made by the American College of Gastroenterology are supported by high levels of evidence. More than half of all its recommendations are based on low-quality evidence or expert opinion.
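Standardizing heterogeneous grading schemes amounts to a crosswalk onto a common scale. The mapping below is a hypothetical illustration of such a uniform system; the labels and tiers are assumptions, not the authors' actual scheme:

```python
# Hypothetical crosswalk from assorted guideline grading labels
# onto a uniform three-level evidence scale.
UNIFORM = {
    "high": "high", "a": "high",
    "moderate": "moderate", "b": "moderate",
    "low": "low", "very low": "low", "c": "low", "expert opinion": "low",
}

def normalize(grade):
    """Map a raw evidence grade onto the uniform scale,
    falling back to 'unclassified' for unrecognized labels."""
    return UNIFORM.get(grade.strip().lower(), "unclassified")
```

Once every recommendation carries a uniform grade, tallying the share based on low-quality evidence or expert opinion is a simple count.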
Journal of Clinical Gastroenterology and Hepatology | 2018
Chris Chapman; Benjamin M. Howard; Cole Wayant; Matt Vassar
Objective: In this study, we use the Fragility Index (FI) and Cochrane's Risk of Bias Tool 2.0 to analyze the randomized controlled trials underpinning the American Gastroenterological Association's clinical practice guideline on bowel preparation before colonoscopy. Design: All citations within the guideline were screened against specific criteria. We extracted bowel preparation outcome data from the included studies and used an online calculator to determine the FI and the Fragility Quotient (FQ), the fragility index relative to study sample size. Risk of bias assessments were made using the Cochrane Risk of Bias Tool 2.0. Results: The median FI for the 30 included trials was 7.5 events (IQR 3–11.75). The median FQ was 3.5 per 100 patients. The risk of bias assessments classified 12 trials as low risk, 2 as some concerns, and 16 as high risk. Conclusion: RCTs in the bowel preparation guideline were found to contain moderate fragility and a relatively high risk of bias. Reporting fragility in RCTs will help appraisers of guidelines by indicating the robustness of the results, putting guideline writers in a better position to make recommendations. Likewise, preemptive evaluation of risk of bias will help identify key weaknesses underlying RCTs and add to their credibility in formulating recommendations.
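The Fragility Quotient is simply the FI scaled to the trial's sample size, conventionally expressed per 100 patients. A sketch with illustrative numbers (the per-trial pairs below are assumptions, not the guideline trials' actual data):

```python
from statistics import median

def fragility_quotient(fi, n):
    """FI relative to total sample size, expressed per 100 patients."""
    return 100 * fi / n

# hypothetical (FI, total sample size) pairs for three trials
trials = [(7, 200), (3, 150), (11, 400)]
fqs = [fragility_quotient(fi, n) for fi, n in trials]
```

Summarizing the per-trial FQs with a median, as the study does, keeps the measure robust to a few very large or very small trials.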