Adam B. Wilson
Rush University
Publications
Featured research published by Adam B. Wilson.
Medical Education | 2009
Adam B. Wilson; Christopher Ross; Michael Petty; James M. Williams; Laura E. Thorp
Objectives: One of the goals of medical education is to bridge the gap between basic science and clinical practice. Students acquire basic science knowledge during their pre‐clinical years, yet have limited opportunities to apply this knowledge clinically. This hands‐on laboratory exercise was designed to facilitate a review of anatomy in the context of select clinical procedures, highlighting the application of anatomical concepts in clinical practice.
Teaching and Learning in Medicine | 2011
Adam B. Wilson; Michael Petty; James M. Williams; Laura E. Thorp
Background: Reduction in contact hours has led to the use of more efficient teaching approaches in medical education, yet the efficacy of such approaches is often not fully investigated. Purpose: This work provides a detailed analysis of alternating group dissections with peer-teaching in Medical Anatomy (MA). Methods: MA I and II percentages of the alternating (ALT) and nonalternating (NALT) groups were compared, scores of ALT subgroups (A and B) were compared, and subgroup performance on practical exam questions was compared. Results: MA I and MA II percentages indicated no significant difference in median scores between ALT and NALT (P = 0.581 for MA I; P = 0.223 for MA II). Subgroup analysis and assessment of question types showed that student performance and ability to identify a structure were not dependent on dissection group assignment. Conclusion: Alternating dissections offered students more unscheduled time for independent learning activities, such as studying or shadowing preceptors, and reduced student-to-cadaver and student-to-faculty ratios by 50%. Alternating dissections with peer teaching were not detrimental to student performance.
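The abstract does not name the statistical test behind these median comparisons; a rank-based test such as the Mann-Whitney U is a common fit for comparing two independent groups' course percentages. A minimal Python sketch, with invented placeholder scores rather than study data:

```python
# Hypothetical median comparison between alternating (ALT) and
# nonalternating (NALT) dissection groups. Both the choice of test and
# the scores are illustrative assumptions, not taken from the study.
from scipy.stats import mannwhitneyu

alt_scores = [82.5, 88.0, 79.5, 91.0, 85.5, 77.0]   # invented course percentages
nalt_scores = [84.0, 86.5, 80.0, 89.5, 83.0, 78.5]

stat, p_value = mannwhitneyu(alt_scores, nalt_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")  # a large p suggests no detectable difference
```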
Teaching and Learning in Medicine | 2014
Adam B. Wilson; Gary R. Pike; Aloysius J. Humbert
Background: A battery of psychometric assessments has been conducted on script concordance tests (SCTs), which are purported to measure data interpretation, an essential component of clinical reasoning. Although the body of published SCT research is broad, best practice controversies and evidentiary gaps remain. Purposes: In this study, SCT data were used to test the psychometric properties of 6 scoring methods. In addition, this study explored whether SCT items clustered by difficulty and type were able to discriminate between medical training levels. Methods: SCT scores from a problem-solving SCT (SCT-PS; n = 522) and an emergency medicine SCT (SCT-EM; n = 1,040) were collected at a large institution of medicine. Item analyses were performed to optimize each dataset. Items were categorized into difficulty levels and organized into types. Correlational analyses, one-way multivariate analysis of variance (MANOVA), repeated measures analysis of variance (ANOVA), and one-way ANOVA were conducted to explore the study aims. Results: All 6 scoring methods differentiated between training levels. Longitudinal analysis of SCT-PS data showed that MS4s significantly (p < .001) outperformed their own scores as MS2s in all difficulty categories. Cross-sectional analysis of SCT-EM data showed significant differences (p < .001) between experienced EM physicians, EM residents, and MS4s at each level of difficulty. Items categorized by type were also able to detect training level disparities. Conclusions: Of the 6 scoring methods, 5-point scoring solutions generated more reliable measures of data interpretation than 3-point scoring methods. Data interpretation abilities were a function of experience at every level of item difficulty. Items categorized by type exhibited discriminatory power, providing modest evidence toward the construct validity of SCTs.
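For context on what an SCT "scoring method" is, the sketch below shows aggregate scoring, the conventional approach in the SCT literature: each response option earns partial credit proportional to how many expert panelists chose it, scaled so the modal answer earns full credit. The panel counts are invented, and the study's six methods are variants on this idea (e.g., 3-point vs. 5-point response scales).

```python
# Aggregate SCT scoring: credit for an option = (panelists choosing it) /
# (panelists choosing the modal option). Panel counts are hypothetical.

def sct_item_scores(panel_counts):
    """Map each Likert option to a credit between 0 and 1."""
    modal = max(panel_counts.values())
    return {option: count / modal for option, count in panel_counts.items()}

# e.g., 10 panelists rated one item on a 5-point scale (-2 .. +2)
panel = {-2: 0, -1: 1, 0: 2, 1: 6, 2: 1}
print(sct_item_scores(panel))  # the modal option (+1) earns 1.0; others earn partial credit
```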
Clinical Anatomy | 2018
Adam B. Wilson; Corinne H. Miller; Barbie A. Klein; Melissa A. Taylor; Michael Goodwin; Eve K. Boyle; Kirsten Brown; Chantal Hoppe; Michelle D. Lazarus
The debate regarding anatomy laboratory teaching approaches is ongoing and controversial. To date, the literature has yielded only speculative conclusions because of general methodological weaknesses and a lack of summative empirical evidence. Through a meta‐analysis, this study compared the effectiveness of instructional laboratory approaches used in anatomy education to objectively and more conclusively synthesize the existing literature. Studies published between January 1965 and December 2015 were identified through searches of five databases. Titles and abstracts of the retrieved records were screened using eligibility criteria to determine their appropriateness for study inclusion. Only numerical data were extracted for analysis. A summary effect size was estimated to determine the effects of laboratory pedagogies on learner performance, and perceptions data were compiled to provide additional context. Of the 3,035 records screened, 327 underwent full‐text review. Twenty‐seven studies, comprising a total of 7,731 participants, were included in the analysis. The meta‐analysis detected no effect (standardized mean difference = −0.03; 95% CI = −0.16 to 0.10; P = 0.62) on learner performance. Additionally, a moderator analysis detected no effects (P ≥ 0.16) for study design, learner population, intervention length, or specimen type. Across studies, student performance on knowledge examinations was equivalent regardless of exposure to either dissection or another laboratory instructional strategy. This was true of every comparison investigated (i.e., dissection vs. prosection, dissection vs. digital media, dissection vs. models/modeling, and dissection vs. hybrid). In the context of short‐term knowledge gains alone, dissection is no better, and no worse, than alternative instructional modalities. Clin. Anat. 31:122–133, 2018.
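The summary statistic reported here, the standardized mean difference, is computed per study before pooling. Below is a minimal sketch of one common formulation (Cohen's d with the Hedges small-sample correction and an approximate 95% CI); the exam statistics passed in are invented, not drawn from the 27 included studies.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' correction and 95% CI."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    g = d * (1 - 3 / (4 * (n1 + n2 - 2) - 1))             # small-sample correction
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))  # approximate sampling variance
    se = math.sqrt(var)
    return g, (g - 1.96 * se, g + 1.96 * se)

# e.g., dissection vs. prosection groups on a knowledge exam (hypothetical numbers)
g, ci = hedges_g(74.2, 9.1, 60, 74.6, 8.8, 58)
print(f"SMD = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```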
Medical Education | 2016
Adam B. Wilson; Melissa A. Taylor; Barbie A. Klein; Megan K. Sugrue; Elizabeth C. Whipple; James J. Brokaw
Over nearly two decades, a wealth of literature describing the various capabilities, uses and adaptations of virtual microscopy (VM) has been published. Many studies have investigated the effects and benefits of VM on student learning compared with optical microscopy (OM).
Journal of Graduate Medical Education | 2012
Dylan D. Cooper; Adam B. Wilson; Gretchen Huffman; Aloysius J. Humbert
BACKGROUND Simulation can enhance undergraduate medical education. However, the number of faculty facilitators needed for observation and debriefing can limit its use with medical students. The goal of this study was to compare the effectiveness of emergency medicine (EM) residents with that of EM faculty in facilitating postcase debriefings. METHODS The EM clerkship at Indiana University School of Medicine requires medical students to complete one 2-hour mannequin-based simulation session. Groups of 5 to 6 students participated in 3 different simulation cases immediately followed by debriefings. Debriefings were led by either an EM faculty volunteer or an EM resident volunteer. The Debriefing Assessment for Simulation in Healthcare (DASH) participant form was completed by students to evaluate each individual providing the debriefing. RESULTS In total, 273 DASH forms were completed (132 EM faculty evaluations and 141 EM resident evaluations) for 7 faculty members and 9 residents providing the debriefing sessions. Out of a possible 35, the mean total DASH score was 32.42 for faculty and 32.09 for residents. There were no statistically significant differences between faculty and resident scores overall (P = .36) or by case type (P = .11 for trauma, P = .19 for medical, P = .48 for pediatrics). CONCLUSIONS EM residents were perceived to be as effective as EM faculty in debriefing medical students in a mannequin-based simulation experience. The use of residents to observe and debrief students may allow additional simulations to be incorporated into undergraduate curricula and provide valuable teaching opportunities for residents.
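The abstract reports P values without naming the test; one plausible reading is a two-sample comparison of mean DASH totals, sketched below with Welch's t-test and invented ratings in place of the 273 completed forms.

```python
# Hypothetical faculty vs. resident comparison of total DASH scores (max 35).
# Both the test choice and the data are illustrative assumptions.
from scipy.stats import ttest_ind

faculty_dash = [33, 31, 34, 32, 30, 35, 33, 32]
resident_dash = [32, 33, 30, 34, 31, 32, 33, 31]

t_stat, p_value = ttest_ind(faculty_dash, resident_dash, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # a large p suggests comparable debriefing quality
```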
Journal of Surgical Education | 2015
Adam B. Wilson; Laura Torbeck; Gary L. Dunnington
OBJECTIVE The release of general surgery residency program rankings by Doximity and U.S. News & World Report accentuates the need to define and establish measurable standards of program quality. This study evaluated the extent to which program rankings based solely on peer nominations correlated with familiar program outcomes measures. DESIGN Publicly available data were collected for all 254 general surgery residency programs. To generate a rudimentary outcomes-based program ranking, surgery programs were rank-ordered according to an average percentile rank that was calculated using board pass rates and the prevalence of alumni publications. A Kendall τ-b rank correlation computed the association between program rankings based on reputation alone and those derived from outcomes measures, to assess whether reputation is a reasonable surrogate for globally judging program quality. RESULTS For the 218 programs with complete data eligible for analysis, the mean board pass rate was 72% with a standard deviation of 14%. A total of 60 programs were placed in the 75th percentile or above for the number of publications authored by program alumni. The correlational analysis reported a significant correlation of 0.428, indicating only a moderate association between programs ranked by outcomes measures and those ranked according to reputation. Seventeen programs that were ranked in the top 30 according to reputation were also ranked in the top 30 based on outcomes measures. CONCLUSIONS This study suggests that reputation alone does not fully capture a representative snapshot of a program's quality. Rather, the use of multiple quantifiable indicators and attributes unique to programs ought to be given more consideration when assigning ranks to denote program quality. It is advised that the interpretation and subsequent use of program rankings be met with caution until further studies can rigorously demonstrate best practices for awarding program standings.
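The two analytic steps described, averaging percentile ranks into an outcomes-based ranking and correlating it with the reputation ranking via Kendall's τ-b, can be sketched as follows. The five programs and all numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import rankdata, kendalltau

board_pass = np.array([0.85, 0.70, 0.62, 0.91, 0.75])  # hypothetical board pass rates
alumni_pubs = np.array([40, 12, 5, 55, 20])            # hypothetical publication counts

def percentile_ranks(x):
    return rankdata(x) / len(x)  # fraction of programs at or below each value

# Step 1: outcomes-based ranking from the average percentile rank
outcomes_score = (percentile_ranks(board_pass) + percentile_ranks(alumni_pubs)) / 2
outcomes_rank = rankdata(-outcomes_score)              # rank 1 = best outcomes

# Step 2: Kendall tau-b against a reputation-based ranking
reputation_rank = np.array([2, 3, 5, 1, 4])            # hypothetical peer-nomination ranks
tau, p = kendalltau(outcomes_rank, reputation_rank)    # SciPy computes the tau-b variant
print(f"tau-b = {tau:.3f}, p = {p:.3f}")
```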
Medical Education Online | 2016
Clarence D. Kreiter; Adam B. Wilson; Aloysius J. Humbert; Patricia Ann Wade
Background When ratings of student performance within the clerkship consist of a variable number of ratings per clinical teacher (rater), an important measurement question arises regarding how to combine such ratings to accurately summarize performance. As previous G studies have not estimated the independent influence of occasion and rater facets in observational ratings within the clinic, this study was designed to provide estimates of these two sources of error. Method During 2 years of an emergency medicine clerkship at a large midwestern university, 592 students were evaluated an average of 15.9 times. Ratings were performed at the end of clinical shifts, and students often received multiple ratings from the same rater. A completely nested G study model (occasion:rater:person) was used to analyze the sampled rating data. Results The variance component (VC) related to occasion was small relative to the VC associated with rater. The D study clearly demonstrates that having a preceptor rate a student on multiple occasions does not substantially enhance the reliability of a clerkship performance summary score. Conclusions Although further research is needed, it is clear that case-specific factors do not explain the low correlation between ratings and that having one or two raters repeatedly rate a student on different occasions/cases is unlikely to yield a reliable mean score. This research suggests that it may be more efficient to have a preceptor rate a student just once. However, when multiple ratings from a single preceptor are available for a student, it is recommended that a mean of the preceptor's ratings be used to calculate the student's overall mean performance score.
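The D-study logic can be made concrete with the generalizability coefficient for this nested (occasion:rater:person) design: G = σ²_person / (σ²_person + σ²_rater:person / n_raters + σ²_occasion:rater:person / (n_raters × n_occasions)). The sketch below uses invented variance components, chosen only to illustrate the paper's point that extra occasions per rater buy far less reliability than extra raters.

```python
# D-study projection for a nested occasion:rater:person design.
# Variance components are hypothetical, not the study's estimates.

def g_coefficient(var_person, var_rater_p, var_occ_rp, n_raters, n_occ):
    error = var_rater_p / n_raters + var_occ_rp / (n_raters * n_occ)
    return var_person / (var_person + error)

var_p, var_r_p, var_o_rp = 0.20, 0.60, 0.10

for n_r, n_o in [(1, 1), (1, 4), (4, 1), (8, 1)]:
    g = g_coefficient(var_p, var_r_p, var_o_rp, n_r, n_o)
    print(f"raters={n_r}, occasions/rater={n_o}: G = {g:.2f}")
# More raters raises G sharply; more occasions per rater barely moves it.
```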
Anatomical Sciences Education | 2018
Adam B. Wilson; Kirsten Brown; Jonathan Misch; Corinne H. Miller; Barbie A. Klein; Melissa A. Taylor; Michael Goodwin; Eve K. Boyle; Chantal Hoppe; Michelle D. Lazarus
While prior meta‐analyses in anatomy education have explored the effects of laboratory pedagogies and histology media on learner performance, the effects of student‐centered learning (SCL) and computer‐aided instruction (CAI) have not been broadly evaluated. This research sought to answer the question, “How effective are student‐centered pedagogies and CAI at increasing student knowledge gains in anatomy compared to traditional didactic approaches?” Relevant studies published within the past 51 years were identified through searches of five databases. Predetermined eligibility criteria were applied to the screening of titles and abstracts to discern their appropriateness for study inclusion. A summary effect size was estimated to determine the effects of SCL and CAI on anatomy performance outcomes. A moderator analysis of study features was also performed. Of the 3,035 records screened, 327 underwent full‐text review. Seven studies, which comprised 1,564 participants, were included in the SCL analysis. An additional 19 studies analyzed the effects of CAI in the context of 2,570 participants. Upon comparing SCL to traditional instruction, a small positive effect on learner performance was detected (standardized mean difference [SMD] = 0.24; CI = 0.07 to 0.42; P = 0.006). Likewise, students with CAI exposure moderately outscored those with limited or no access to CAI (SMD = 0.59; CI = 0.20 to 0.98; P = 0.003). Further analysis of CAI studies identified effects (P ≤ 0.001) for learner population, publication period, interventional approach, and intervention frequency. Overall, learners exposed to SCL and supplemental CAI outperformed their more classically‐trained peers as evidenced by increases in short‐term knowledge gains. Anat Sci Educ.
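Once per-study SMDs exist, they are pooled into summary effects like those quoted above. Below is a minimal sketch of fixed-effect inverse-variance pooling, the core step of any such synthesis (a random-effects model, which the study may have used, adds a between-study variance term); all effect sizes and variances here are invented.

```python
import math

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect with an approximate 95% CI."""
    weights = [1 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, (est - 1.96 * se, est + 1.96 * se)

smds = [0.31, 0.12, 0.45, 0.20, -0.05]       # hypothetical per-study SMDs
variances = [0.04, 0.02, 0.09, 0.05, 0.03]   # hypothetical sampling variances

est, ci = pooled_effect(smds, variances)
print(f"summary SMD = {est:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```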
Anatomical Sciences Education | 2018
Adam B. Wilson; J. Bradley Barger; Patricia Perez; William S. Brooks
Continuing education (CE) is an essential element in the life‐long learning of health care providers and educators. Despite the importance of the anatomical sciences in the training and practice of clinicians, no studies have examined the need for, or state of, anatomy‐related CE nationally. This study assessed the current landscape of CE in the anatomical sciences to contextualize preferences for CE, identify factors that influence the perceived need for CE, and examine the association between supply and demand. Surveys were distributed to educators in the anatomical sciences, practicing physical therapists (PTs), and anatomy training programs across the United States. Twenty‐five percent (9 of 36) of training programs surveyed offered CE, certificates, or summer series programs related to anatomy. The majority of PTs (92%) and anatomy educators (81%) felt they had a potential or actual need for anatomy‐related CE, with the most popular formats being online videos/learning modules and intensive, hands‐on workshops. The most commonly perceived barriers to participating in CE for both groups were program location, cost, and duration, while educators also perceived time of year as a significant factor. Logistic regression analyses revealed that none of the investigated factors influenced the need or desire of PTs to engage in anatomy‐related CE (P ≥ 0.124), while teaching experience and the highest level of learner taught significantly influenced the perceived need among anatomy educators (P < 0.001). Overall, quantitative and qualitative analyses revealed a robust need for CE that strategically integrates anatomy with areas of clinical practice and education. Anat Sci Educ 11: 225–235.
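As a sketch of the kind of logistic regression reported here, the snippet below models a binary "perceives a need for CE" outcome against years of teaching experience. The predictor, the synthetic data, and the use of statsmodels are all illustrative assumptions rather than the study's actual survey coding.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic survey: the probability of perceiving a need for CE is made to
# decline with years of teaching experience (an invented relationship).
rng = np.random.default_rng(0)
years_teaching = rng.integers(1, 30, size=120).astype(float)
p_need = 1 / (1 + np.exp(0.15 * (years_teaching - 15)))
needs_ce = (rng.random(120) < p_need).astype(int)   # 1 = perceives a need for CE

X = sm.add_constant(years_teaching)                 # intercept + predictor
fit = sm.Logit(needs_ce, X).fit(disp=0)
print(fit.params)                 # log-odds coefficients
print(np.exp(fit.params[1]))      # odds ratio per additional year of teaching
```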