Publications


Featured research published by Aloysius J. Humbert.


Medical Teacher | 2011

Assessment of clinical reasoning: A Script Concordance test designed for pre-clinical medical students

Aloysius J. Humbert; Mary T. Johnson; Edward J. Miech; Fred Friedberg; Janice A. Grackin; Peggy A. Seidman

Background: The Script Concordance test (SCT) measures clinical reasoning in the context of uncertainty by comparing the responses of examinees and expert clinicians. It uses the level of agreement with a panel of experts to assign credit for the examinees' answers. Aim: This study describes the development and validation of an SCT for pre-clinical medical students. Methods: Faculty from two US medical schools developed SCT items in the domains of anatomy, biochemistry, physiology, and histology. Scoring procedures utilized data from a panel of 30 expert physicians. Validation focused on internal reliability and the ability of the SCT to distinguish between different cohorts. Results: The SCT was administered to an aggregate of 411 second-year and 70 fourth-year students from both schools. Internal consistency for the 75 test items was satisfactory (Cronbach's alpha = 0.73). The SCT successfully differentiated second- from fourth-year students and both student groups from the expert panel in a one-way analysis of variance (F(2,508) = 120.4; p < 0.0001). Mean scores for students from the two schools were not significantly different (p = 0.20). Conclusion: This SCT successfully differentiated pre-clinical medical students from fourth-year medical students, and both cohorts of medical students from expert clinicians, across different institutions and geographic areas. The SCT shows promise as an easy-to-administer measure of “problem-solving” performance in competency evaluation even in the beginning years of medical education.
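
For readers unfamiliar with how agreement with an expert panel translates into item credit, the sketch below illustrates the aggregate scoring approach commonly described for SCTs: an examinee's response earns partial credit in proportion to how many panelists chose it, relative to the panel's modal response. The abstract does not spell out the exact procedure used in this study, so the function names, the 5-point scale, and the example data are illustrative assumptions only.

```python
from collections import Counter

def score_sct_item(examinee_response: int, panel_responses: list[int]) -> float:
    """Aggregate scoring for one SCT item: the examinee's credit equals the
    number of panel experts who chose the same response, divided by the count
    of the panel's modal (most frequently chosen) response."""
    counts = Counter(panel_responses)
    return counts.get(examinee_response, 0) / max(counts.values())

def score_sct(examinee_responses: list[int], panel: list[list[int]]) -> float:
    """Total score is the sum of per-item credits; panel[i] holds the expert
    panel's responses for item i on the same Likert scale."""
    return sum(score_sct_item(r, p) for r, p in zip(examinee_responses, panel))

# Hypothetical 3-item test scored against a 5-expert panel on a 5-point
# scale (-2 .. +2); the real item bank and panel data are not shown here.
panel = [[-1, -1, 0, -1, 1], [2, 2, 1, 2, 2], [0, 0, 0, -1, 1]]
print(score_sct([-1, 1, 0], panel))  # 1.0 + 0.25 + 1.0 = 2.25
```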


Academic Emergency Medicine | 2011

Assessing Clinical Reasoning Skills in Scenarios of Uncertainty: Convergent Validity for a Script Concordance Test in an Emergency Medicine Clerkship and Residency

Aloysius J. Humbert; Bart R. Besinger; Edward J. Miech

OBJECTIVES The Script Concordance Test (SCT) is a new method of assessing clinical reasoning in the face of uncertainty. An SCT item consists of a short clinical vignette followed by an additional piece of information and asks how this new information affects the learner's decision regarding a possible diagnosis, investigational study, or therapy. Scoring is based on the item responses of a panel of experts in the field. This study attempts to provide additional validity evidence in the realm of emergency medicine (EM). METHODS This observational study examined the performance of medical students, EM residents, and expert emergency physicians (EPs) on an SCT in the area of general EM (SCT-EM) at one of the largest medical schools in the United States. The 59-item SCT-EM was developed for a fourth-year required clerkship in EM. Results on the SCT-EM were compared across different levels of clinical experience and against performance on other measures to evaluate convergent validity. RESULTS The SCT-EM was given to 314 fourth-year medical students (MS4s), 40 EM residents, and 13 EPs during the study period. Mean differences among the three groups of test takers were statistically significant (p < 0.0001). The range of scores for the MS4s was 42% to 77% and followed a normal distribution. Among the residents, performance on the SCT-EM and the EM in-training examination were significantly correlated (r = 0.69, p < 0.001); among the MS4s who later matched into EM residency programs, performance on the SCT-EM and the United States Medical Licensing Examination (USMLE) Step 2-Clinical Knowledge (Step 2-CK) exam was also significantly correlated (r = 0.56, p < 0.001). CONCLUSIONS The SCT-EM shows promise as an assessment that can measure clinical reasoning skills in the face of uncertainty. Future research will compare performance on the SCT to other measures of clinical reasoning abilities.
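
The convergent-validity analysis above rests on simple correlations between SCT-EM scores and other performance measures. The snippet below shows how such a Pearson correlation would be computed; the paired scores are simulated for illustration only and do not reproduce the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated paired scores for illustration only: SCT-EM percentage scores and
# in-training examination scores for the same (hypothetical) residents.
sct_em = rng.normal(65, 8, size=40)
in_training = 0.7 * sct_em + rng.normal(20, 6, size=40)

r, p = stats.pearsonr(sct_em, in_training)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```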


Teaching and Learning in Medicine | 2014

Analyzing Script Concordance Test Scoring Methods and Items by Difficulty and Type

Adam B. Wilson; Gary R. Pike; Aloysius J. Humbert

Background: A battery of psychometric assessments has been conducted on script concordance tests (SCTs), which are purported to measure data interpretation, an essential component of clinical reasoning. Although the body of published SCT research is broad, best-practice controversies and evidentiary gaps remain. Purposes: In this study, SCT data were used to test the psychometric properties of 6 scoring methods. In addition, this study explored whether SCT items clustered by difficulty and type were able to discriminate between medical training levels. Methods: SCT scores from a problem-solving SCT (SCT-PS; n = 522) and an emergency medicine SCT (SCT-EM; n = 1,040) were collected at a large institution of medicine. Item analyses were performed to optimize each dataset. Items were categorized into difficulty levels and organized into types. Correlational analyses, one-way multivariate analysis of variance (MANOVA), repeated measures analysis of variance (ANOVA), and one-way ANOVA were conducted to explore the study aims. Results: All 6 scoring methods differentiated between training levels. Longitudinal analysis of the SCT-PS data showed that MS4s significantly (p < .001) outperformed their own scores as MS2s in all difficulty categories. Cross-sectional analysis of the SCT-EM data showed significant differences (p < .001) between experienced EM physicians, EM residents, and MS4s at each level of difficulty. Items categorized by type were also able to detect training-level disparities. Conclusions: Of the 6 scoring methods, 5-point scoring solutions generated more reliable measures of data interpretation than 3-point scoring methods. Data interpretation abilities were a function of experience at every level of item difficulty. Items categorized by type exhibited discriminatory power, providing modest evidence toward the construct validity of SCTs.
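
The scoring-method comparison hinges in part on whether the 5-point response scale is retained or collapsed before aggregate scoring. The sketch below contrasts the two approaches on a single hypothetical item; the six specific scoring methods evaluated in the study are not described in the abstract, so this is a generic illustration and all names and data are assumptions.

```python
from collections import Counter

def collapse_to_three(response: int) -> int:
    """Map a 5-point SCT response (-2..+2) onto a 3-point scale (-1, 0, +1)
    by merging the two 'decrease' and the two 'increase' categories."""
    return (response > 0) - (response < 0)

def aggregate_score(examinee: int, panel: list[int]) -> float:
    """Credit = panel votes for the examinee's response / votes for the modal response."""
    counts = Counter(panel)
    return counts.get(examinee, 0) / max(counts.values())

# Hypothetical item: the same response scored on the 5-point and 3-point scales.
panel_item = [-2, -1, -1, 0, 2]
examinee = -2
print(aggregate_score(examinee, panel_item))                        # 5-point: 1/2 = 0.5
print(aggregate_score(collapse_to_three(examinee),
                      [collapse_to_three(r) for r in panel_item]))  # 3-point: 3/3 = 1.0
```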


Academic Medicine | 2014

Measuring gains in the clinical reasoning of medical students: longitudinal results from a school-wide script concordance test.

Aloysius J. Humbert; Edward J. Miech

Purpose Medical students develop clinical reasoning skills throughout their training. The Script Concordance Test (SCT) is a standardized instrument that assesses clinical reasoning; test takers with more clinical experience consistently outperform those with less experience. SCT studies to date have been cross-sectional, with no studies examining same-student longitudinal performance gains. Method This four-year observational study took place between 2008 and 2011 at the Indiana University School of Medicine. Students in two different cohorts took the same SCT as second-year medical students and then again as fourth-year medical students. The authors matched and analyzed same-student data from the two SCT administrations for the classes of 2011 and 2012. They used descriptive statistics, correlation coefficients, and paired t tests. Results Matched data were available for 260 students in the class of 2011 (of 303, 86%) and 264 students in the class of 2012 (of 289, 91%). The mean same-student gain for the class of 2011 was 8.6 (t[259] = 15.9; P < .0001) and for the class of 2012 was 11.3 (t[263] = 21.4; P < .0001). Each cohort gained more than one standard deviation. Conclusions Medical students made statistically significant gains in their performance on an SCT over a two-year period. These findings demonstrate same-student gains in clinical reasoning over time as measured by the SCT and suggest that the SCT as a standardized instrument can help to evaluate growth in clinical reasoning skills.
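
The longitudinal analysis pairs each student's MS2 score with the same student's MS4 score and tests the mean gain with a paired t test. The sketch below shows that computation on simulated data; the numbers only loosely echo the reported gains and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated matched scores for illustration: each student's SCT score as an
# MS2 and again two years later as an MS4 (percentage scale).
ms2 = rng.normal(58, 7, size=260)
ms4 = ms2 + rng.normal(9, 6, size=260)      # build in an average gain

t, p = stats.ttest_rel(ms4, ms2)            # paired t test on the same students
gain = (ms4 - ms2).mean()
print(f"mean gain = {gain:.1f}, t({len(ms2) - 1}) = {t:.1f}, p = {p:.2g}, "
      f"gain in SD units = {gain / ms2.std(ddof=1):.2f}")
```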


Journal of Graduate Medical Education | 2012

Medical students' perception of residents as teachers: comparing effectiveness of residents and faculty during simulation debriefings.

Dylan D. Cooper; Adam B. Wilson; Gretchen Huffman; Aloysius J. Humbert

BACKGROUND Simulation can enhance undergraduate medical education. However, the number of faculty facilitators needed for observation and debriefing can limit its use with medical students. The goal of this study was to compare the effectiveness of emergency medicine (EM) residents with that of EM faculty in facilitating postcase debriefings. METHODS The EM clerkship at Indiana University School of Medicine requires medical students to complete one 2-hour mannequin-based simulation session. Groups of 5 to 6 students participated in 3 different simulation cases immediately followed by debriefings. Debriefings were led by either an EM faculty volunteer or an EM resident volunteer. The Debriefing Assessment for Simulation in Healthcare (DASH) participant form was completed by students to evaluate each individual providing the debriefing. RESULTS In total, 273 DASH forms were completed (132 EM faculty evaluations and 141 EM resident evaluations) for 7 faculty members and 9 residents providing the debriefing sessions. The mean total faculty DASH score was 32.42 and the mean total resident DASH score was 32.09 out of a possible 35. There were no statistically significant differences between faculty and resident scores overall (P = .36) or by case type (P = .11 for trauma, .19 for medical, and .48 for pediatrics). CONCLUSIONS EM residents were perceived to be as effective as EM faculty in debriefing medical students in a mannequin-based simulation experience. The use of residents to observe and debrief students may allow additional simulations to be incorporated into undergraduate curricula and provide valuable teaching opportunities for residents.


Medical Education Online | 2016

Examining rater and occasion influences in observational assessments obtained from within the clinical environment

Clarence D. Kreiter; Adam B. Wilson; Aloysius J. Humbert; Patricia Ann Wade

Background When ratings of student performance within the clerkship consist of a variable number of ratings per clinical teacher (rater), an important measurement question arises regarding how to combine such ratings to accurately summarize performance. As previous G studies have not estimated the independent influence of occasion and rater facets in observational ratings within the clinic, this study was designed to provide estimates of these two sources of error. Method During 2 years of an emergency medicine clerkship at a large midwestern university, 592 students were evaluated an average of 15.9 times each. Ratings were performed at the end of clinical shifts, and students often received multiple ratings from the same rater. A completely nested G study model (occasion:rater:person) was used to analyze sampled rating data. Results The variance component (VC) related to occasion was small relative to the VC associated with rater. The D study clearly demonstrates that having a preceptor rate a student on multiple occasions does not substantially enhance the reliability of a clerkship performance summary score. Conclusions Although further research is needed, it is clear that case-specific factors do not explain the low correlation between ratings and that having one or two raters repeatedly rate a student on different occasions/cases is unlikely to yield a reliable mean score. This research suggests that it may be more efficient to have a preceptor rate a student just once. However, when multiple ratings from a single preceptor are available for a student, it is recommended that a mean of the preceptor's ratings be used to calculate the student's overall mean performance score.
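
The practical recommendation in the conclusion (average a preceptor's multiple ratings before combining across preceptors) can be expressed as a two-stage mean. The sketch below contrasts it with a naive pooled mean that would over-weight a rater who happened to evaluate the student many times; the rating scale and data are hypothetical.

```python
from collections import defaultdict
from statistics import mean

def summary_score(ratings: list[tuple[str, float]]) -> float:
    """Average each preceptor's (rater's) ratings first, then average across
    preceptors, so a rater who evaluated the student many times does not
    dominate the summary score."""
    by_rater: dict[str, list[float]] = defaultdict(list)
    for rater, score in ratings:
        by_rater[rater].append(score)
    return mean(mean(scores) for scores in by_rater.values())

# Hypothetical end-of-shift ratings as (rater id, rating on a 1-9 scale).
ratings = [("A", 7), ("A", 8), ("A", 7), ("B", 6), ("C", 9)]
print(summary_score(ratings))           # (7.33 + 6 + 9) / 3 = 7.44
print(mean(s for _, s in ratings))      # naive pooled mean = 7.4 over-weights rater A
```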


Western Journal of Emergency Medicine | 2015

Introducing medical students into the emergency department: The impact upon patient satisfaction

Christopher Kiefer; Joseph Turner; Shelley M. Layman; Stephen M. Davis; Bart R. Besinger; Aloysius J. Humbert

Introduction Performance on patient satisfaction surveys is becoming increasingly important for practicing emergency physicians, and the introduction of learners into a new clinical environment may affect such scores. This study aimed to quantify the impact of introducing fourth-year medical students on patient satisfaction in two university-affiliated community emergency departments (EDs). Methods Two community-based EDs in the Indiana University Health (IUH) system began hosting medical students in March 2011 and October 2013, respectively. We analyzed responses from patient satisfaction surveys at each site for seven months before and after the introduction of students. Two components of the survey, “Would you recommend this ED to your friends and family?” and “How would you rate this facility overall?”, were selected for analysis, as they represent the primary questions reviewed by the Centers for Medicare and Medicaid Services (CMS) as part of value-based purchasing. We evaluated the percentage of positive responses for adult, pediatric, and all patients combined. Results Analysis did not reveal a statistically significant difference in the percentage of positive responses to the “would you recommend” question at either clinical site for the adult subgroup, the pediatric subgroup, or the all-patient group. At one of the sites, there was a significant improvement in the percentage of positive responses to the “overall rating” question following the introduction of medical students when all patients were analyzed (60.3% to 68.2%, p=0.038). However, there was no statistically significant difference in the “overall rating” when the pediatric or adult subgroups were analyzed at this site, and no significant difference was observed in any group at the second site. Conclusion The introduction of medical students in two community-based EDs was not associated with a statistically significant difference in overall patient satisfaction, but was associated with a significant positive effect on the overall rating of the ED at one of the two clinical sites studied. Further study is needed to evaluate the effect of medical student learners upon patient satisfaction in settings outside of a single health system.
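
The before/after comparison of positive survey responses amounts to a two-proportion test. The sketch below runs a chi-square test on hypothetical counts chosen to roughly mirror the 60.3% to 68.2% shift reported for one site; the actual survey counts are not given in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical counts for illustration: "top-box" (positive) responses to the
# overall-rating question before and after students were introduced.
before_positive, before_total = 181, 300
after_positive, after_total = 205, 300

table = np.array([
    [before_positive, before_total - before_positive],
    [after_positive, after_total - after_positive],
])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"{before_positive / before_total:.1%} vs {after_positive / after_total:.1%}, "
      f"chi2 = {chi2:.2f}, p = {p:.3f}")
```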


Western Journal of Emergency Medicine | 2018

Effect of an Educational Intervention on Medical Student Scripting and Patient Satisfaction: A Randomized Trial

Katie Pettit; Joseph Turner; Katherine A. Pollard; Bryce B. Buente; Aloysius J. Humbert; Anthony J. Perkins; Cherri Hobgood; Jeffrey A. Kline

Introduction Effective communication between clinicians and patients has been shown to improve patient outcomes and reduce malpractice liability, and is now being tied to reimbursement. Use of a communication strategy known as “scripting” has been suggested to improve patient satisfaction in multiple hospital settings, but the frequency with which medical students use this strategy, and whether it affects patient perception of medical student care, is unknown. Our objective was to measure the use of targeted communication skills after an educational intervention and to further clarify the relationship between communication element usage and patient satisfaction. Methods Medical students were block randomized into the control or intervention group. Those in the intervention group received refresher training in scripted communication. Those in the control group received no instruction or other intervention related to communication. Use of six explicit communication behaviors was recorded by trained study observers: 1) acknowledging the patient by name, 2) introducing themselves as medical students, 3) explaining their role in the patient’s care, 4) explaining the care plan, 5) providing an estimated duration of time to be spent in the emergency department (ED), and 6) notifying the patient that another provider would also be seeing them. Patients then completed a survey regarding their satisfaction with the medical student encounter. Results We observed 474 medical student-patient encounters in the ED (231 in the control group and 243 in the intervention group). We were unable to detect a statistically significant difference in communication element use between the intervention and control groups. One of the communication elements, explaining steps in the care plan, was positively associated with patient perception of the medical student’s overall communication skills. Otherwise, there was no statistically significant association between element use and patient satisfaction. Conclusion We were unable to demonstrate any improvement in student use of communication elements or in patient satisfaction after refresher training in scripted communication. Furthermore, there was little variation in patient satisfaction based on the use of scripted communication elements. Effective communication with patients in the ED is complex and requires further investigation into how to teach this skill set.
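
Block randomization, the allocation method named in the Methods, keeps the control and intervention arms balanced as students enroll over time. The sketch below shows a generic version of the technique; the trial's actual block size, seed, and assignment procedure are not reported, so every parameter here is an assumption.

```python
import random

def block_randomize(participants: list[str], block_size: int = 4,
                    seed: int = 42) -> dict[str, str]:
    """Assign participants to 'control' or 'intervention' in shuffled blocks so
    the two arms stay balanced throughout enrollment. Block size, seed, and
    labels are illustrative; the trial's actual procedure is not described."""
    rng = random.Random(seed)
    assignments: dict[str, str] = {}
    for start in range(0, len(participants), block_size):
        block = participants[start:start + block_size]
        labels = ["control", "intervention"] * (block_size // 2)
        rng.shuffle(labels)
        for person, label in zip(block, labels):
            assignments[person] = label
    return assignments

# Usage on hypothetical student identifiers.
students = [f"student_{i:03d}" for i in range(12)]
print(block_randomize(students))
```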


MedEdPORTAL | 2018

Preparing Emergency Medicine Residents as Teachers: Clinical Teaching Scenarios

Aloysius J. Humbert; Katie Pettit; Joseph Turner; Josh Mugele; Kevin Rodgers

Introduction Preparing residents to supervise medical students in the clinical setting is important for providing high-quality education to the next generation of physicians and is mandated by the Liaison Committee on Medical Education as well as the Accreditation Council for Graduate Medical Education. This requirement is met in variable ways depending on the specialty, school, and setting where teaching takes place. This educational intervention was designed to let residents practice techniques for supervising medical students through simulated encounters in the emergency department and to increase their comfort with providing feedback to students. Methods The four role-playing scenarios described here were developed for second-year residents in emergency medicine at the Indiana University School of Medicine. Residents participated in the scenarios prior to serving as supervisors for fourth-year medical students rotating on the emergency medicine clerkship. For each scenario, a faculty member observed the simulated interaction between the resident and the simulated student. The residents were surveyed before and after participating in the scenarios to determine the effectiveness of the instruction. Results Residents reported that they were more comfortable supervising students, evaluating their performance, and giving feedback after participating in the scenarios. Discussion Participation in these clinical teaching scenarios was effective at making residents more comfortable with their role as supervisors of fourth-year students taking an emergency medicine clerkship. These scenarios may be useful as part of a resident-as-teacher curriculum for emergency medicine residents.


Academic Emergency Medicine | 2012

Assessing Diagnostic Reasoning: A Consensus Statement Summarizing Theory, Practice, and Future Needs

Jonathan S. Ilgen; Aloysius J. Humbert; Gloria J. Kuhn; Matthew Hansen; Geoffrey R. Norman; Kevin W. Eva; Bernard Charlin; Jonathan Sherbino
