Network

Latest external collaboration at the country level.

Hotspot

Research topics in which Aaron W. Calhoun is active.

Publication

Featured research published by Aaron W. Calhoun.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2009

Assessment of Communication Skills and Self-Appraisal in the Simulated Environment: Feasibility of Multirater Feedback with Gap Analysis

Aaron W. Calhoun; Elizabeth A. Rider; Elaine C. Meyer; Giulia Lamiani; Robert D. Truog

Introduction: Multirater assessment is a powerful means of measuring communication skills. The use of gap analysis to assess self-appraisal is a strength of this technique. On the basis of the Kalamazoo Consensus Statement framework and 360-degree assessment models, we developed a multirater instrument with gap analysis, with the goals of examining both communication skills and situational self-appraisal, and assessing the feasibility of the combined approach. Methods: The multirater communication skills instrument was used to assess Pediatric and Neonatal Intensive Care fellows after participation in seven simulated family meetings. Instrument reliability was determined using Cronbach's alpha and factor analysis. Correlations between rater groups were examined with Spearman's rank correlation coefficient. Gap analyses and rater perceptions of the instruments were analyzed using descriptive statistics. Results: Seven pediatric intensive care unit and neonatal intensive care fellows were each assessed by 11 to 18 raters (108 total assessments). Correlations were identified between disciplinary groups within each encounter. Among the 7 fellows, 30 communication strengths or areas needing improvement and 24 significant gaps indicating self under-appraisal were identified, 9 (38%) of which overlapped. The instrument was logistically feasible and well received. Conclusions: Our multirater communication skills instrument with gap analysis proved useful in identifying areas of strength and areas needing improvement, and in highlighting areas of self over- and under-appraisal that require focused feedback. The use of multirater assessment with gap analysis, in a simulated and “safe” environment, may assist in the delivery of feedback to trainees.
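As background for the gap-analysis methodology referenced in this and several later abstracts, the sketch below illustrates the general idea: a learner's self-ratings on Likert-type communication items are compared with the mean ratings of external raters, large gaps are flagged as self over- or under-appraisal, and agreement between rater groups is summarized with a Spearman correlation. The item names, the 1-to-5 scale, the 1-point gap threshold, and all ratings are illustrative assumptions, not the authors' instrument or data.

```python
# Minimal sketch of multirater gap analysis (illustrative only; not the authors' instrument).
# Assumed conventions: 1-5 Likert items; an absolute gap of >= 1 point counts as "significant".
import numpy as np
from scipy.stats import spearmanr

items = ["builds relationship", "opens discussion", "gathers information",
         "understands perspective", "shares information", "reaches agreement",
         "provides closure"]

# Hypothetical ratings for one simulated family meeting.
self_ratings = np.array([4, 4, 3, 4, 5, 4, 3])            # learner's self-appraisal
faculty_ratings = np.array([[3, 4, 4, 3, 4, 4, 4],        # one row per faculty rater
                            [3, 3, 4, 2, 4, 3, 4]])
peer_ratings = np.array([[4, 4, 4, 3, 4, 4, 5],           # one row per peer observer
                         [3, 4, 5, 3, 5, 4, 4]])

external_mean = np.vstack([faculty_ratings, peer_ratings]).mean(axis=0)

# Gap analysis: positive gap = self over-appraisal, negative gap = self under-appraisal.
gaps = self_ratings - external_mean
for item, gap in zip(items, gaps):
    if gap >= 1.0:
        label = "self over-appraisal"
    elif gap <= -1.0:
        label = "self under-appraisal"
    else:
        label = "appraisal consistent with raters"
    print(f"{item:24s} gap = {gap:+.2f}  ({label})")

# Agreement between the two external rater groups across item-level means.
rho, p = spearmanr(faculty_ratings.mean(axis=0), peer_ratings.mean(axis=0))
print(f"Spearman correlation between faculty and peer item means: rho = {rho:.2f}, p = {p:.3f}")
```

In the study above, item-level gaps of this kind were aggregated across seven fellows and 11 to 18 raters per fellow; the sketch shows only a single hypothetical encounter.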


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2011

Integrated In-situ Simulation Using Redirected Faculty Educational Time to Minimize Costs: A Feasibility Study

Aaron W. Calhoun; Megan C. Boone; Eleanor B. Peterson; Kimberly A. Boland; Vicki L. Montgomery

Introduction: Simulation is an effective teaching tool, but many hospitals do not possess the space or finances to support traditional simulation centers. Our objective is to describe the feasibility of an in situ simulation program model that uses minimal permanent space and “redirected” cost-neutral faculty educational time to address these issues. Methods: Two pediatric simulators and audiovisual equipment were purchased. Course faculty were derived from a group of physicians and nurses with a percentage work assignment apportioned to education. A portion of this was subsequently redirected toward simulation. After 2 years of operation, faculty were surveyed regarding time devoted to the program. Program growth and quality statistics were examined descriptively. Results: The program supported 786 learner encounters in 166 sessions over 2 years. Simulation hours per month increased over sixfold during that period (P < 0.001). Program initiation cost was $128,920.89, with subsequent yearly costs of $11,695. Mean program ratings ranged between 4.5/5 for Crisis Resource Management and 4.4/5 for communication skills training. Resident (2.6 h/y increase, P < 0.001) and nursing (2.2 h/y increase, P < 0.001) simulation hours increased significantly. Faculty involvement averaged between 3% and 32% of total work hours. Conclusion: This report demonstrates the feasibility of implementing an in situ simulation program using minimal permanent institutional space and cost-neutral redirected faculty time. This type of programmatic structure is conducive to short- and medium-term growth, is well received by participants, and allows for substantial cost savings. Future work will be needed to determine what growth limitations are inherent in this staffing and structural model.


The Permanente Journal | 2014

Using Simulation to Address Hierarchy-Related Errors in Medical Practice

Aaron W. Calhoun; Megan C. Boone; Melissa Porter; Karen H. Miller

OBJECTIVE Hierarchy, the unavoidable authority gradients that exist within and between clinical disciplines, can lead to significant patient harm in high-risk situations if not mitigated. High-fidelity simulation is a powerful means of addressing this issue in a reproducible manner, but participant psychological safety must be assured. Our institution experienced a hierarchy-related medication error that we subsequently addressed using simulation. The purpose of this article is to discuss the implementation and outcome of these simulations. METHODS Script and simulation flowcharts were developed to replicate the case. Each session included the use of faculty misdirection to precipitate the error. Care was taken to assure psychological safety via carefully conducted briefing and debriefing periods. Case outcomes were assessed using the validated Team Performance During Simulated Crises Instrument. Gap analysis was used to quantify team self-insight. Session content was analyzed via video review. RESULTS Five sessions were conducted (3 in the pediatric intensive care unit and 2 in the Pediatric Emergency Department). The team was unsuccessful at addressing the error in 4 (80%) of 5 cases. Trends toward lower communication scores (3.4/5 vs 2.3/5), as well as poor team self-assessment of communicative ability, were noted in unsuccessful sessions. Learners had a positive impression of the case. CONCLUSIONS Simulation is a useful means to replicate hierarchy error in an educational environment. This methodology was viewed positively by learner teams, suggesting that psychological safety was maintained. Teams that did not address the error successfully may have impaired self-assessment ability in the communication skill domain.


Patient Education and Counseling | 2014

The reliability of a modified Kalamazoo Consensus Statement Checklist for assessing the communication skills of multidisciplinary clinicians in the simulated environment.

Eleanor B. Peterson; Aaron W. Calhoun; Elizabeth A. Rider

OBJECTIVE With increased recognition of the importance of sound communication skills and communication skills education, reliable assessment tools are essential. This study reports on the psychometric properties of an assessment tool based on the Kalamazoo Consensus Statement Essential Elements Communication Checklist. METHODS The Gap-Kalamazoo Communication Skills Assessment Form (GKCSAF), a modified version of an existing communication skills assessment tool, the Kalamazoo Essential Elements Communication Checklist-Adapted, was used to assess learners in a multidisciplinary, simulation-based communication skills educational program using multiple raters. A total of 118 simulated conversations were available for analysis. Internal consistency and inter-rater reliability were determined by calculating a Cronbach's alpha score and intra-class correlation coefficients (ICC), respectively. RESULTS The GKCSAF demonstrated high internal consistency, with a Cronbach's alpha score of 0.844 (faculty raters) and 0.880 (peer observer raters), and high inter-rater reliability, with an ICC of 0.830 (faculty raters) and 0.89 (peer observer raters). CONCLUSION The Gap-Kalamazoo Communication Skills Assessment Form is a reliable method of assessing the communication skills of multidisciplinary learners using multi-rater methods within the learning environment. PRACTICE IMPLICATIONS The Gap-Kalamazoo Communication Skills Assessment Form can be used by educational programs that wish to implement a reliable assessment and feedback system for a variety of learners.
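To make the reliability statistics reported here concrete, the sketch below computes Cronbach's alpha and a Shrout-Fleiss ICC(2,1) (two-way random effects, absolute agreement, single rater) from a small ratings matrix using the standard textbook formulas. It is a generic illustration, not the authors' analysis code; the ratings matrix is invented, and a real analysis would use the full set of raters and encounters.

```python
# Generic reliability sketch: Cronbach's alpha and ICC(2,1) for a ratings matrix.
# Rows = simulated conversations (subjects), columns = raters. All data are invented.
import numpy as np

ratings = np.array([
    [4.0, 4.5, 4.0],
    [3.5, 3.0, 3.5],
    [4.5, 5.0, 4.5],
    [2.5, 3.0, 2.5],
    [4.0, 4.0, 3.5],
    [3.0, 3.5, 3.0],
])

def cronbach_alpha(x):
    """Internal consistency, treating each column (rater) as an 'item'."""
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def icc_2_1(x):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)        # between-subjects mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)        # between-raters mean square
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))              # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")
print(f"ICC(2,1):         {icc_2_1(ratings):.3f}")
```

In practice, statistical packages (for example, the pingouin library's intraclass correlation routines) are commonly used instead of hand-rolled formulas; the explicit version above simply shows what the reported numbers measure.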


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2013

Case and commentary: using simulation to address hierarchy issues during medical crises.

Aaron W. Calhoun; Megan C. Boone; Karen H. Miller; May C. M. Pian-Smith

Summary Statement Medicine is hierarchical, and both positive and negative effects of this can be exposed and magnified during a crisis. Ideally, hierarchies function in an orderly manner, but when an inappropriate directive is given, the results can be disastrous unless team members are empowered to challenge the order. This article describes a case that uses misdirection and the possibility of simulated “death” to facilitate learning among experienced clinicians about the potentially deadly effects of an unchallenged, inappropriate order. The design of this case, however, raises additional questions regarding both ethics and psychological safety. The ethical concerns that surround the use of misdirection in simulation and the psychological ramifications of incorporating patient death in this context are explored in the commentary. We conclude with a discussion of debriefing strategies that can be used to promote psychological safety during potentially emotionally charged simulations and possible directions for future research.


Patient Education and Counseling | 2010

Multi-rater feedback with gap analysis: an innovative means to assess communication skill and self-insight.

Aaron W. Calhoun; Elizabeth A. Rider; Eleanor B. Peterson; Elaine C. Meyer

OBJECTIVE Multi-rater assessment with gap analysis is a powerful method for assessing communication skills and self-insight, and enhancing self-reflection. We demonstrate the use of this methodology. METHODS The Program for the Approach to Complex Encounters (PACE) is an interdisciplinary simulation-based communication skills program. Encounters are assessed using an expanded Kalamazoo Consensus Statement Essential Elements Checklist adapted for multi-rater feedback and gap analysis. Data from a representative conversation were analyzed. RESULTS Likert and forced-choice data with gap analysis are used to assess performance. Participants were strong in Demonstrating Empathy and Providing Closure, and needed to improve Relationship Building, Gathering Information, and understanding the Patient's/Family's Perspective. Participants under-appraised their abilities in Relationship Building, Providing Closure, and Demonstrating Empathy, as well as their overall performance. The conversion of these results into verbal feedback is discussed. CONCLUSIONS We describe an evaluation methodology using multi-rater assessment with gap analysis to assess communication skills and self-insight. This methodology enables faculty to identify undervalued skills and perceptual blind spots, provide comprehensive, data-driven feedback, and encourage reflection. PRACTICE IMPLICATIONS Implementation of graphical feedback forms coupled with one-on-one discussion using the above methodology has the potential to enhance trainee self-awareness and reflection, improving the impact of educational programs.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2015

Deception and Simulation Education: Issues, Concepts, and Commentary

Aaron W. Calhoun; May C. M. Pian-Smith; Robert D. Truog; David M. Gaba; Elaine C. Meyer

Summary Statement The use of deceptive methodology in simulation education is an emerging ethical controversy. At the 2014 International Meeting on Simulation in Healthcare, arguments for and against its use were debated by simulation experts. What emerged from this discussion was an apparent disconnect between current practice and existing empiric research on this subject. At present, no framework exists to guide the simulation community’s exploration of this issue of deception. After reviewing the relevant psychological literature, we propose a framework delineating discrete elements and important relationships, which enables a comprehensive view of the factors germane to simulations that use deception. We further comment on key pedagogical and psychological issues in the context of this framework and define an agenda for further research. Educators are encouraged to use this framework when determining whether, when, and how deception might be used and, if used, how it can be ethically justified and carefully implemented.


Journal of Graduate Medical Education | 2011

A Multirater Instrument for the Assessment of Simulated Pediatric Crises

Aaron W. Calhoun; Megan C. Boone; Karen H. Miller; Rebecca L Taulbee; Vicki L. Montgomery; Kimberly A. Boland

BACKGROUND Few validated instruments exist to measure pediatric code team skills. The goal of this study was to develop an instrument for the assessment of resuscitation competency and self-appraisal using multirater and gap analysis methodologies. METHODS Multirater assessment with gap analysis is a robust methodology that enables the measurement of self-appraisal as well as competency, offering faculty the ability to provide enhanced feedback. The Team Performance during Simulated Crises Instrument (TPDSCI) was grounded in the Accreditation Council for Graduate Medical Education competencies. The instrument contains 5 competencies, each assessed by a series of descriptive rubrics. It was piloted during a series of simulation-based interdisciplinary pediatric crisis resource management education sessions. Course faculty assessed participants, who also did self-assessments. Internal consistency and interrater reliability were analyzed using Cronbach α and intraclass correlation (ICC) statistics. Gap analysis results were examined descriptively. RESULTS Cronbach α for the instrument was between 0.69 and 0.72. The overall ICC was 0.82. ICC values for the medical knowledge, clinical skills, communication skills, and systems-based practice domains were between 0.72 and 0.87. The ICC for the professionalism domain was 0.22. Further examination of the professionalism competency revealed a positive skew; 43 simulated sessions (98%) had significant gaps for at least one of the competencies, 38 sessions (86%) had gaps indicating self-overappraisal, and 15 sessions (34%) had gaps indicating self-underappraisal. CONCLUSIONS The TPDSCI possesses good measures of internal consistency and interrater reliability with respect to medical knowledge, clinical skills, communication skills, systems-based practice, and overall competence in the context of simulated interdisciplinary pediatric medical crises. Professionalism remains difficult to assess. These results provide an encouraging first step toward instrument validation. Gap analysis reveals disparities between faculty and self-assessments that indicate inadequate participant self-reflection. Identifying self-overappraisal can facilitate focused interventions.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2016

When the Mannequin Dies, Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

Shreepada Tripathy; Karen H. Miller; John W. Berkenbosch; Tara McKinley; Kimberly A. Boland; Seth Brown; Aaron W. Calhoun



American Journal of Critical Care | 2014

Using Simulation to Investigate the Impact of Hours Worked on Task Performance in an Intensive Care Unit

Aaron W. Calhoun; Megan C. Boone; Anna K. Dauer; Deborah R. Campbell; Vicki L. Montgomery


Collaboration

Aaron W. Calhoun's top co-authors and their affiliations.

Elaine C. Meyer (Boston Children's Hospital)
Megan C. Boone (University of Louisville)
Robert D. Truog (Boston Children's Hospital)
Hichem Frigui (University of Louisville)
Mark Adler (Northwestern University)