Publications


Featured research published by Emil R. Petrusa.


Medical Teacher | 2005

Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review

S. Barry Issenberg; William C. McGaghie; Emil R. Petrusa; David Lee Gordon; Ross J. Scalese

Review dates: 1969 to 2003, 34 years.

Background and context: Simulations are now in widespread use in medical education and medical personnel evaluation. Outcomes research on the use and effectiveness of simulation technology in medical education is scattered, inconsistent and varies widely in methodological rigor and substantive focus.

Objectives: Review and synthesize existing evidence in educational science that addresses the question, ‘What are the features and uses of high-fidelity medical simulations that lead to most effective learning?’.

Search strategy: The search covered five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science and Timelit) and employed 91 single search terms and concepts and their Boolean combinations. Hand searching, Internet searches and attention to the ‘grey literature’ were also used. The aim was to perform the most thorough literature search possible of peer-reviewed publications and reports in the unpublished literature that have been judged for academic quality.

Inclusion and exclusion criteria: Four screening criteria were used to reduce the initial pool of 670 journal articles to a focused set of 109 studies: (a) elimination of review articles in favor of empirical studies; (b) use of a simulator as an educational assessment or intervention with learner outcomes measured quantitatively; (c) comparative research, either experimental or quasi-experimental; and (d) research that involves simulation as an educational intervention.

Data extraction: Data were extracted systematically from the 109 eligible journal articles by independent coders. Each coder used a standardized data extraction protocol.

Data synthesis: Qualitative data synthesis and tabular presentation of research methods and outcomes were used. Heterogeneity of research designs, educational interventions, outcome measures and timeframes precluded data synthesis using meta-analysis.

Headline results: Coding accuracy for features of the journal articles is high. The quality of the extant published research is generally weak. The weight of the best available evidence suggests that high-fidelity medical simulations facilitate learning under the right conditions. These include the following:

- providing feedback: 51 (47%) journal articles reported that educational feedback is the most important feature of simulation-based medical education;
- repetitive practice: 43 (39%) journal articles identified repetitive practice as a key feature involving the use of high-fidelity simulations in medical education;
- curriculum integration: 27 (25%) journal articles cited integration of simulation-based exercises into the standard medical school or postgraduate educational curriculum as an essential feature of their effective use;
- range of difficulty level: 15 (14%) journal articles addressed the range of task difficulty level as an important variable in simulation-based medical education;
- multiple learning strategies: 11 (10%) journal articles identified the adaptability of high-fidelity simulations to multiple learning strategies as an important factor in their educational effectiveness;
- capture of clinical variation: 11 (10%) journal articles cited simulators that capture a wide variety of clinical conditions as more useful than those with a narrow range;
- controlled environment: 10 (9%) journal articles emphasized the importance of using high-fidelity simulations in a controlled environment where learners can make, detect and correct errors without adverse consequences;
- individualized learning: 10 (9%) journal articles highlighted the importance of having reproducible, standardized educational experiences where learners are active participants, not passive bystanders;
- defined outcomes: 7 (6%) journal articles cited the importance of having clearly stated goals with tangible outcome measures that will more likely lead to learners mastering skills;
- simulator validity: 4 (3%) journal articles provided evidence for the direct correlation of simulation validity with effective learning.

Conclusions: While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective, and simulation-based education complements medical education in patient care settings.
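All of the feature counts above are fractions of the same 109-article focused set. As a purely illustrative check, the short Python sketch below recomputes the reported percentages from the raw counts (the abstract rounds 4/109 down to 3%):

```python
# Illustrative only: recompute the reported feature percentages from the
# raw counts in the abstract, over the 109 eligible journal articles.
counts = {
    "providing feedback": 51,
    "repetitive practice": 43,
    "curriculum integration": 27,
    "range of difficulty level": 15,
    "multiple learning strategies": 11,
    "capture of clinical variation": 11,
    "controlled environment": 10,
    "individualized learning": 10,
    "defined outcomes": 7,
    "simulator validity": 4,  # abstract reports this as 3%; 4/109 is 3.7%
}
N = 109  # focused set of eligible journal articles
for feature, n in counts.items():
    print(f"{feature}: {n}/{N} = {n / N:.1%}")
```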


Medical Education | 2010

A critical review of simulation-based medical education research: 2003-2009.

William C. McGaghie; S. Barry Issenberg; Emil R. Petrusa; Ross J. Scalese

Objectives: This article reviews and critically evaluates historical and contemporary research on simulation-based medical education (SBME). It also presents and discusses 12 features and best practices of SBME that teachers should know in order to use medical simulation technology to maximum educational benefit.


Medical Education | 2006

Effect of practice on standardised learning outcomes in simulation-based medical education

William C. McGaghie; S. Barry Issenberg; Emil R. Petrusa; Ross J. Scalese

Objectives: This report synthesises a subset of 31 journal articles on high-fidelity simulation-based medical education, containing 32 research studies, drawn from a larger qualitative review published previously. These studies were selected because they present adequate data to allow for quantitative synthesis. We hypothesised an association between hours of practice in simulation-based medical education and standardised learning outcomes measured as weighted effect sizes.
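The abstract does not spell out how the weighted effect sizes were computed; a minimal fixed-effect sketch in Python, using inverse-variance weights and entirely hypothetical per-study values, illustrates the general idea:

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., standardized mean differences)
# and their sampling variances; these numbers are invented for illustration
# and are not the study's data.
d = np.array([0.42, 0.85, 0.31, 1.10])
var = np.array([0.04, 0.09, 0.02, 0.12])

w = 1.0 / var                        # inverse-variance weights
pooled = np.sum(w * d) / np.sum(w)   # weighted mean effect size
se = np.sqrt(1.0 / np.sum(w))        # standard error of the pooled estimate

print(f"weighted effect size = {pooled:.2f} (SE {se:.2f})")
```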


Journal of Surgical Education | 2014

Reliability, Validity, and Feasibility of the Zwisch Scale for the Assessment of Intraoperative Performance

Brian C. George; Ezra N. Teitelbaum; Shari L. Meyerson; Mary C. Schuller; Debra A. DaRosa; Emil R. Petrusa; Lucia C. Petito; Jonathan P. Fryer

PURPOSE: The existing methods for evaluating resident operative performance interrupt the workflow of the attending physician, are resource intensive, and are often completed well after the end of the procedure in question. These limitations lead to low faculty compliance and potentially significant recall bias. In this study, we deployed a smartphone-based system, the Procedural Autonomy and Supervision System, to facilitate assessment of resident performance according to the Zwisch scale with minimal workflow disruption. We aimed to demonstrate that this is a reliable, valid, and feasible method of measuring resident operative autonomy.

METHODS: Before implementation, general surgery residents and faculty underwent frame-of-reference training to the Zwisch scale. Immediately after any operation in which a resident participated, the system automatically sent a text message prompting the attending physician to rate the resident's level of operative autonomy according to the 4-level Zwisch scale. Eight of these procedures were videotaped and independently rated by 2 additional surgeons. The Zwisch ratings of the 3 raters were compared using an intraclass correlation coefficient. Videotaped procedures were also scored using 2 alternative operating room (OR) performance assessment instruments (Operative Performance Rating System and Ottawa Surgical Competency OR Evaluation), against which the item correlations were calculated.

RESULTS: Between December 2012 and June 2013, 27 faculty used the smartphone system to complete 1490 operative performance assessments on 31 residents. During this period, faculty completed evaluations for 92% of all operations performed with general surgery residents. The Zwisch scores were shown to correlate with postgraduate year (PGY) levels based on sequential pairwise chi-squared tests: PGY 1 vs PGY 2 (χ² = 106.9, df = 3, p < 0.001); PGY 2 vs PGY 3 (χ² = 22.2, df = 3, p < 0.001); and PGY 3 vs PGY 4 (χ² = 56.4, df = 3, p < 0.001). PGY 4 and PGY 5 scores were not significantly different (χ² = 4.5, df = 3, p = 0.21). For the 8 operations reviewed for interrater reliability, the intraclass correlation coefficient was 0.90 (95% CI: 0.72-0.98, p < 0.01). Correlation of Procedural Autonomy and Supervision System ratings with both Operative Performance Rating System items (each r > 0.90, all p < 0.01) and Ottawa Surgical Competency OR Evaluation items (each r > 0.86, all p < 0.01) was high.

CONCLUSIONS: The Zwisch scale can be used to make reliable and valid measurements of faculty guidance and resident autonomy. Our data also suggest that Zwisch ratings may be used to infer resident operative performance. Deployed on an automated smartphone-based system, it can be used to feasibly record evaluations for most operations performed by residents. This information can be used to counsel individual residents, modify programmatic curricula, and potentially inform national training guidelines.
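Each pairwise PGY comparison above has df = 3, consistent with a 2 x 4 contingency table (two PGY levels by four Zwisch levels). A minimal scipy sketch on invented counts shows the shape of such a test:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of ratings at each of the four Zwisch autonomy
# levels for two adjacent PGY cohorts; a 2 x 4 table yields df = 3.
# The counts are invented, not the study's data.
table = np.array([
    [60, 45, 20, 5],   # PGY 1 (invented)
    [20, 40, 50, 30],  # PGY 2 (invented)
])

chi2, p, df, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {df}, p = {p:.2g}")
```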


Archive | 2002

Clinical Performance Assessments

Emil R. Petrusa

Evaluation of clinical performance for physicians in training is central to assuring qualified practitioners. The time-honored method of oral examination after a single patient suffers from several measurement shortcomings. Too little sampling, low reliability, partial validity and potential for evaluator bias undermine the oral examination. Since 1975, standardized clinical examinations have been developed to provide broader sampling, more objective evaluation criteria and more efficient administration. Research supports reliability of portrayal and data capture by standardized patients as well as the predictability of future trainee performance. Methods for setting pass marks for cases and the whole test have evolved from those for written examinations. Pass marks from all methods continue to fail an unacceptably high number of learners without additional adjustments. Studies show a positive impact of these examinations on learner study behaviors and on the number of direct observations of learners’ patient encounters. Standardized clinical performance examinations are sensitive and specific for benefits of a structured clinical curriculum. Improvements must include better alignment of a test’s purpose, measurement framework and scoring. Data capture methods for clinical performance at advanced levels need development. Checklists completed by standardized patients do not capture the organization or approach a learner takes in the encounter. Global ratings completed by faculty hold promise, but more work is needed. Future studies should investigate the validity of case- and test-wise pass marks. Finally, research on the development of expertise should guide the next generation of assessment tasks, encounters and scoring in standardized clinical examinations.


Medical Teacher | 2009

Assessing teamwork in medical education and practice: relating behavioural teamwork ratings and clinical performance.

Melanie C. Wright; Barbara Phillips-Bute; Emil R. Petrusa; Kathleen L. Griffin; Gene Hobbs; Jeffrey M. Taekman

Background: Problems with communication and team coordination are frequently linked to adverse events in medicine. However, there is little experimental evidence to support a relationship between observer ratings of teamwork skills and objective measures of clinical performance.

Aim: Our main objective was to test the hypothesis that observer ratings of team skill will correlate with objective measures of clinical performance.

Methods: Nine teams of medical students were videotaped performing two types of teamwork tasks: (1) low-fidelity classroom-based patient assessment and (2) high-fidelity simulated emergent care. Observers used a behaviourally anchored rating scale to rate each individual on skills representative of assertiveness, decision-making, situation assessment, leadership, and communication. A checklist-based measure was used to assess clinical team performance.

Results: Moderate to high inter-observer correlations and moderate correlations between cases established the validity of a behaviourally anchored team skill rating tool for simulated emergent care. There was moderate to high correlation between observer ratings of team skill and checklist-based measures of team performance for the simulated emergent care cases (r = 0.65, p = 0.06 and r = 0.97, p < 0.0001).

Conclusions: These results provide prospective evidence of a positive relationship between observer ratings of team skills and clinical team performance in a simulated dynamic health care task.
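The reported r values are ordinary Pearson correlations between paired team-level scores. A small sketch (scipy; the nine team scores are invented, not the study's data) shows the computation:

```python
from scipy.stats import pearsonr

# Invented paired scores for nine teams: mean observer rating of team
# skill vs. checklist-based clinical performance (fraction of items met).
team_skill = [3.1, 3.8, 2.5, 4.2, 3.0, 3.6, 2.8, 4.0, 3.4]
performance = [0.62, 0.75, 0.48, 0.88, 0.60, 0.71, 0.55, 0.82, 0.66]

r, p = pearsonr(team_skill, performance)
print(f"r = {r:.2f}, p = {p:.4f}")
```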


Academic Medicine | 2010

Medical Students' Experiences of Moral Distress: Development of a Web-Based Survey

Catherine Wiggleton; Emil R. Petrusa; Kim Loomis; John L. Tarpley; Margaret J. Tarpley; Mary Lou O'Gorman; Bonnie M. Miller

Purpose: To develop an instrument for measuring moral distress in medical students, to measure the prevalence of moral distress in a cohort of students, and to identify the situations most likely to cause it. Moral distress, defined as the negative feelings that arise when one knows the morally correct thing to do but cannot act because of constraints or hierarchies, has been documented in nurses but has not been measured in medical students.

Method: The authors constructed a survey consisting of 55 items describing potentially distressing situations. Respondents rated the frequency of these situations and the intensity of distress that they caused. The survey was administered to 106 fourth-year medical students during a three-week period in 2007; the response rate was 60%.

Results: Each of the situations was experienced by at least some of the 64 respondents, and each created some degree of moral distress. On average, students witnessed almost one-half of the situations at least once, and more than one-third of the situations caused mild-to-moderate distress. The survey measured individual distress (Cronbach alpha = 0.95), which varied among the students. Whereas women witnessed potentially distressing situations significantly more frequently than did men (P = .04), men tended to become more distressed by each event witnessed (P = .057).

Conclusions: Medical students frequently experience moral distress. Our survey can be used to measure aspects of the learning environment as well as individual responses to the environment. The variation found among student responses warrants further investigation to determine whether students at either extreme of moral distress are at risk of burnout or erosion of professionalism.
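The reported Cronbach alpha of 0.95 is the standard internal-consistency statistic. A self-contained numpy sketch (the score matrix is invented and far smaller than the real 64-respondent, 55-item survey) shows how it is computed:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented 5-respondent x 4-item distress-intensity ratings.
scores = np.array([
    [1, 2, 1, 2],
    [3, 3, 2, 3],
    [2, 2, 2, 1],
    [4, 3, 4, 4],
    [0, 1, 0, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```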


Critical Care Medicine | 2007

Debriefing in the intensive care unit: a feedback tool to facilitate bedside teaching.

Alison S. Clay; Loretta G. Que; Emil R. Petrusa; Mark Sebastian; Joseph A. Govert

Objective: To develop an assessment tool for bedside teaching in the intensive care unit (ICU) that provides feedback to residents about their performance compared with clinical best practices.

Method: We reviewed the literature on the assessment of resident clinical performance in critical care medicine and summarized the strengths and weaknesses of these assessments. Using debriefing after simulation as a model, we created five checklists for different situations encountered in the ICU, areas that encompass different Accreditation Council for Graduate Medical Education core competencies. Checklists were designed to incorporate clinical best practices as defined by the literature and institutional practices as defined by the critical care professionals working in our ICUs. Checklists were used at the beginning of the rotation to explicitly define our expectations to residents, and were used during the rotation, after a clinical encounter, by the resident and supervising physician to review the resident’s performance and to provide feedback on the accuracy of the resident’s self-assessment.

Results: Five “best practice” checklists were developed: central catheter placement, consultation, family discussions, resuscitation of hemorrhagic shock, and resuscitation of septic shock. On average, residents completed 2.6 checklists per rotation. Use of the checklist cards was fairly evenly distributed, with the exception of resuscitation of hemorrhagic shock, which occurs less frequently than the other encounters in the medical ICU. Residents who used more debriefing cards received higher fellow and faculty evaluations. Residents felt that the debriefing cards were a useful learning tool in the ICU.

Conclusions: Debriefing sessions using checklists can be successfully implemented in ICU rotations. Checklists can be used to assess both resident performance and consistency of practice with respect to published standards of care in critical care medicine.


Medical Teacher | 2005

Stroke training of prehospital providers: an example of simulation-enhanced blended learning and evaluation

David Lee Gordon; S. Barry Issenberg; Michael S. Gordon; David M. LaCombe; William C. McGaghie; Emil R. Petrusa

Since appropriate treatment of patients in the first few hours of ischemic stroke may decrease the risk of long-term disability, prehospital providers should recognize, assess, manage and communicate about stroke patients in an effective and time-efficient manner. This requires the instruction and evaluation of a wide range of competencies, including clinical skills, patient investigation and management, and communication skills. The authors developed and assessed the effectiveness of a simulation-enhanced stroke course that incorporates several different learning strategies to evaluate competencies in the care of acute stroke patients. The one-day, interactive, emergency stroke course features a simulation-enhanced, blended-learning approach that includes didactic lectures, tabletop exercises, focused-examination training, and small-group sessions led by paramedic instructors as standardized patients portraying five key neurological syndromes. From January to October 2000, 345 learners were assessed using multiple-choice tests, as was a randomly selected group of 73 learners assessed with skills checklists during two pre-course and two post-course simulated patient encounters. Among all learners there was a significant gain in knowledge (pre: 53.9% ± 13.9 and post: 85.4% ± 8.5; p < 0.001), and the 73 learners showed a significant improvement in their clinical and communication skills (p < 0.0001 for all). By using a simulation-enhanced, blended-learning approach, prehospital providers were successfully trained and evaluated in a wide range of competencies that should lead to improved recognition and management of acute stroke patients.
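The abstract gives pre/post means and SDs but does not name the significance test; a paired t-test is one plausible analysis for the knowledge gain, sketched here on invented scores:

```python
import numpy as np
from scipy.stats import ttest_rel

# Invented pre- and post-course multiple-choice scores (%) for six
# learners; the real study assessed 345 learners.
pre = np.array([48.0, 55.0, 60.0, 42.0, 58.0, 51.0])
post = np.array([80.0, 88.0, 91.0, 78.0, 90.0, 84.0])

t, p = ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.1f} points, t = {t:.2f}, p = {p:.2g}")
```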


Academic Medicine | 2011

Tracking development of clinical reasoning ability across five medical schools using a progress test.

Reed G. Williams; Debra L. Klamen; Christopher B. White; Emil R. Petrusa; Ruth-Marie E. Fincher; Carol F. Whitfield; John H. Shatzer; Teresita McCarty; Bonnie M. Miller

Purpose: Little is known about the acquisition of clinical reasoning skills in medical school, the development of clinical reasoning over the medical curriculum as a whole, and the impact of various curricular methodologies on these skills. This study investigated (1) whether there are differences in clinical reasoning skills between learners at different years of medical school, and (2) whether there are differences in performance between students at schools with various curricular methodologies.

Method: Students (n = 2,394) who had completed zero to three years of medical school at five U.S. medical schools participated in a cross-sectional study in 2008. Students took the same diagnostic pattern recognition (DPR) and clinical data interpretation (CDI) tests. Percent-correct scores were used to determine performance differences. Data from all schools and students at all levels were aggregated for further analysis.

Results: Student performance increased substantially as a result of each year of training. Gains in DPR and CDI performance during the third year of medical school were not as great as in previous years across the five schools. CDI performance and performance gains were lower than DPR performance and gains. Performance gains attributable to training at each of the participating medical schools were more similar than different.

Conclusions: Years of training accounted for most of the variation in DPR and CDI performance. As a rule, students at higher training levels performed better on both tests, though the expected larger gains during the third year of medical school did not materialize.
