
Publication


Featured research published by Benjamin Zendejas.


JAMA | 2011

Technology-Enhanced Simulation for Health Professions Education: A Systematic Review and Meta-analysis

David A. Cook; Rose Hatala; Ryan Brydges; Benjamin Zendejas; Jason H. Szostek; Amy T. Wang; Patricia J. Erwin; Stanley J. Hamstra

CONTEXT Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. OBJECTIVE To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. DATA SOURCES Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. STUDY SELECTION Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. DATA EXTRACTION Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. DATA SYNTHESIS From a pool of 10,903 articles, we identified 609 eligible studies enrolling 35,226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. CONCLUSION In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
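The synthesis above pools standardized effect sizes with a random-effects model and reports I² as its heterogeneity measure. As a rough illustration of what those quantities mean, here is a minimal Python sketch of DerSimonian-Laird random-effects pooling; the per-study effect sizes and variances are hypothetical placeholders, not values taken from the review.

import numpy as np

# Hypothetical per-study standardized effect sizes and their variances
# (illustrative placeholders, not data extracted from the review).
es = np.array([1.0, 1.4, 0.8, 1.3])
var = np.array([0.04, 0.09, 0.02, 0.05])

# Fixed-effect (inverse-variance) weights and Cochran's Q statistic
w = 1.0 / var
fixed = np.sum(w * es) / np.sum(w)
Q = np.sum(w * (es - fixed) ** 2)
df = len(es) - 1

# I²: percentage of total variability attributable to between-study heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100.0

# DerSimonian-Laird estimate of the between-study variance tau²
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects pooled estimate with a 95% confidence interval
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * es) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(round(pooled, 2), round(pooled - 1.96 * se, 2), round(pooled + 1.96 * se, 2), round(I2, 1))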


Medical Teacher | 2013

Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis

David A. Cook; Stanley J. Hamstra; Ryan Brydges; Benjamin Zendejas; Jason H. Szostek; Amy T. Wang; Patricia J. Erwin; Rose Hatala

Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I² > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and −0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.


Annals of Surgery | 2011

Simulation-based mastery learning improves patient outcomes in laparoscopic inguinal hernia repair: A randomized controlled trial

Benjamin Zendejas; David A. Cook; Juliane Bingener; Marianne Huebner; William F. Dunn; Michael G. Sarr; David R. Farley

Objective: To evaluate a mastery learning, simulation-based curriculum for laparoscopic, totally extraperitoneal (TEP) inguinal hernia repair. Background: Clinically relevant benefits from improvements in operative performance, time, and errors after simulation-based training are not clearly established. Methods: After performing a baseline TEP in the OR, general surgery residents randomized to mastery learning (ML) or standard practice (SP) were reassessed during subsequent TEPs. The ML curriculum involved Web-based modules followed by training on a TEP simulator until expert performance was achieved. Operative time, performance, and patient outcomes adjusted for staff, resident participation, difficulty of repair, PGY-level, and patient comorbidities were compared between groups with mixed-effects ANOVA and generalized linear models. Results: Fifty residents (PGY1-5) performed 219 TEP repairs on 146 patients. Baseline operative time, performance, and demographics were similar between groups. To achieve mastery, ML-residents (n = 26) required a median of 16 (range 7–27) simulated repairs. After training, TEPs performed by ML-residents were faster than those by SP-residents, with time corrected for participation (mean ± SD, 34 ± 8 minutes vs. 48 ± 14 minutes; difference –13; 95% CI, –18 to –8; P < 0.001). Operative performance scores (GOALS, scale 6–30) were better for ML-residents (21.9 ± 2.8 vs. 18.3 ± 3.8; P = 0.001). Intraoperative complications (peritoneal tear, procedure conversion), postoperative complications (urinary retention, seroma), and need for overnight stay were less likely in the ML group (adjusted odds ratios 0.14, 0.04, and 0, respectively; all P < 0.05). Conclusions: A simulation-based ML curriculum decreased operative time, improved trainee performance, and decreased intra- and postoperative complications and overnight stays after laparoscopic TEP inguinal hernia repair. ClinicalTrials.gov Identifier: NCT01085500
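As a rough back-of-the-envelope illustration (the trial itself compared groups with mixed-effects ANOVA and adjusted for participation and other covariates), the reported operative times can be converted into an unadjusted standardized difference:

# Reported operative times (mean ± SD): ML 34 ± 8 min, SP 48 ± 14 min.
# Unadjusted Cohen's d using an average-variance pooled SD; illustrative only,
# since the published -13 minute difference and its CI come from adjusted models.
ml_mean, ml_sd = 34.0, 8.0
sp_mean, sp_sd = 48.0, 14.0
pooled_sd = ((ml_sd**2 + sp_sd**2) / 2) ** 0.5   # about 11.4 minutes
d = (ml_mean - sp_mean) / pooled_sd              # about -1.2, a large effect
print(round(pooled_sd, 1), round(d, 2))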


Academic Medicine | 2014

Reconsidering fidelity in simulation-based training.

Stanley J. Hamstra; Ryan Brydges; Rose Hatala; Benjamin Zendejas; David A. Cook

In simulation-based health professions education, the concept of simulator fidelity is usually understood as the degree to which a simulator looks, feels, and acts like a human patient. Although this can be a useful guide in designing simulators, this definition emphasizes technological advances and physical resemblance over principles of educational effectiveness. In fact, several empirical studies have shown that the degree of fidelity appears to be independent of educational effectiveness. The authors confronted these issues while conducting a recent systematic review of simulation-based health professions education, and in this Perspective they use their experience in conducting that review to examine key concepts and assumptions surrounding the topic of fidelity in simulation. Several concepts typically associated with fidelity are more useful in explaining educational effectiveness, such as transfer of learning, learner engagement, and suspension of disbelief. Given that these concepts more directly influence properties of the learning experience, the authors make the following recommendations: (1) abandon the term fidelity in simulation-based health professions education and replace it with terms reflecting the underlying primary concepts of physical resemblance and functional task alignment; (2) make a shift away from the current emphasis on physical resemblance to a focus on functional correspondence between the simulator and the applied context; and (3) focus on methods to enhance educational effectiveness using principles of transfer of learning, learner engagement, and suspension of disbelief. These recommendations clarify underlying concepts for researchers in simulation-based health professions education and will help advance this burgeoning field.


Surgery | 2013

Cost: The missing outcome in simulation-based medical education research: A systematic review

Benjamin Zendejas; Amy T. Wang; Ryan Brydges; Stanley J. Hamstra; David A. Cook

BACKGROUND The costs involved with technology-enhanced simulation remain unknown. Appraising the value of simulation-based medical education (SBME) requires complete accounting and reporting of cost. We sought to summarize the quantity and quality of studies that contain an economic analysis of SBME for the training of health professions learners. METHODS We performed a systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Articles reporting original research in any language evaluating the cost of simulation, in comparison with nonsimulation instruction or another simulation intervention, for training practicing and student physicians, nurses, and other health professionals were selected. Reviewers working in duplicate evaluated study quality and abstracted information on learners, instructional design, cost elements, and outcomes. RESULTS From a pool of 10,903 articles we identified 967 comparative studies. Of these, 59 studies (6.1%) reported any cost elements and 15 (1.6%) provided information on cost compared with another instructional approach. We identified 11 cost components reported, most often the cost of the simulator (n = 42 studies; 71%) and training materials (n = 21; 36%). Ten potential cost components were never reported. The median number of cost components reported per study was 2 (range, 1-9). Only 12 studies (20%) reported cost in the Results section; most reported it in the Discussion (n = 34; 58%). CONCLUSION Cost reporting in SBME research is infrequent and incomplete. We propose a comprehensive model for accounting and reporting costs in SBME.


Simulation in Healthcare | 2012

Comparative effectiveness of technology-enhanced simulation versus other instructional methods: a systematic review and meta-analysis.

David A. Cook; Ryan Brydges; Stanley J. Hamstra; Benjamin Zendejas; Jason H. Szostek; Amy T. Wang; Patricia J. Erwin; Rose Hatala

To determine the comparative effectiveness of technology-enhanced simulation, we summarized the results of studies comparing technology-enhanced simulation training with nonsimulation instruction for health professions learners. We systematically searched databases including MEDLINE, Embase, and Scopus through May 2011 for relevant articles. Working in duplicate, we abstracted information on instructional design, outcomes, and study quality. From 10,903 candidate articles, we identified 92 eligible studies. In random-effects meta-analysis, pooled effect sizes (positive numbers favoring simulation) were as follows: satisfaction outcomes, 0.59 (95% confidence interval, 0.36–0.81; n = 20 studies); knowledge, 0.30 (0.16–0.43; n = 42); time measure of skills, 0.33 (0.00–0.66; n = 14); process measure of skills, 0.38 (0.24–0.52; n = 51); product measure of skills, 0.66 (0.30–1.02; n = 11); time measure of behavior, 0.56 (−0.07 to 1.18; n = 7); process measure of behavior, 0.77 (−0.13 to 1.66; n = 11); and patient effects, 0.36 (−0.06 to 0.78; n = 9). For 5 studies reporting comparative costs, simulation was more expensive and more effective. In summary, in comparison with other instruction, technology-enhanced simulation is associated with small to moderate positive effects.
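Each pooled estimate above is reported as an effect size with a 95% confidence interval. In inverse-variance meta-analysis the interval half-width maps back to a standard error, and hence a study weight; a minimal sketch using the satisfaction outcome quoted in the abstract (0.59, CI 0.36–0.81):

# Recover the standard error and inverse-variance weight implied by a
# reported 95% confidence interval (normal approximation, z = 1.96).
lower, upper = 0.36, 0.81           # satisfaction outcome CI from the abstract
se = (upper - lower) / (2 * 1.96)   # about 0.115
weight = 1 / se**2                  # inverse-variance weight, about 76
print(round(se, 3), round(weight, 1))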


Annals of Surgery | 2013

State of the evidence on simulation-based training for laparoscopic surgery: a systematic review.

Benjamin Zendejas; Ryan Brydges; Stanley J. Hamstra; David A. Cook

Objective: Summarize the outcomes and best practices of simulation training for laparoscopic surgery. Background: Simulation-based training for laparoscopic surgery has become a mainstay of surgical training. Much new evidence has accrued since previous reviews were published. Methods: We systematically searched the literature through May 2011 for studies evaluating simulation, in comparison with no intervention or an alternate training activity, for training health professionals in laparoscopic surgery. Outcomes were classified as satisfaction; skills (in a test setting), measured as time (to perform the task), process (eg, performance rating), or product (eg, knot strength); and behaviors when caring for patients. We used random effects to pool effect sizes. Results: From 10,903 articles screened, we identified 219 eligible studies enrolling 7138 trainees, including 91 (42%) randomized trials. For comparisons with no intervention (n = 151 studies), pooled effect size (ES) favored simulation for outcomes of knowledge (1.18; N = 9 studies), skills time (1.13; N = 89), skills process (1.23; N = 114), skills product (1.09; N = 7), behavior time (1.15; N = 7), behavior process (1.22; N = 15), and patient effects (1.28; N = 1), all P < 0.05. When compared with nonsimulation instruction (n = 3 studies), results significantly favored simulation for outcomes of skills time (ES, 0.75) and skills process (ES, 0.54). Comparisons between different simulation interventions (n = 79 studies) clarified best practices. For example, in comparison with virtual reality, box trainers have similar effects for process skills outcomes and seem to be superior for outcomes of satisfaction and skills time. Conclusions: Simulation-based laparoscopic surgery training of health professionals has large benefits when compared with no intervention and is moderately more effective than nonsimulation instruction.


Medical Education | 2014

Debriefing for technology-enhanced simulation: A systematic review and meta-analysis

Adam Cheng; Walter Eppich; Vincent Grant; Jonathan Sherbino; Benjamin Zendejas; David A. Cook

Debriefing is a common feature of technology‐enhanced simulation (TES) education. However, evidence for its effectiveness remains unclear. We sought to characterise how debriefing is reported in the TES literature, identify debriefing features that are associated with improved outcomes, and evaluate the effectiveness of debriefing when combined with TES.


Academic Medicine | 2013

Mastery learning for health professionals using technology-enhanced simulation: a systematic review and meta-analysis.

David A. Cook; Ryan Brydges; Benjamin Zendejas; Stanley J. Hamstra; Rose Hatala

Purpose Competency-based education requires individualization of instruction. Mastery learning, an instructional approach requiring learners to achieve a defined proficiency before proceeding to the next instructional objective, offers one approach to individualization. The authors sought to summarize the quantitative outcomes of mastery learning simulation-based medical education (SBME) in comparison with no intervention and nonmastery instruction, and to determine what features of mastery SBME make it effective. Method The authors searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. They included original research in any language evaluating mastery SBME, in comparison with any intervention or no intervention, for practicing and student physicians, nurses, and other health professionals. Working in duplicate, they abstracted information on trainees, instructional design (interactivity, feedback, repetitions, and learning time), study design, and outcomes. Results They identified 82 studies evaluating mastery SBME. In comparison with no intervention, mastery SBME was associated with large effects on skills (41 studies; effect size [ES] 1.29 [95% confidence interval, 1.08–1.50]) and moderate effects on patient outcomes (11 studies; ES 0.73 [95% CI, 0.36–1.10]). In comparison with nonmastery SBME instruction, mastery learning was associated with large benefit in skills (3 studies; effect size 1.17 [95% CI, 0.29–2.05]) but required more time. Pretraining and additional practice improved outcomes but, again, took longer. Studies exploring enhanced feedback and self-regulated learning in the mastery model showed mixed results. Conclusions Limited evidence suggests that mastery learning SBME is superior to nonmastery instruction but takes more time.
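Mastery learning, as defined in this review, gates progression on reaching a set proficiency standard rather than on a fixed amount of practice. A minimal sketch of that training loop; the threshold, scoring callable, and attempt cap are hypothetical placeholders, not parameters drawn from the reviewed curricula.

def train_to_mastery(attempt_score, mastery_threshold=0.9, max_attempts=50):
    """Repeat simulated practice until one attempt meets the mastery threshold.

    attempt_score: callable returning a score in [0, 1] for a single attempt.
    Returns the number of attempts used, or None if the cap is reached first.
    """
    for attempt in range(1, max_attempts + 1):
        if attempt_score() >= mastery_threshold:
            return attempt   # proficiency reached; learner advances to the next objective
    return None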


Academic Medicine | 2013

Technology-enhanced simulation to assess health professionals: a systematic review of validity evidence, research methods, and reporting quality.

David A. Cook; Ryan Brydges; Benjamin Zendejas; Stanley J. Hamstra; Rose Hatala

Purpose To summarize the tool characteristics, sources of validity evidence, methodological quality, and reporting quality for studies of technology-enhanced simulation-based assessments for health professions learners. Method The authors conducted a systematic review, searching MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous reviews through May 2011. They selected original research in any language evaluating simulation-based assessment of practicing and student physicians, nurses, and other health professionals. Reviewers working in duplicate evaluated validity evidence using Messick’s five-source framework; methodological quality using the Medical Education Research Study Quality Instrument and the revised Quality Assessment of Diagnostic Accuracy Studies; and reporting quality using the Standards for Reporting Diagnostic Accuracy and Guidelines for Reporting Reliability and Agreement Studies. Results Of 417 studies, 350 (84%) involved physicians at some stage in training. Most focused on procedural skills, including minimally invasive surgery (N = 142), open surgery (81), and endoscopy (67). Common elements of validity evidence included relations with trainee experience (N = 306), content (142), relations with other measures (128), and interrater reliability (124). Of the 217 studies reporting more than one element of evidence, most were judged as having high or unclear risk of bias due to selective sampling (N = 192) or test procedures (132). Only 64% proposed a plan for interpreting the evidence to be presented (validity argument). Conclusions Validity evidence for simulation-based assessments is sparse and is concentrated within specific specialties, tools, and sources of validity evidence. The methodological and reporting quality of assessment studies leaves much room for improvement.

Collaboration


Dive into Benjamin Zendejas's collaborations.

Top Co-Authors

Rose Hatala

University of British Columbia
