Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rose Hatala is active.

Publication


Featured research published by Rose Hatala.


JAMA | 2011

Technology-Enhanced Simulation for Health Professions Education: A Systematic Review and Meta-analysis

David A. Cook; Rose Hatala; Ryan Brydges; Benjamin Zendejas; Jason H. Szostek; Amy T. Wang; Patricia J. Erwin; Stanley J. Hamstra

CONTEXT Although technology-enhanced simulation has widespread appeal, its effectiveness remains uncertain. A comprehensive synthesis of evidence may inform the use of simulation in health professions education. OBJECTIVE To summarize the outcomes of technology-enhanced simulation training for health professions learners in comparison with no intervention. DATA SOURCES Systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. STUDY SELECTION Original research in any language evaluating simulation compared with no intervention for training practicing and student physicians, nurses, dentists, and other health care professionals. DATA EXTRACTION Reviewers working in duplicate evaluated quality and abstracted information on learners, instructional design (curricular integration, distributing training over multiple days, feedback, mastery learning, and repetitive practice), and outcomes. We coded skills (performance in a test setting) separately for time, process, and product measures, and similarly classified patient care behaviors. DATA SYNTHESIS From a pool of 10,903 articles, we identified 609 eligible studies enrolling 35,226 trainees. Of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design. We pooled effect sizes using random effects. Heterogeneity was large (I² > 50%) in all main analyses. In comparison with no intervention, pooled effect sizes were 1.20 (95% CI, 1.04-1.35) for knowledge outcomes (n = 118 studies), 1.14 (95% CI, 1.03-1.25) for time skills (n = 210), 1.09 (95% CI, 1.03-1.16) for process skills (n = 426), 1.18 (95% CI, 0.98-1.37) for product skills (n = 54), 0.79 (95% CI, 0.47-1.10) for time behaviors (n = 20), 0.81 (95% CI, 0.66-0.96) for other behaviors (n = 50), and 0.50 (95% CI, 0.34-0.66) for direct effects on patients (n = 32). Subgroup analyses revealed no consistent statistically significant interactions between simulation training and instructional design features or study quality. CONCLUSION In comparison with no intervention, technology-enhanced simulation training in health professions education is consistently associated with large effects for outcomes of knowledge, skills, and behaviors and moderate effects for patient-related outcomes.
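The pooled estimates quoted above come from random-effects meta-analysis of standardized effect sizes. As an illustrative sketch only (not the authors' analysis code; the study data and function names below are hypothetical), a DerSimonian-Laird random-effects pool and the I² inconsistency statistic can be computed as follows:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study effect sizes with DerSimonian-Laird random effects.

    effects, variances: per-study standardized effect sizes and their variances
    (hypothetical inputs, not data from the review).
    Returns (pooled effect, 95% CI lower, 95% CI upper, I^2 in percent).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect (inverse-variance) weights
    fe_mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - fe_mean) ** 2 for wi, yi in zip(w, effects))
    df = k - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2

# Hypothetical effect sizes (Hedges g) and variances from five studies
effects = [1.4, 0.9, 1.1, 1.6, 0.7]
variances = [0.08, 0.05, 0.12, 0.20, 0.06]
pooled, lo, hi, i2 = dersimonian_laird(effects, variances)
print(f"pooled ES {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```

When I² exceeds roughly 50%, as in the review's main analyses, between-study variability exceeds what chance alone would explain, which is one reason random-effects pooling is typically preferred in that setting.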


Medical Teacher | 2013

Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis

David A. Cook; Stanley J. Hamstra; Ryan Brydges; Benjamin Zendejas; Jason H. Szostek; Amy T. Wang; Patricia J. Erwin; Rose Hatala

Background: Although technology-enhanced simulation is increasingly used in health professions education, features of effective simulation-based instructional design remain uncertain. Aims: Evaluate the effectiveness of instructional design features through a systematic review of studies comparing different simulation-based interventions. Methods: We systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. We included original research studies that compared one simulation intervention with another and involved health professions learners. Working in duplicate, we evaluated study quality and abstracted information on learners, outcomes, and instructional design features. We pooled results using random effects meta-analysis. Results: From a pool of 10 903 articles we identified 289 eligible studies enrolling 18 971 trainees, including 208 randomized trials. Inconsistency was usually large (I² > 50%). For skills outcomes, pooled effect sizes (positive numbers favoring the instructional design feature) were 0.68 for range of difficulty (20 studies; p < 0.001), 0.68 for repetitive practice (7 studies; p = 0.06), 0.66 for distributed practice (6 studies; p = 0.03), 0.65 for interactivity (89 studies; p < 0.001), 0.62 for multiple learning strategies (70 studies; p < 0.001), 0.52 for individualized learning (59 studies; p < 0.001), 0.45 for mastery learning (3 studies; p = 0.57), 0.44 for feedback (80 studies; p < 0.001), 0.34 for longer time (23 studies; p = 0.005), 0.20 for clinical variation (16 studies; p = 0.24), and −0.22 for group training (8 studies; p = 0.09). Conclusions: These results confirm quantitatively the effectiveness of several instructional design features in simulation-based education.


JAMA | 2014

How to Read a Systematic Review and Meta-analysis and Apply the Results to Patient Care: Users’ Guides to the Medical Literature

Mohammad Hassan Murad; Victor M. Montori; John P. A. Ioannidis; Roman Jaeschke; Philip J. Devereaux; Kameshwar Prasad; Ignacio Neumann; Alonso Carrasco-Labra; Thomas Agoritsas; Rose Hatala; Maureen O. Meade; Peter C. Wyer; Deborah J. Cook; Gordon H. Guyatt

Clinical decisions should be based on the totality of the best evidence and not the results of individual studies. When clinicians apply the results of a systematic review or meta-analysis to patient care, they should start by evaluating the credibility of the methods of the systematic review, ie, the extent to which these methods have likely protected against misleading results. Credibility depends on whether the review addressed a sensible clinical question; included an exhaustive literature search; demonstrated reproducibility of the selection and assessment of studies; and presented results in a useful manner. For reviews that are sufficiently credible, clinicians must decide on the degree of confidence in the estimates that the evidence warrants (quality of evidence). Confidence depends on the risk of bias in the body of evidence; the precision and consistency of the results; whether the results directly apply to the patient of interest; and the likelihood of reporting bias. Shared decision making requires understanding of the estimates of magnitude of beneficial and harmful effects, and confidence in those estimates.


Canadian Medical Association Journal | 2004

Tips for learners of evidence-based medicine: 1. Relative risk reduction, absolute risk reduction and number needed to treat

Alexandra Barratt; Peter C. Wyer; Rose Hatala; Thomas McGinn; Antonio L. Dans; Sheri A. Keitz; Virginia A. Moyer; Gordon H. Guyatt

Physicians, patients and policy-makers are influenced not only by the results of studies but also by how authors present the results.[1–4] Depending on which measures of effect authors choose, the impact of an intervention may appear very large or quite small, even though the
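The three measures in the title are related by simple arithmetic: given a control event rate (CER) and an experimental event rate (EER), the absolute risk reduction is ARR = CER − EER, the relative risk reduction is RRR = ARR / CER, and the number needed to treat is NNT = 1 / ARR. A minimal sketch with made-up event rates (not figures from the article) shows how the same effect can look large or small depending on the measure reported:

```python
def effect_measures(cer: float, eer: float):
    """Compute ARR, RRR, and NNT from control and experimental event rates.

    cer, eer: proportions of patients with the outcome in the control and
    experimental groups (hypothetical values, not data from the article).
    """
    arr = cer - eer              # absolute risk reduction
    rrr = arr / cer              # relative risk reduction
    nnt = 1.0 / arr              # number needed to treat
    return arr, rrr, nnt

# Example: outcome occurs in 4% of controls and 3% of treated patients
arr, rrr, nnt = effect_measures(0.04, 0.03)
print(f"ARR = {arr:.3f}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# ARR = 0.010, RRR = 25%, NNT = 100
```

In this hypothetical case, a 25% relative risk reduction sounds substantial while treating 100 patients to prevent one event sounds modest, even though both describe the same effect.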


Academic Medicine | 2014

Reconsidering fidelity in simulation-based training.

Stanley J. Hamstra; Ryan Brydges; Rose Hatala; Benjamin Zendejas; David A. Cook

In simulation-based health professions education, the concept of simulator fidelity is usually understood as the degree to which a simulator looks, feels, and acts like a human patient. Although this can be a useful guide in designing simulators, this definition emphasizes technological advances and physical resemblance over principles of educational effectiveness. In fact, several empirical studies have shown that the degree of fidelity appears to be independent of educational effectiveness. The authors confronted these issues while conducting a recent systematic review of simulation-based health professions education, and in this Perspective they use their experience in conducting that review to examine key concepts and assumptions surrounding the topic of fidelity in simulation. Several concepts typically associated with fidelity are more useful in explaining educational effectiveness, such as transfer of learning, learner engagement, and suspension of disbelief. Given that these concepts more directly influence properties of the learning experience, the authors make the following recommendations: (1) abandon the term fidelity in simulation-based health professions education and replace it with terms reflecting the underlying primary concepts of physical resemblance and functional task alignment; (2) make a shift away from the current emphasis on physical resemblance to a focus on functional correspondence between the simulator and the applied context; and (3) focus on methods to enhance educational effectiveness using principles of transfer of learning, learner engagement, and suspension of disbelief. These recommendations clarify underlying concepts for researchers in simulation-based health professions education and will help advance this burgeoning field.


Simulation in Healthcare | 2012

Comparative effectiveness of technology-enhanced simulation versus other instructional methods: a systematic review and meta-analysis.

David A. Cook; Ryan Brydges; Stanley J. Hamstra; Benjamin Zendejas; Jason H. Szostek; Amy T. Wang; Patricia J. Erwin; Rose Hatala

To determine the comparative effectiveness of technology-enhanced simulation, we summarized the results of studies comparing technology-enhanced simulation training with nonsimulation instruction for health professions learners. We systematically searched databases including MEDLINE, Embase, and Scopus through May 2011 for relevant articles. Working in duplicate, we abstracted information on instructional design, outcomes, and study quality. From 10,903 candidate articles, we identified 92 eligible studies. In random-effects meta-analysis, pooled effect sizes (positive numbers favoring simulation) were as follows: satisfaction outcomes, 0.59 (95% confidence interval, 0.36–0.81; n = 20 studies); knowledge, 0.30 (0.16–0.43; n = 42); time measure of skills, 0.33 (0.00–0.66; n = 14); process measure of skills, 0.38 (0.24–0.52; n = 51); product measure of skills, 0.66 (0.30–1.02; n = 11); time measure of behavior, 0.56 (−0.07 to 1.18; n = 7); process measure of behavior, 0.77 (−0.13 to 1.66; n = 11); and patient effects, 0.36 (−0.06 to 0.78; n = 9). For 5 studies reporting comparative costs, simulation was more expensive and more effective. In summary, in comparison with other instruction, technology-enhanced simulation is associated with small to moderate positive effects.


Academic Medicine | 2013

Mastery learning for health professionals using technology-enhanced simulation: a systematic review and meta-analysis.

David A. Cook; Ryan Brydges; Benjamin Zendejas; Stanley J. Hamstra; Rose Hatala

Purpose Competency-based education requires individualization of instruction. Mastery learning, an instructional approach requiring learners to achieve a defined proficiency before proceeding to the next instructional objective, offers one approach to individualization. The authors sought to summarize the quantitative outcomes of mastery learning simulation-based medical education (SBME) in comparison with no intervention and nonmastery instruction, and to determine what features of mastery SBME make it effective. Method The authors searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. They included original research in any language evaluating mastery SBME, in comparison with any intervention or no intervention, for practicing and student physicians, nurses, and other health professionals. Working in duplicate, they abstracted information on trainees, instructional design (interactivity, feedback, repetitions, and learning time), study design, and outcomes. Results They identified 82 studies evaluating mastery SBME. In comparison with no intervention, mastery SBME was associated with large effects on skills (41 studies; effect size [ES] 1.29 [95% confidence interval, 1.08–1.50]) and moderate effects on patient outcomes (11 studies; ES 0.73 [95% CI, 0.36–1.10]). In comparison with nonmastery SBME instruction, mastery learning was associated with large benefit in skills (3 studies; effect size 1.17 [95% CI, 0.29–2.05]) but required more time. Pretraining and additional practice improved outcomes but, again, took longer. Studies exploring enhanced feedback and self-regulated learning in the mastery model showed mixed results. Conclusions Limited evidence suggests that mastery learning SBME is superior to nonmastery instruction but takes more time.


Canadian Medical Association Journal | 2005

Tips for learners of evidence-based medicine: 4. Assessing heterogeneity of primary studies in systematic reviews and whether to combine their results

Rose Hatala; Sheri A. Keitz; Peter C. Wyer; Gordon H. Guyatt

Clinicians wishing to quickly answer a clinical question may seek a systematic review, rather than searching for primary articles. Such a review is also called a meta-analysis when the investigators have used statistical techniques to combine results across studies. Databases useful for this purpose
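Heterogeneity across primary studies is commonly assessed with Cochran's Q test and the I² statistic before deciding whether to combine results. The sketch below is illustrative only, using hypothetical study effects and variances rather than anything drawn from the article:

```python
from scipy.stats import chi2

def heterogeneity(effects, variances):
    """Cochran's Q test and I^2 for a set of study effect sizes.

    effects, variances: hypothetical per-study effect sizes and variances.
    Returns (Q, p-value, I^2 in percent).
    """
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    p = chi2.sf(q, df)                                    # upper-tail chi-square p-value
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, p, i2

# Hypothetical treatment effects on the log relative-risk scale from four trials
effects = [-0.22, -0.05, -0.41, 0.10]
variances = [0.02, 0.03, 0.05, 0.04]
q, p, i2 = heterogeneity(effects, variances)
print(f"Q = {q:.1f} (p = {p:.3f}), I^2 = {i2:.0f}%")
# A large I^2 (e.g. > 50%) suggests the studies may be too heterogeneous to pool naively
```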


Academic Medicine | 2013

Technology-enhanced simulation to assess health professionals: a systematic review of validity evidence, research methods, and reporting quality.

David A. Cook; Ryan Brydges; Benjamin Zendejas; Stanley J. Hamstra; Rose Hatala

Purpose To summarize the tool characteristics, sources of validity evidence, methodological quality, and reporting quality for studies of technology-enhanced simulation-based assessments for health professions learners. Method The authors conducted a systematic review, searching MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous reviews through May 2011. They selected original research in any language evaluating simulation-based assessment of practicing and student physicians, nurses, and other health professionals. Reviewers working in duplicate evaluated validity evidence using Messick’s five-source framework; methodological quality using the Medical Education Research Study Quality Instrument and the revised Quality Assessment of Diagnostic Accuracy Studies; and reporting quality using the Standards for Reporting Diagnostic Accuracy and Guidelines for Reporting Reliability and Agreement Studies. Results Of 417 studies, 350 (84%) involved physicians at some stage in training. Most focused on procedural skills, including minimally invasive surgery (N = 142), open surgery (81), and endoscopy (67). Common elements of validity evidence included relations with trainee experience (N = 306), content (142), relations with other measures (128), and interrater reliability (124). Of the 217 studies reporting more than one element of evidence, most were judged as having high or unclear risk of bias due to selective sampling (N = 192) or test procedures (132). Only 64% proposed a plan for interpreting the evidence to be presented (validity argument). Conclusions Validity evidence for simulation-based assessments is sparse and is concentrated within specific specialties, tools, and sources of validity evidence. The methodological and reporting quality of assessment studies leaves much room for improvement.


Medical Education | 2015

A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment

Jonathan S. Ilgen; Irene W. Y. Ma; Rose Hatala; David A. Cook

The relative advantages and disadvantages of checklists and global rating scales (GRSs) have long been debated. To compare the merits of these scale types, we conducted a systematic review of the validity evidence for checklists and GRSs in the context of simulation‐based assessment of health professionals.

Collaboration


Dive into Rose Hatala's collaborations.

Top Co-Authors

Barry O. Kassen
University of British Columbia