
Publication


Featured research published by Jonathan Sherbino.


Academic Emergency Medicine | 2013

Technology-enhanced Simulation in Emergency Medicine: A Systematic Review and Meta-Analysis

Jonathan S. Ilgen; Jonathan Sherbino; David A. Cook

OBJECTIVES: Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs. Evidence for its effectiveness, however, remains unclear. The objective of this study was to evaluate the effectiveness of technology-enhanced simulation for training in EM and to identify instructional design features associated with improved outcomes by conducting a systematic review.

METHODS: The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purposes of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes.

RESULTS: From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I² ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects, p = 0.12). Among comparisons between simulation and other forms of instruction, the pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1). Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes.

CONCLUSIONS: Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design.
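
The results above rest on random-effects pooling of per-study effect sizes and the I² heterogeneity statistic. Below is a minimal sketch of that computation using the common DerSimonian-Laird estimator (the abstract does not say which estimator the authors used); the effect sizes and variances are hypothetical, not the study's data.

```python
# A minimal sketch of random-effects pooling (DerSimonian-Laird) and the
# I^2 heterogeneity statistic, as used in meta-analyses like the one above.
# The inputs below are made-up illustrations, NOT the study's data.
import numpy as np

def dersimonian_laird(y, v):
    """Pool per-study effect sizes y with within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

# Hypothetical standardized mean differences from five simulation studies.
effects = [1.10, 1.45, 0.60, 1.30, 0.95]
variances = [0.04, 0.09, 0.06, 0.05, 0.08]
est, se, i2 = dersimonian_laird(effects, variances)
print(f"pooled effect = {est:.2f} (SE {se:.2f}), I^2 = {i2:.0f}%")
```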


Medical Teacher | 2010

Competency-based medical education: theory to practice

Jason R. Frank; Linda Snell; Olle ten Cate; Eric S. Holmboe; Carol Carraccio; Susan R. Swing; Peter Harris; Nicholas Glasgow; Craig Campbell; Deepak Dath; Ronald M. Harden; William Iobst; Donlin M. Long; Rani Mungroo; Denyse Richardson; Jonathan Sherbino; Ivan Silver; Sarah Taber; Martin Talbot; Kenneth A. Harris

Although competency-based medical education (CBME) has attracted renewed interest in recent years among educators and policy-makers in the health care professions, there is little agreement on many aspects of this paradigm. We convened a unique partnership – the International CBME Collaborators – to examine conceptual issues and current debates in CBME. We engaged in a multi-stage group process and held a consensus conference with the aim of reviewing the scholarly literature of competency-based medical education, identifying controversies in need of clarification, proposing definitions and concepts that could be useful to educators across many jurisdictions, and exploring future directions for this approach to preparing health professionals. In this paper, we describe the evolution of CBME from the outcomes movement in the 20th century to a renewed approach that, focused on accountability and curricular outcomes and organized around competencies, promotes greater learner-centredness and de-emphasizes time-based curricular design. In this paradigm, competence and related terms are redefined to emphasize their multi-dimensional, dynamic, developmental, and contextual nature. CBME therefore has significant implications for the planning of medical curricula and will have an important impact in reshaping the enterprise of medical education. We elaborate on this emerging CBME approach and its related concepts, and invite medical educators everywhere to enter into further dialogue about the promise and the potential perils of competency-based medical curricula for the 21st century.


Medical Teacher | 2010

The role of assessment in competency-based medical education

Eric S. Holmboe; Jonathan Sherbino; Donlin M. Long; Susan R. Swing; Jason R. Frank

Competency-based medical education (CBME), by definition, necessitates a robust and multifaceted assessment system. Assessment and the judgments or evaluations that arise from it are important at the level of the trainee, the program, and the public. When designing an assessment system for CBME, medical education leaders must attend to the context of the multiple settings where clinical training occurs. CBME further requires assessment processes that are more continuous and frequent, criterion-based, developmental, and work-based where possible; that use assessment methods and tools meeting minimum requirements for quality; that use both quantitative and qualitative measures and methods; and that involve the wisdom of group process in making judgments about trainee progress. Like all changes in medical education, CBME is a work in progress. Given the importance of assessment and evaluation for CBME, the medical education community will need more collaborative research to address several major challenges in assessment, including “best practices” in the context of systems and institutional culture and how best to train faculty to be better evaluators. Finally, we must remember that expertise, not competence, is the ultimate goal. CBME does not end with graduation from a training program, but should represent a career that includes ongoing assessment.


Medical Teacher | 2010

Competency-based medical education in postgraduate medical education

William Iobst; Jonathan Sherbino; Olle ten Cate; Denyse Richardson; Deepak Dath; Susan R. Swing; Peter Harris; Rani Mungroo; Eric S. Holmboe; Jason R. Frank

With the introduction of Tomorrow's Doctors in 1993, medical education began the transition from a time- and process-based system to a competency-based training framework. Implementing competency-based training in postgraduate medical education poses many challenges but ultimately requires a demonstration that the learner is truly competent to progress in training or to the next phase of a professional career. Making this transition requires change at virtually all levels of postgraduate training. Key components of this change include the development of valid and reliable assessment tools such as work-based assessment using direct observation, frequent formative feedback, and learner self-directed assessment; active involvement of the learner in the educational process; and intensive faculty development that addresses curricular design and the assessment of competency.


Medical Education | 2014

Debriefing for technology-enhanced simulation: A systematic review and meta-analysis

Adam Cheng; Walter Eppich; Vincent Grant; Jonathan Sherbino; Benjamin Zendejas; David A. Cook

Debriefing is a common feature of technology‐enhanced simulation (TES) education. However, evidence for its effectiveness remains unclear. We sought to characterise how debriefing is reported in the TES literature, identify debriefing features that are associated with improved outcomes, and evaluate the effectiveness of debriefing when combined with TES.


Academic Medicine | 2014

The etiology of diagnostic errors: a controlled trial of system 1 versus system 2 reasoning.

Geoffrey R. Norman; Jonathan Sherbino; Kelly L. Dore; Timothy J. Wood; Meredith Young; Wolfgang Gaissmaier; Sharyn Kreuger; Sandra Monteiro

Purpose: Diagnostic errors are thought to arise from cognitive biases associated with System 1 reasoning, which is rapid and unconscious. The primary hypothesis of this study was that the instruction to be slow and thorough would have no advantage in diagnostic accuracy over the instruction to proceed rapidly.

Method: Participants were second-year residents who volunteered after they had taken the Medical Council of Canada (MCC) Qualifying Examination Part II. Participants were tested at three Canadian medical schools (McMaster, Ottawa, and McGill) in 2010 (n = 96) and 2011 (n = 108). The intervention consisted of 20 computer-based internal medicine cases, with instructions either (1) to be as quick as possible but not make mistakes (the Speed cohort, 2010), or (2) to be careful, thorough, and reflective (the Reflect cohort, 2011). The authors examined accuracy scores on the 20 cases, time taken to diagnose cases, and MCC examination performance.

Results: Overall accuracy was 44.5% in the Speed condition and 45.0% in the Reflect condition; this difference was not significant. The Speed cohort took an average of 69 seconds per case versus 89 seconds for the Reflect cohort (P < .001). In both cohorts, cases diagnosed incorrectly took an average of 17 seconds longer than cases diagnosed correctly. Diagnostic accuracy was moderately correlated with performance on both the written and problem-solving components of the MCC licensure examination and inversely correlated with time.

Conclusions: The study demonstrates that simply encouraging slowing down and increasing attention to analytical thinking is insufficient to increase diagnostic accuracy.
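
A between-cohort comparison like the one reported above (accuracy and seconds per case, Speed vs. Reflect) can be sketched as two independent-samples tests. The data below are simulated to mimic the reported means; they are not the study's dataset, and the authors' exact analysis may have differed.

```python
# A minimal sketch of the between-cohort comparisons reported above,
# using simulated per-participant data rather than the study's dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant accuracy (proportion of 20 cases correct)
# and mean seconds per case for each cohort; means echo the abstract,
# spreads are assumed.
speed_acc    = rng.normal(0.445, 0.10, size=96)
reflect_acc  = rng.normal(0.450, 0.10, size=108)
speed_time   = rng.normal(69, 15, size=96)
reflect_time = rng.normal(89, 18, size=108)

t_acc, p_acc = stats.ttest_ind(speed_acc, reflect_acc, equal_var=False)
t_time, p_time = stats.ttest_ind(speed_time, reflect_time, equal_var=False)
print(f"accuracy: t = {t_acc:.2f}, p = {p_acc:.3f}")    # expect non-significant
print(f"time:     t = {t_time:.2f}, p = {p_time:.3f}")  # expect significant
```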


Medical Teacher | 2010

Competency-based continuing professional development

Craig Campbell; Ivan Silver; Jonathan Sherbino; Olle ten Cate; Eric S. Holmboe

Competence is traditionally viewed as the attainment of a static set of attributes rather than a dynamic process in which physicians continuously use their practice experiences to “progress in competence” toward the attainment of expertise. A competency-based continuing professional development (CPD) model is premised on a set of learning competencies that include the ability to (a) use practice information to identify learning priorities and to develop and monitor CPD plans; (b) access information sources for innovations in development and new evidence that may potentially be integrated into practice; (c) establish a personal knowledge management system to store and retrieve evidence and to select and manage learning projects; (d) construct questions, search for evidence, and record and track conclusions for practice; and (e) use tools and processes to measure competence and performance and develop action plans to enhance practice. Competency-based CPD emphasizes self-directed learning processes and promotes the role of assessment as a professional expectation and obligation. Various approaches to defining general competencies for practice require the creation of specific performance metrics to be meaningful and relevant to the lifelong learning strategies of physicians. This paper describes the assumptions, advantages, and challenges of establishing a CPD system focused on competencies that improve physician performance and the quality and safety of patient care. Implications for competency-based CPD are discussed from an individual and organizational perspective, and a model to bridge the transition from residency to practice is explored.


Academic Medicine | 2012

The relationship between response time and diagnostic accuracy

Jonathan Sherbino; Kelly L. Dore; Timothy J. Wood; Meredith Young; Wolfgang Gaissmaier; Sharyn Kreuger; Geoffrey R. Norman

Purpose: Psychologists theorize that cognitive reasoning involves two distinct processes: System 1, which is rapid, unconscious, and contextual, and System 2, which is slow, logical, and rational. According to the literature, diagnostic errors arise primarily from System 1 reasoning, and therefore they are associated with rapid diagnosis. This study tested whether accuracy is associated with shorter or longer times to diagnosis.

Method: Immediately after the 2010 administration of the Medical Council of Canada Qualifying Examination (MCCQE) Part II at three test centers, the authors recruited participants, who read and diagnosed a series of 25 written cases of varying difficulty. The authors computed accuracy and response time (RT) for each case.

Results: Seventy-five Canadian medical graduates (of 95 potential participants) participated. The overall correlation between RT and accuracy was −0.54; accuracy, then, was strongly associated with more rapid RT. This negative relationship with RT held for 23 of 25 cases individually and overall when the authors controlled for participants’ knowledge, as judged by their MCCQE Part I and II scores. For 19 of 25 cases, accuracy on each case was positively related to experience with that specific diagnosis. A participant’s performance on the test overall was significantly correlated with his or her performance on both the MCCQE Part I and II.

Conclusions: These results are inconsistent with clinical reasoning models that presume that System 1 reasoning is necessarily more error prone than System 2. These results suggest instead that rapid diagnosis is accurate and relates to other measures of competence.
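
The core analysis above is a correlation between response time and accuracy, additionally controlling for knowledge. A minimal sketch follows, computing a Pearson correlation and then a partial correlation from regression residuals; all data are simulated, and the study's actual modeling may have differed.

```python
# A minimal sketch of a response-time vs. accuracy analysis with a
# knowledge score controlled for via regression residuals. All simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 75
knowledge = rng.normal(0, 1, n)                    # stand-in for MCCQE scores
rt = 60 - 5 * knowledge + rng.normal(0, 8, n)      # faster with more knowledge
accuracy = (0.5 + 0.05 * knowledge
            - 0.004 * (rt - 60) + rng.normal(0, 0.05, n))

r, p = stats.pearsonr(rt, accuracy)
print(f"RT vs accuracy: r = {r:.2f}, p = {p:.3f}")  # expect a negative r

def residuals(y, x):
    """Residuals of y after removing a linear effect of x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: correlate what is left of RT and accuracy once the
# linear contribution of knowledge has been removed from each.
r_part, p_part = stats.pearsonr(residuals(rt, knowledge),
                                residuals(accuracy, knowledge))
print(f"controlling for knowledge: r = {r_part:.2f}, p = {p_part:.3f}")
```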


Canadian Journal of Emergency Medicine | 2015

The use of free online educational resources by Canadian emergency medicine residents and program directors

Eve Purdy; Joseph Bednarczyk; David Migneault; Jonathan Sherbino

Introduction: Online educational resources (OERs) are increasingly available for emergency medicine (EM) education. This study describes and compares the use of free OERs by Royal College of Physicians and Surgeons of Canada (RCPSC) EM residents and program directors (PDs) and investigates the relationship between the use of OERs and peer-reviewed literature.

Methods: A bilingual, online survey was distributed to RCPSC-EM residents and PDs using a modified Dillman method. The chi-square test and Fisher's exact test were used to compare the responses of residents and PDs.

Results: The survey was completed by 214/350 (61%) residents and 11/14 (79%) PDs. Free OERs were used by residents most frequently for general EM education (99.5%), procedural skills training (96%), and learning to interpret diagnostic tests (92%). The OER modalities used most frequently included wikis (95%), file-sharing websites (95%), e-textbooks (94%), and podcasts (91%). Residents used wikis, podcasts, vodcasts, and file-sharing websites significantly more frequently than PDs. Relative to PDs, residents found entertainment value to be more important in choosing OERs (p < 0.01). Some residents (23%) did not feel that literature references were important, whereas all PDs did. Both groups reported that OERs increased the amount of peer-reviewed literature they read (75% and 60%, respectively).

Conclusions: EM residents make extensive use of OERs and differ from their PDs in the importance that they place on entertainment value and the incorporation of peer-reviewed references. OERs may increase the use of peer-reviewed literature in both groups. Given the prevalence of OER use for core educational goals among RCPSC-EM trainees, future efforts to facilitate critical appraisal and appropriate resource selection are warranted.
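
The resident-vs-PD comparisons above rest on 2×2 contingency tables analyzed with the chi-square test and, given the small PD group, Fisher's exact test. A minimal sketch with a hypothetical table follows; the counts are illustrative, not taken from the study.

```python
# A minimal sketch of a resident-vs-PD contingency-table comparison.
# Counts are hypothetical (roughly echoing 91% podcast use among 214
# residents), NOT the study's data.
from scipy import stats

# Rows: residents, PDs; columns: use podcasts, do not use podcasts.
table = [[195, 19],
         [5, 6]]

odds_ratio, p_fisher = stats.fisher_exact(table)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.4f}")
print(f"chi-square:     chi2 = {chi2:.2f}, p = {p_chi2:.4f}")
```

Fisher's exact test is the safer choice here because several expected cell counts fall below 5, where the chi-square approximation is unreliable.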


Academic Medicine | 2017

The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking

Geoffrey R. Norman; Sandra Monteiro; Jonathan Sherbino; Jonathan S. Ilgen; Henk G. Schmidt; Sílvia Mamede

Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.

Collaboration


Dive into Jonathan Sherbino's collaboration.

Top Co-Authors

Jinhui Ma

Children's Hospital of Eastern Ontario
