Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jonathan S. Ilgen is active.

Publication


Featured research published by Jonathan S. Ilgen.


Academic Emergency Medicine | 2013

Technology-enhanced Simulation in Emergency Medicine: A Systematic Review and Meta-Analysis

Jonathan S. Ilgen; Jonathan Sherbino; David A. Cook

OBJECTIVES: Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs. Evidence for its effectiveness, however, remains unclear. The objective of this study was to evaluate the effectiveness of technology-enhanced simulation for training in EM and identify instructional design features associated with improved outcomes by conducting a systematic review.

METHODS: The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purposes of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes.

RESULTS: From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I² ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects, p = 0.12). Among comparisons between simulation and other forms of instruction, the pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1). Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes.

CONCLUSIONS: Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design.
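For readers unfamiliar with the quantities reported above, the sketch below shows how a random-effects pooled effect size and the I² heterogeneity statistic are typically computed. The abstract does not name the estimator; the common DerSimonian–Laird form is assumed here purely as an illustration.

```latex
% Sketch of random-effects pooling; DerSimonian-Laird estimator assumed
% (the abstract does not state which estimator the authors used).
% k studies, each with effect size g_i and within-study variance v_i.
\[
  w_i = \frac{1}{v_i}, \qquad
  \bar g_{\mathrm{FE}} = \frac{\sum_i w_i g_i}{\sum_i w_i}, \qquad
  Q = \sum_{i=1}^{k} w_i \,(g_i - \bar g_{\mathrm{FE}})^2
\]
\[
  % Between-study variance and heterogeneity:
  \hat\tau^2 = \max\!\Bigl(0,\; \frac{Q-(k-1)}{\sum_i w_i - \sum_i w_i^2/\sum_i w_i}\Bigr),
  \qquad
  I^2 = \max\!\Bigl(0,\; \frac{Q-(k-1)}{Q}\Bigr)\times 100\%
\]
\[
  % Random-effects pooled effect (the kind of value reported above, e.g. 1.13-1.48):
  \hat\mu_{\mathrm{RE}} = \frac{\sum_i w_i^{*} g_i}{\sum_i w_i^{*}},
  \qquad
  w_i^{*} = \frac{1}{v_i + \hat\tau^2}
\]
```

On this reading, the reported I² ≥ 50% means that at least half of the observed variability in effect sizes is attributed to between-study differences rather than sampling error.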


Medical Education | 2015

A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment

Jonathan S. Ilgen; Irene W. Y. Ma; Rose Hatala; David A. Cook

The relative advantages and disadvantages of checklists and global rating scales (GRSs) have long been debated. To compare the merits of these scale types, we conducted a systematic review of the validity evidence for checklists and GRSs in the context of simulation‐based assessment of health professionals.


Academic Medicine | 2017

The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking

Geoffrey R. Norman; Sandra Monteiro; Jonathan Sherbino; Jonathan S. Ilgen; Henk G. Schmidt; Sílvia Mamede

Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.


Academic Medicine | 2013

Comparing diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought.

Jonathan S. Ilgen; Judith L. Bowen; Lucas A. McIntyre; Kenny V. Banh; David Barnes; Wendy C. Coates; Jeffrey Druck; Megan L. Fix; Diane Rimple; Lalena M. Yarris; Kevin W. Eva

Purpose: Although decades of research have yielded considerable insight into physicians’ clinical reasoning processes, assessing these processes remains challenging; thus, the authors sought to compare diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought.

Method: This 2011–2012 multicenter randomized study of 393 clinicians (medical students, postgraduate trainees, and faculty) measured diagnostic accuracy on clinical vignettes under two conditions: one encouraged participants to give their first impression (FI), and the other led participants through a directed search (DS) for the correct diagnosis. The authors compared accuracy, feasibility, reliability, and relation to United States Medical Licensing Examination (USMLE) scores under each condition.

Results: A 2 (instructional condition) × 2 (vignette complexity) × 3 (experience level) analysis of variance revealed no difference in accuracy as a function of instructional condition (F[1,379] = 2.44, P = .12), but demonstrated the expected main effects of vignette complexity (F[1,379] = 965.2, P < .001) and experience (F[2,379] = 39.6, P < .001). Pearson correlations revealed greater associations between assessment scores and USMLE performance in the FI condition than in the DS condition (P < .001). Spearman–Brown calculations consistently indicated that alpha ≥ 0.75 could be achieved more efficiently under the FI condition relative to the DS condition.

Conclusions: Instructions to trust one’s first impressions result in similar performance when compared with instructions to consider clinical information in a systematic fashion, but have greater utility when used for the purposes of assessment.
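The Spearman–Brown calculation mentioned in the Results is the standard prophecy formula; the worked numbers below are purely hypothetical and only illustrate how "achieved more efficiently" translates into fewer vignettes.

```latex
% Spearman-Brown prophecy formula: lengthening factor m needed to move an
% observed reliability alpha_obs to a target alpha_target.
\[
  m = \frac{\alpha_{\mathrm{target}}\,(1-\alpha_{\mathrm{obs}})}
           {\alpha_{\mathrm{obs}}\,(1-\alpha_{\mathrm{target}})}
\]
% Hypothetical illustration (not values from the study): if 10 vignettes gave
% alpha_obs = 0.60, then reaching the alpha >= 0.75 threshold cited above requires
\[
  m = \frac{0.75 \times 0.40}{0.60 \times 0.25} = 2.0
  \quad\Rightarrow\quad 10 \times 2.0 = 20 \text{ vignettes.}
\]
% A condition with higher per-vignette reliability needs a smaller m, i.e.
% fewer vignettes to reach the same alpha -- the sense of "more efficiently".
```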


BMJ Quality & Safety | 2017

Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups

Laura Zwaan; Sandra Monteiro; Jonathan Sherbino; Jonathan S. Ilgen; Betty Howey; Geoffrey R. Norman

Background: Many authors have implicated cognitive biases as a primary cause of diagnostic error. If this is so, then physicians already familiar with common cognitive biases should consistently identify biases present in a clinical workup. The aim of this paper is to determine whether physicians agree on the presence or absence of particular biases in a clinical case workup and how case outcome knowledge affects bias identification.

Methods: We conducted a web survey of 37 physicians. Each participant read eight cases and listed which biases were present from a list provided. In half the cases the outcome implied a correct diagnosis; in the other half, it implied an incorrect diagnosis. We compared the number of biases identified when the outcome implied a correct or incorrect primary diagnosis. Additionally, the agreement among participants about presence or absence of specific biases was assessed.

Results: When the case outcome implied a correct diagnosis, an average of 1.75 cognitive biases were reported; when incorrect, 3.45 biases (F = 71.3, p < 0.00001). Individual biases were reported from 73% to 125% more often when an incorrect diagnosis was implied. There was no agreement on presence or absence of individual biases, with κ ranging from 0.000 to 0.044.

Interpretation: Individual physicians are unable to agree on the presence or absence of individual cognitive biases. Their judgements are heavily influenced by hindsight bias; when the outcome implies a diagnostic error, twice as many biases are identified. The results present challenges for current error reduction strategies based on identification of cognitive biases.
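For context on the κ values of 0.000–0.044: chance-corrected agreement statistics take the general form below. The abstract does not state whether a pairwise Cohen's κ or a multi-rater statistic such as Fleiss' κ was used, so only the generic form is shown, with hypothetical numbers.

```latex
% Chance-corrected agreement: p_o is the observed proportion of agreement,
% p_e the agreement expected by chance alone.
\[
  \kappa = \frac{p_o - p_e}{1 - p_e}
\]
% Hypothetical illustration (not values from the study): p_o = 0.70 and
% p_e = 0.69 give kappa = 0.01 / 0.31 \approx 0.032 -- agreement essentially
% at chance level, which is what the reported range of 0.000-0.044 implies.
```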


Academic Medicine | 2014

Leadership training in health care action teams: a systematic review.

Elizabeth D. Rosenman; Jamie Shandro; Jonathan S. Ilgen; Amy Harper; Rosemarie Fernandez

Purpose: To identify and describe the design, implementation, and evidence of effectiveness of leadership training interventions for health care action (HCA) teams, defined as interdisciplinary teams whose members coordinate their actions in time-pressured, unstable situations.

Method: The authors conducted a systematic search of the PubMed/MEDLINE, CINAHL, ERIC, EMBASE, PsycINFO, and Web of Science databases, key journals, and review articles published through March 2012. They identified peer-reviewed English-language articles describing leadership training interventions targeting HCA teams, at all levels of training and across all health care professions. Reviewers, working in duplicate, abstracted training characteristics and outcome data. Methodological quality was evaluated using the Medical Education Research Study Quality Instrument (MERSQI).

Results: Of the 52 included studies, 5 (10%) focused primarily on leadership training, whereas the remainder included leadership training as part of a larger teamwork curriculum. Few studies reported using a team leadership model (2; 4%) or a theoretical framework (9; 17%) to support their curricular design. Only 15 studies (29%) specified the leadership behaviors targeted by training. Forty-five studies (87%) reported an assessment component; of those, 31 (69%) provided objective outcome measures including assessment of knowledge or skills (21; 47%), behavior change (8; 18%), and patient- or system-level metrics (8; 18%). The mean MERSQI score was 11.4 (SD 2.9).

Conclusions: Leadership training targeting HCA teams has become more prevalent. Determining best practices in leadership training is confounded by variability in leadership definitions, absence of supporting frameworks, and a paucity of robust assessments.


Academic Medicine | 2015

Disrupting diagnostic reasoning: do interruptions, instructions, and experience affect the diagnostic accuracy and response time of residents and emergency physicians?

Sandra Monteiro; Jonathan Sherbino; Jonathan S. Ilgen; Kelly L. Dore; Timothy J. Wood; Meredith Young; Glen Bandiera; Danielle Blouin; Wolfgang Gaissmaier; Geoffrey R. Norman; Elizabeth Howey

Purpose: Others have suggested that increased time pressure, sometimes caused by interruptions, may result in increased diagnostic errors. The authors previously found, however, that increased time pressure alone does not result in increased errors, but they did not test the effect of interruptions. It is unclear whether experience modulates the combined effects of time pressure and interruptions. This study investigated whether increased time pressure, interruptions, and experience level affect diagnostic accuracy and response time.

Method: In October 2012, 152 residents were recruited at five Medical Council of Canada Qualifying Examination Part II test sites. Forty-six emergency physicians were recruited from one Canadian and one U.S. academic health center. Participants diagnosed 20 written general medicine cases. They were randomly assigned to receive fast (time pressure) or slow condition instructions. Visual and auditory case interruptions were manipulated as a within-subject factor.

Results: Diagnostic accuracy was not affected by interruptions or time pressure but was related to experience level: Emergency physicians were more accurate (71%) than residents (43%) (F = 234.0, P < .0001) and responded more quickly (54 seconds) than residents (65 seconds) (F = 9.0, P < .005). Response time was shorter for participants in the fast condition (55 seconds) than in the slow condition (73 seconds) (F = 22.2, P < .0001). Interruptions added about 8 seconds to response time.

Conclusions: Experienced emergency physicians were both faster and more accurate than residents. Instructions to proceed quickly and interruptions had a small effect on response time but no effect on accuracy.


American Journal of Surgery | 2016

A comparison of Google Glass and traditional video vantage points for bedside procedural skill assessment.

Heather L. Evans; Dylan J. O'Shea; Amy E. Morris; Kari A. Keys; Andrew S. Wright; Douglas C. Schaad; Jonathan S. Ilgen

BACKGROUND: This pilot study assessed the feasibility of using first person (1P) video recording with Google Glass (GG) to assess procedural skills, as compared with traditional third person (3P) video. We hypothesized that raters reviewing 1P videos would visualize more procedural steps with greater inter-rater reliability than 3P rating vantages.

METHODS: Seven subjects performed simulated internal jugular catheter insertions. Procedures were recorded by both Google Glass and an observer's head-mounted camera. Videos were assessed by 3 expert raters using a task-specific checklist (CL) and both additive and summative global rating scales (GRS). Mean scores were compared by t-tests. Inter-rater reliabilities were calculated using intraclass correlation coefficients.

RESULTS: The 1P vantage was associated with a significantly higher mean CL score than the 3P vantage (7.9 vs 6.9, P = .02). Mean GRS scores were not significantly different. Mean inter-rater reliabilities for the CL, additive GRS, and summative GRS were similar between vantages.

CONCLUSIONS: 1P vantage recordings may improve visualization of tasks for behaviorally anchored instruments (e.g., CLs) while maintaining similar global ratings and inter-rater reliability when compared with conventional 3P vantage recordings.
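The inter-rater reliabilities in this study are intraclass correlation coefficients; the abstract does not say which ICC form was used, so the sketch below assumes a two-way random-effects, absolute-agreement, single-rater form, ICC(2,1), a common choice when the same raters score every video.

```latex
% ICC(2,1): two-way random-effects, absolute agreement, single rater
% (assumed form; not stated in the abstract).
% n subjects (videos), k raters; mean squares from a two-way ANOVA:
% MS_R = between-subjects, MS_C = between-raters, MS_E = residual error.
\[
  \mathrm{ICC}(2,1) =
  \frac{MS_R - MS_E}
       {MS_R + (k-1)\,MS_E + \dfrac{k}{n}\,(MS_C - MS_E)}
\]
% Values near 1 indicate raters give nearly identical scores for the same
% video; the study compared these coefficients between the 1P and 3P vantages.
```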


Journal of General Internal Medicine | 2016

What’s in a Label? Is Diagnosis the Start or the End of Clinical Reasoning?

Jonathan S. Ilgen; Kevin W. Eva; Glenn Regehr

Diagnostic reasoning has received substantial attention in the literature, yet what we mean by “diagnosis” may vary. Diagnosis can align with assignment of a “label,” where a constellation of signs, symptoms, and test results is unified into a solution at a single point in time. This “diagnostic labeling” conceptualization is embodied in our case-based learning curricula, published case reports, and research studies, all of which treat diagnostic accuracy as the primary outcome. However, this conceptualization may oversimplify the richly iterative and evolutionary nature of clinical reasoning in many settings. Diagnosis can also represent a process of guiding one’s thoughts by “making meaning” from data that are intrinsically dynamic, experienced idiosyncratically, negotiated among team members, and rich with opportunities for exploration. Thus, there are two complementary constructions of diagnosis: 1) the correct solution resulting from a diagnostic reasoning process, and 2) a dynamic aid to an ongoing clinical reasoning process. This article discusses the importance of recognizing these two conceptualizations of “diagnosis,” outlines the unintended consequences of emphasizing diagnostic labeling as the primary goal of clinical reasoning, and suggests how framing diagnosis as an ongoing process of meaning-making might change how we think about teaching and assessing clinical reasoning.


Academic Medicine | 2015

A Systematic Review of Tools Used to Assess Team Leadership in Health Care Action Teams.

Elizabeth D. Rosenman; Jonathan S. Ilgen; Jamie Shandro; Amy Harper; Rosemarie Fernandez

Purpose: To summarize the characteristics of tools used to assess leadership in health care action (HCA) teams. HCA teams are interdisciplinary teams performing complex, critical tasks under high-pressure conditions.

Method: The authors conducted a systematic review of the PubMed/MEDLINE, CINAHL, ERIC, EMBASE, PsycINFO, and Web of Science databases, key journals, and review articles published through March 2012 for English-language articles that applied leadership assessment tools to HCA teams in all specialties. Pairs of reviewers assessed identified articles for inclusion and exclusion criteria and abstracted data on study characteristics, tool characteristics, and validity evidence.

Results: Of the 9,913 abstracts screened, 83 studies were included. They described 61 team leadership assessment tools. Forty-nine tools (80%) provided behaviors, skills, or characteristics to define leadership. Forty-four tools (72%) assessed leadership as one component of a larger assessment, 13 tools (21%) identified leadership as the primary focus of the assessment, and 4 (7%) assessed leadership style. Fifty-three studies (64%) assessed leadership at the team level; 29 (35%) did so at the individual level. Assessments of simulated (n = 55) and live (n = 30) patient care events were performed. Validity evidence included content validity (n = 75), internal structure (n = 61), relationship to other variables (n = 44), and response process (n = 15).

Conclusions: Leadership assessment tools applied to HCA teams are heterogeneous in content and application. Comparisons between tools are limited by study variability. A systematic approach to team leadership tool development, evaluation, and implementation will strengthen understanding of this important competency.

Collaboration


Dive into Jonathan S. Ilgen's collaborations.

Top Co-Authors

Kevin W. Eva

University of British Columbia

Amy Harper

University of Washington
