
Publication


Featured research published by Kelly L. Dore.


Medical Education | 2012

The minimal relationship between simulation fidelity and transfer of learning

Geoff Norman; Kelly L. Dore; Lawrence E. M. Grierson


Academic Medicine | 2014

The etiology of diagnostic errors: a controlled trial of system 1 versus system 2 reasoning.

Geoffrey R. Norman; Jonathan Sherbino; Kelly L. Dore; Timothy J. Wood; Meredith Young; Wolfgang Gaissmaier; Sharyn Kreuger; Sandra Monteiro

Purpose: Diagnostic errors are thought to arise from cognitive biases associated with System 1 reasoning, which is rapid and unconscious. The primary hypothesis of this study was that the instruction to be slow and thorough would have no advantage in diagnostic accuracy over the instruction to proceed rapidly.

Method: Participants were second-year residents who volunteered after they had taken the Medical Council of Canada (MCC) Qualifying Examination Part II. Participants were tested at three Canadian medical schools (McMaster, Ottawa, and McGill) in 2010 (n = 96) and 2011 (n = 108). The intervention consisted of 20 computer-based internal medicine cases, with instructions either (1) to be as quick as possible but not make mistakes (the Speed cohort, 2010) or (2) to be careful, thorough, and reflective (the Reflect cohort, 2011). The authors examined accuracy scores on the 20 cases, time taken to diagnose cases, and MCC examination performance.

Results: Overall accuracy was 44.5% in the Speed condition and 45.0% in the Reflect condition; the difference was not significant. The Speed cohort took an average of 69 seconds per case versus 89 seconds for the Reflect cohort (P < .001). In both cohorts, cases diagnosed incorrectly took an average of 17 seconds longer than cases diagnosed correctly. Diagnostic accuracy was moderately correlated with performance on both the written and problem-solving components of the MCC licensure examination and inversely correlated with time.

Conclusions: The study demonstrates that simply encouraging slowing down and increasing attention to analytical thinking is insufficient to increase diagnostic accuracy.


Academic Medicine | 2012

The relationship between response time and diagnostic accuracy

Jonathan Sherbino; Kelly L. Dore; Timothy J. Wood; Meredith Young; Wolfgang Gaissmaier; Sharyn Kreuger; Geoffrey R. Norman

Purpose: Psychologists theorize that cognitive reasoning involves two distinct processes: System 1, which is rapid, unconscious, and contextual, and System 2, which is slow, logical, and rational. According to the literature, diagnostic errors arise primarily from System 1 reasoning and are therefore associated with rapid diagnosis. This study tested whether accuracy is associated with shorter or longer times to diagnosis.

Method: Immediately after the 2010 administration of the Medical Council of Canada Qualifying Examination (MCCQE) Part II at three test centers, the authors recruited participants, who read and diagnosed a series of 25 written cases of varying difficulty. The authors computed accuracy and response time (RT) for each case.

Results: Seventy-five of 95 potential Canadian medical graduates participated. The overall correlation between RT and accuracy was −0.54; accuracy was therefore strongly associated with more rapid RT. This negative relationship held for 23 of the 25 cases individually, and overall when the authors controlled for participants' knowledge as judged by their MCCQE Part I and II scores. For 19 of 25 cases, accuracy on each case was positively related to experience with that specific diagnosis. A participant's performance on the test overall was significantly correlated with his or her performance on both the MCCQE Part I and II.

Conclusions: These results are inconsistent with clinical reasoning models that presume System 1 reasoning is necessarily more error prone than System 2. They suggest instead that rapid diagnosis tends to be accurate and relates to other measures of competence.
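
The reported RT-accuracy relationship is a Pearson correlation computed across cases and participants. A minimal sketch of that computation, using hypothetical response times and accuracy values as stand-ins (the study's raw data are not reproduced here):

```python
# Sketch: Pearson correlation between response time and diagnostic accuracy.
# The arrays below are hypothetical stand-ins, not the study's data.
import numpy as np

response_time = np.array([45, 52, 61, 70, 88, 95, 110, 130])   # seconds per case
accuracy = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])  # proportion correct

r = np.corrcoef(response_time, accuracy)[0, 1]
print(f"r = {r:.2f}")  # negative, consistent in sign with the paper's reported -0.54
```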


Teaching and Learning in Medicine | 2011

The Effectiveness of Cognitive Forcing Strategies to Decrease Diagnostic Error: An Exploratory Study

Jonathan Sherbino; Kelly L. Dore; Eric Siu; Geoffrey R. Norman

Background: Cognitive forcing strategies, a form of metacognition, have been advocated as a strategy to prevent diagnostic error, and curricula addressing this error are increasingly being implemented in medical training. Yet there is no experimental evidence that these curricula are effective.

Description: This was an exploratory, prospective study using consecutive enrollment of 56 senior medical students during their emergency medicine rotation. Students received interactive, standardized cognitive forcing strategy training.

Evaluation: Using a crossover design to assess transfer between cases similar to the instructional cases and novel diagnostic cases, students were evaluated on six test cases. Forty-seven students were tested immediately and 9 were tested 2 weeks later. Data were analyzed using descriptive statistics and a McNemar chi-square test.

Conclusions: This is the first study to explore the impact of cognitive forcing strategy training on diagnostic error. Our preliminary findings suggest that application and retention are poor. Larger studies are required to determine whether transfer across diagnostic formats occurs.
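
The McNemar chi-square test named in the evaluation compares paired binary outcomes (e.g., correct/incorrect on matched similar vs. novel cases) using only the discordant pairs. A minimal sketch with illustrative counts, not the study's data:

```python
# Sketch: McNemar's chi-square on paired correct/incorrect outcomes.
# b = pairs correct on similar cases only; c = pairs correct on novel cases only.
# These counts are made up for illustration.
b, c = 14, 6

chi2 = (abs(b - c) - 1) ** 2 / (b + c)  # variant with continuity correction
print(f"McNemar chi-square = {chi2:.2f}")  # compare to the chi-square(1 df) critical value, 3.84
```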


Academic Medicine | 2007

The Power of the Plural: Effect of Conceptual Analogies on Successful Transfer

Geoffrey R. Norman; Kelly L. Dore; Jennifer Krebs; Allan J. Neville

Background: Transfer, using a previously learned concept to solve a new, apparently different problem, is difficult: students who know a concept are typically able to access it to solve new problems only 10% to 30% of the time. One solution is to have students work through parallel, apparently different problems.

Method: Learning materials for three cardiology-related concepts (Laplace Law, Starling Law, and Right Heart Strain) were devised. One group read a physiological explanation; two other groups read a combination of physiological and mechanical explanations, either paired or separate. The sample was 35 students in an undergraduate health sciences program who did the study for course credit. Outcomes were measured by accuracy of explanation on a test of nine clinical cases, as rated by one clinician on a seven-point scale.

Results: The groups who read two explanations did significantly better on the test, with mean scores of 3.6/5 and 4.1/5 versus 1.8/5 for the single-explanation group. Effect sizes against the single-example group were 1.3 and 1.7, respectively.

Conclusions: Active learning with multiple examples can have large effects on a student's ability to apply concepts to solve new problems.
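
The effect sizes of 1.3 and 1.7 are standardized mean differences (Cohen's d): the gap between group means divided by a pooled standard deviation. A sketch using the reported means and an assumed pooled SD (the SDs are not given in this summary; ~1.38 is chosen because it reproduces both reported values):

```python
# Sketch: Cohen's d from group means and a pooled SD.
# Means come from the abstract; the pooled SD of ~1.38 is an assumption
# chosen so the arithmetic reproduces the reported effect sizes.
pooled_sd = 1.38
single_group_mean = 1.8  # single-explanation group

for two_explanation_mean in (3.6, 4.1):
    d = (two_explanation_mean - single_group_mean) / pooled_sd
    print(f"d = {d:.1f}")  # prints ~1.3 and ~1.7
```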


Academic Medicine | 2006

Medical school admissions: enhancing the reliability and validity of an autobiographical screening tool.

Kelly L. Dore; Mark D. Hanson; Harold I. Reiter; Melanie Blanchard; Karen Deeth; Kevin W. Eva

Background: Most medical school applicants are screened out before interview. Some cognitive scores available preinterview and some noncognitive scores available at interview demonstrate reasonable reliability and predictive validity. A reliable preinterview noncognitive measure would relax the dependence on screening based entirely on cognitive measures.

Method: In 2005, applicants interviewing at McMaster University's Michael G. DeGroote School of Medicine completed an off-site, noninvigilated Autobiographical Submission (ABS) before the interview and another on-site, invigilated ABS at the interview. Traditional and new ABS scoring methods were compared, with raters either evaluating all ABS questions for each candidate in turn (vertical scoring, the traditional method) or evaluating all candidates for each question in turn (horizontal scoring, the new method).

Results: The new scoring method yielded lower internal consistency and higher interrater reliability relative to the traditional method. More importantly, the new scoring method correlated better with the Multiple Mini-Interview (MMI) than the traditional method did.

Conclusions: The new ABS scoring method showed greater interrater reliability and predictive capacity, increasing its potential as a screen for noncognitive characteristics.


Academic Medicine | 2009

Extending the interview to all medical school candidates: Computer-Based Multiple Sample Evaluation of Noncognitive Skills (CMSENS).

Kelly L. Dore; Harold I. Reiter; Kevin W. Eva; Sharyn Krueger; Edward Scriven; Eric Siu; Shannon Hilsden; Jennifer Thomas; Geoffrey R. Norman

Background: Most medical school candidates are excluded without the benefit of a noncognitive skills assessment. Is it possible to develop a noncognitive preinterview screening test that correlates with the well-validated Multiple Mini-Interview (MMI)?

Method: In Study 1, 110 medical school candidates completed the MMI and the Computer-based Multiple Sample Evaluation of Noncognitive Skills (CMSENS): eight 1-minute video-based scenarios and four self-descriptive questions, with a short-answer response format. Seventy-eight responses were audiotaped and 32 were typewritten; all were scored by two independent raters. In Study 2, 167 candidates completed the CMSENS (eight videos, six self-descriptive questions, typewritten responses only, scored by two raters); 88 of the 167 also underwent the MMI.

Results: Overall test generalizability, interrater reliability, and correlation with the MMI were, respectively: 0.86, 0.82, and 0.15 for Study 1 audio responders; 0.72, 0.81, and 0.51 for Study 1 typewritten responders; and 0.83, 0.95, and 0.46 for Study 2 (0.60 after correction for attenuation).

Conclusions: The strong psychometric properties of the CMSENS, including its MMI correlation, warrant investigation into future widespread implementation as a preinterview noncognitive screening test.
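
The parenthetical 0.60 is the observed correlation corrected for attenuation: the observed value divided by the square root of the product of the two measures' reliabilities. A sketch using the reported Study 2 generalizability for the CMSENS (0.83) and an assumed MMI reliability of about 0.71, chosen only because it reproduces the reported disattenuated value:

```python
# Sketch: Spearman's correction for attenuation.
# r_obs and the CMSENS reliability (0.83) come from the abstract;
# the MMI reliability (~0.71) is an assumption for illustration.
import math

r_obs = 0.46
rel_cmsens = 0.83
rel_mmi = 0.71  # assumed

r_true = r_obs / math.sqrt(rel_cmsens * rel_mmi)
print(f"disattenuated r = {r_true:.2f}")  # ~0.60, matching the abstract
```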


Academic Medicine | 2015

Disrupting diagnostic reasoning: do interruptions, instructions, and experience affect the diagnostic accuracy and response time of residents and emergency physicians?

Sandra Monteiro; Jonathan Sherbino; Jonathan S. Ilgen; Kelly L. Dore; Timothy J. Wood; Meredith Young; Glen Bandiera; Danielle Blouin; Wolfgang Gaissmaier; Geoffrey R. Norman; Elizabeth Howey

Purpose: Others have suggested that increased time pressure, sometimes caused by interruptions, may result in increased diagnostic errors. The authors previously found, however, that increased time pressure alone does not result in increased errors, but they did not test the effect of interruptions. It is unclear whether experience modulates the combined effects of time pressure and interruptions. This study investigated whether increased time pressure, interruptions, and experience level affect diagnostic accuracy and response time.

Method: In October 2012, 152 residents were recruited at five Medical Council of Canada Qualifying Examination Part II test sites. Forty-six emergency physicians were recruited from one Canadian and one U.S. academic health center. Participants diagnosed 20 written general medicine cases. They were randomly assigned to receive fast (time pressure) or slow condition instructions. Visual and auditory case interruptions were manipulated as a within-subject factor.

Results: Diagnostic accuracy was not affected by interruptions or time pressure but was related to experience level: Emergency physicians were more accurate (71%) than residents (43%) (F = 234.0, P < .0001) and responded more quickly (54 seconds) than residents (65 seconds) (F = 9.0, P < .005). Response time was shorter for participants in the fast condition (55 seconds) than in the slow condition (73 seconds) (F = 22.2, P < .0001). Interruptions added about 8 seconds to response time.

Conclusions: Experienced emergency physicians were both faster and more accurate than residents. Instructions to proceed quickly and interruptions had a small effect on response time but no effect on accuracy.


Academic Medicine | 2007

Medical school admissions: revisiting the veracity and independence of completion of an autobiographical screening tool.

Mark D. Hanson; Kelly L. Dore; Harold I. Reiter; Kevin W. Eva

Background: Some form of candidate-written autobiographical submission (ABS) is commonly used before interviews to screen medical school candidates on the basis of their noncognitive characteristics. However, confidence in the validity of these measures has been questioned.

Method: In 2005, applicants to McMaster University completed an off-site ABS before being interviewed and an on-site ABS at interview. Five off-site ABS questions were completed, plus eight on-site questions; the on-site ABS questions were answered under variable timing conditions. ABS ratings were compared across sites and across the time allowed for completion.

Results: Off-site ABS ratings were higher than on-site ratings, and the two sets of ratings were uncorrelated with one another. On-site ABS ratings increased with the time allowed for completion, but the reliability of the measure was unaffected by this variable.

Conclusions: Confidence that candidates independently answer preinterview ABS questions is weak. To improve ABS validity, modification of the current Web-based submission format warrants consideration.


Medical Education | 2017

Contexts, concepts and cognition: principles for the transfer of basic science knowledge

Kulamakan Kulasegaram; Zarah Chaudhary; Nicole N. Woods; Kelly L. Dore; Alan J. Neville; Geoffrey Norman

Transfer of basic science knowledge aids novices in the development of clinical reasoning. The literature suggests that although transfer is often difficult for novices, it can be optimised by two complementary strategies: (i) focusing learners on conceptual knowledge of basic science or (ii) exposing learners to multiple contexts in which the basic science concepts may apply. The relative efficacy of each strategy, as well as the mechanisms that facilitate transfer, are unknown. In two sequential experiments, we compared both strategies and explored mechanistic changes in how learners address new transfer problems.

Collaboration


Dive into Kelly L. Dore's collaborations.

Top Co-Authors

Kevin W. Eva

University of British Columbia
