
Publication


Featured research published by J. P. W. Cunnington.


Journal of Continuing Education in The Health Professions | 2006

The Difficulty with Experience: Does Practice Increase Susceptibility to Premature Closure?

Kevin W. Eva; J. P. W. Cunnington

Introduction: A recent review of the physician performance literature concluded that the risk of prematurely closing one's diagnostic search increases with years of experience. To minimize confounding variables and gain insight into cognitive issues relevant to continuing education, the current study was performed to test this conclusion. Methods: Physician participants were shown a series of case histories and asked to judge the probability of a pair of diagnoses. The order in which features were presented was manipulated across participants and the probabilities compared to determine the impact of information order. Two groups of participants were recruited, 1 older than and 1 younger than 60 years. Results: The probability assigned to a diagnosis tended to be greater when features consistent with that diagnosis preceded those consistent with an alternative than when the same features followed those consistent with the alternative. Older participants revealed a greater primacy effect than less experienced participants across 4 experimental conditions. Discussion: Physicians with greater experience appear to weigh their first impressions more heavily than those with less experience. Educators should design instructional activities that account for experience-specific cognitive tendencies.


Advances in Health Sciences Education | 1997

The Risks of Thoroughness: Reliability and Validity of Global Ratings and Checklists in an OSCE

J. P. W. Cunnington; Alan J. Neville; Geoff Norman

Objective: To compare checklists against global ratings for student performance on each station in an OSCE without the confounder of the global rating scorer having first filled in the checklist. Method: Subjects were 96 medical students completing their pre-clinical studies, who took an 8 station clinical OSCE. 39 students were assessed with detailed performance checklists; 57 students went through the same stations but were assessed using only a single global rating per station. A subset of 39 students were assessed by two independent raters. Results: Inter-rater and inter-station reliability of the global rating was the same as for the checklist. Correlation with a concurrent multiple choice test was similar for both formats. Conclusion: The global rating was found to be as reliable as more traditional checklist scoring. A discussion of the validity of checklist and global scores suggests that global ratings may be superior.


Academic Medicine | 2000

Cognitive Difficulty in Physicians

John Turnbull; Ramona Carbotte; Eileen Hanna; Geoffrey R. Norman; J. P. W. Cunnington; Blair Ferguson; Tiina Kaigas

Purpose Remediation of some incompetent physicians has proven difficult or impossible. The authors sought to determine whether physicians with impaired competency had neuropsychological impairment sufficient to explain their incompetence and their failure to improve with remedial continuing medical education (CME). Method During a one-year period, 1996–97, all 27 participants in the Physician Review Program (PREP) conducted at McMaster University, a physician competency assessment program, undertook a detailed neuropsychological screening battery. Results Nearly all physicians assessed as competent also performed well on the neuropsychological testing. However, a significant number (about one third) of the physicians who performed poorly on the competency assessment had neuropsychological impairments sufficient to explain their poor performances. The difficulties were more marked in elderly physicians. Conclusion A significant minority of incompetent physicians have cognitive impairments sufficient to explain both their incompetence and, probably, their failure to improve with remedial CME. Testing physicians for these impairments is important: to detect and treat reversible conditions, to manage irreversible conditions that preclude successful educational intervention, and to facilitate compensation in this instance. Serious consideration should be given to the incorporation of neuropsychological screening in all intensive physician review programs.


Academic Medicine | 2006

Competence and Cognitive Difficulty in Physicians: A Follow-up Study

John Turnbull; J. P. W. Cunnington; Ayse Unsal; Geoff Norman; Blair Ferguson

Purpose Remediation of incompetent physicians has proven difficult and sometimes impossible. The authors wished to determine whether such physicians had neuropsychological impairment sufficient to explain their incompetence and their failure to improve after remedial continuing medical education (CME). Method Between 1997 and 2001, the authors undertook neuropsychological screening of 45 participants of a physician competency assessment program. For those physicians reassessed after a period of remediation, the authors relate the findings of the physicians’ competence reassessments to their neuropsychological scores. Results Nearly all physicians performing well on competency assessment had no or mild cognitive impairment. Conversely, a significant number of physicians performing poorly on competency assessment had sufficient neuropsychological difficulty to explain their poor performance. The cognitive impairment was more marked in elderly physicians, and referencing the neuropsychological scores to an age-matched normative population underestimates the impairment. No physician with moderate or severe neuropsychological dysfunction had successful competency reassessment. Increasing age was associated with poor performance on competency testing, but was less strongly associated with unsuccessful reassessment. Conclusion A large minority of the physicians who fell significantly below desired levels of competence had cognitive impairment sufficient to explain their lack of competence and their failure to improve with remedial CME.


Academic Medicine | 1996

Expert-Novice Differences in the Use of History and Visual Information from Patients

Geoff Norman; Lee R. Brooks; J. P. W. Cunnington; V. Shali; M. Marriott; Glenn Regehr

No abstract available.


Medical Teacher | 2002

Evolution of student assessment in McMaster University's MD Programme

J. P. W. Cunnington

In response to the competitive examination culture that pervaded medical education in the 1940s and 1950s, the founders of McMaster University's new MD Programme created an assessment system based on group functioning within the tutorial. While the tutorial has served the educational process well, 30 years of experience have highlighted its deficiencies as an assessment tool. This paper describes the accumulation of evidence that led to the awareness of the weakness of tutorial assessment, and the attempts to provide reliable assessment by reintroducing examinations in novel formats that would not alter the goals of the curriculum.


Academic Medicine | 1997

The effect of presentation order in clinical decision making

J. P. W. Cunnington; J. M. Turnbull; Glenn Regehr; M. Marriott; Geoff Norman

No abstract available.


Teaching and Learning in Medicine | 2000

Assessing the Measurement Properties of a Clinical Reasoning Exercise

Timothy J. Wood; J. P. W. Cunnington; Geoffrey R. Norman

Background: A challenge for Problem-Based Learning (PBL) schools is to introduce reliable, valid, and cost-effective testing methods into the curriculum in such a way as to maximize the potential benefits of PBL while avoiding problems associated with assessment techniques like multiple-choice question (MCQ) tests. Purpose: We document the continued development of an exam that was designed to satisfy the demands of both PBL and the scientific principles of measurement. Methods: A total of 102 medical students wrote a clinical reasoning exercise (CRE) as a requirement for two consecutive units of instruction. Each CRE consisted of a series of 18 short clinical problems designed to assess a student's knowledge of the mechanisms of diseases that were covered in three subunits located within each unit. Responses were scored by a student's tutor and a 2nd crossover tutor. Results: Generalizability coefficients for raters, subunits, and individual problems were low, but the reliability of the overall test scores and the reliability of the scores across 2 units of instruction were high. Subsequent analyses found that the crossover tutors' ratings were lower than the ratings provided by a student's own tutor, and that the CRE correlated with the biology component of a progress test. Conclusion: The magnitude of the generalizability coefficients demonstrates that the CRE is capable of detecting differences in reasoning across knowledge domains and is therefore a useful evaluation tool.


Academic Medicine | 1996

Development of Clinical Reasoning Exercises in a Problem-Based Curriculum

Alan J. Neville; J. P. W. Cunnington; Geoff Norman

No abstract available.


Academic Medicine | 1996

Applying learning taxonomies to test items: is a fact an artifact?

J. P. W. Cunnington; Geoff Norman; J. M. Blake; W. D. Dauphinee; David Blackmore

No abstract available.

Collaboration


Top co-authors of J. P. W. Cunnington:

Glenn Regehr, University of British Columbia

Kevin W. Eva, University of British Columbia

David Blackmore, Medical Council of Canada