Peter Yeates
University of Manchester
Publications
Featured research published by Peter Yeates.
Advances in Health Sciences Education | 2013
Peter Yeates; Paul O’Neill; Karen Mann; Kevin W. Eva
Assessors’ scores in performance assessments are known to be highly variable. Attempted improvements through training or rating format have achieved minimal gains. The mechanisms that contribute to variability in assessors’ scoring remain unclear. This study investigated these mechanisms. We used a qualitative approach to study assessors’ judgements whilst they observed common simulated videoed performances of junior doctors obtaining clinical histories. Assessors commented concurrently and retrospectively on performances, provided scores, and took part in follow-up interviews. Data were analysed using principles of grounded theory. We developed three themes that help to explain how variability arises: Differential Salience—assessors paid attention to (or valued) different aspects of the performances to different degrees; Criterion Uncertainty—assessors’ criteria were differently constructed, uncertain, and influenced by recent exemplars; Information Integration—assessors described the valence of their comments in their own unique narrative terms, usually forming global impressions. Our results (whilst not precluding the operation of established biases) describe mechanisms by which assessors’ judgements become meaningfully different or unique. Our results have theoretical relevance to understanding the formative educational messages that performance assessments provide. They give insight relevant to assessor training, assessors’ ability to be observationally “objective” and the educational value of narrative comments (in contrast to numerical ratings).
Medical Education | 2014
Andrea Gingerich; Jennifer R. Kogan; Peter Yeates; Marjan J. B. Govaerts; Eric S. Holmboe
Performance assessments, such as workplace‐based assessments (WBAs), represent a crucial component of assessment strategy in medical education. Persistent concerns about rater variability in performance assessments have resulted in a new field of study focusing on the cognitive processes used by raters, or more inclusively, by assessors.
Medical Education | 2013
Peter Yeates; Paul O'Neill; Karen Mann; Kevin W. Eva
A recent study has suggested that assessors judge performance comparatively rather than against fixed standards. Ratings assigned to borderline trainees were found to be biased by previously seen candidates’ performances. We extended that programme of investigation by examining these effects across a range of performance levels. Furthermore, we investigated whether confidence in the rating assigned predicts susceptibility to manipulation and whether prompting consideration of typical performance lessens the influence of recent experience.
Medical Education | 2008
Peter Yeates; Jane Stewart; J. Roger Barton
Context: Despite myriad advances in medical education, we have not yet established a universally accepted set of attributes we can reasonably expect from our teachers.
Medical Education | 2015
Peter Yeates; Jenna Cardell; Gerard J. Byrne; Kevin W. Eva
In prior research, the scores assessors assign can be biased away from the standard of preceding performances (i.e. ‘contrast effects’ occur).
Academic Medicine | 2015
Peter Yeates; Marc Moreau; Kevin W. Eva
Purpose: Laboratory studies have shown that performance assessment judgments can be biased by “contrast effects.” Assessors’ scores become more positive, for example, when the assessed performance is preceded by relatively weak candidates. The authors queried whether this effect occurs in real, high-stakes performance assessments despite increased formality and behavioral descriptors. Method: Data were obtained for the 2011 United Kingdom Foundation Programme clinical assessment and the 2008 University of Alberta Multiple Mini Interview. Candidate scores were compared with scores for immediately preceding candidates and progressively distant candidates. In addition, average scores for the preceding three candidates were calculated. Relationships between these variables were examined using linear regression. Results: Negative relationships were observed between index scores and both immediately preceding and recent scores for all exam formats. Relationships were greater between index scores and the average of the three preceding scores. These effects persisted even when examiners had judged several performances, explaining up to 11% of observed variance on some occasions. Conclusions: These findings suggest that contrast effects do influence examiner judgments in high-stakes performance-based assessments. Although the observed effect was smaller than observed in experimentally controlled laboratory studies, this is to be expected given that real-world data lessen the strength of the intervention by virtue of less distinct differences between candidates. Although it is possible that the format of circuital exams reduces examiners’ susceptibility to these influences, the finding of a persistent effect after examiners had judged several candidates suggests that the potential influence on candidate scores should not be ignored.
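To illustrate the kind of analysis described in this abstract, the sketch below regresses each candidate's score on the mean of the three immediately preceding candidates' scores; a negative slope is consistent with a contrast effect. This is a minimal illustration using simulated data, not the authors' code or dataset, and all variable names are hypothetical.

```python
# Illustrative sketch only: regress each candidate's score on the mean of the
# three preceding candidates' scores. A negative slope would be consistent
# with a contrast effect (higher preceding scores pulling the index score down).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Hypothetical scores for 200 candidates assessed in sequence (0-100 scale).
scores = rng.normal(loc=70, scale=8, size=200)

# Mean of the three immediately preceding scores for each index candidate.
prev_mean = np.array([scores[i - 3:i].mean() for i in range(3, len(scores))])
index_scores = scores[3:]

result = linregress(prev_mean, index_scores)
print(f"slope = {result.slope:.3f}, "
      f"R^2 = {result.rvalue ** 2:.3f}, p = {result.pvalue:.3f}")
```

With independently simulated scores such as these the slope will sit near zero; the negative relationships reported in the study emerged from real examination data, where successive candidates share examiners.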
Medical Teacher | 2017
Peter Yeates; Stefanie S. Sebok-Syer
Introduction: OSCEs are commonly conducted in multiple cycles (different circuits, times, and locations), yet the potential for students’ allocation to different OSCE cycles is rarely considered as a source of variance—perhaps in part because conventional psychometrics provide limited insight. Methods: We used Many Facet Rasch Modeling (MFRM) to estimate the influence of “examiner cohorts” (the combined influence of the examiners in the cycle to which each student was allocated) on students’ scores within a fully nested multi-cycle OSCE. Results: Observed average scores for examiner cycles varied by 8.6%, but model-adjusted estimates showed a smaller range of 4.4%. Most students’ scores were only slightly altered by the model; the greatest score increase was 5.3% and the greatest decrease was −3.6%, with 2 students passing who would have failed. Discussion: Despite using 16 examiners per cycle, examiner variability did not completely counter-balance, resulting in an influence of OSCE cycles on students’ scores. Assumptions were required for the MFRM analysis; innovative procedures to overcome these limitations and strengthen OSCEs are discussed. Conclusions: OSCE cycle allocation has the potential to exert a small but unfair influence on students’ OSCE scores; these little-considered influences should challenge our assumptions and design of OSCEs.
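For context, a standard rating-scale form of the many-facet Rasch model (given here as general background; the exact specification used in this study may differ) models the log-odds of candidate n receiving category k rather than k−1 from examiner j on station i as

\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k

where B_n is candidate ability, D_i is station difficulty, C_j is examiner severity, and F_k is the threshold for rating category k. Model-adjusted ("fair") scores remove the estimated severity of the particular examiners each student happened to meet, which is how the influence of cycle allocation can be separated from genuine differences in candidate ability.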
Medical Education | 2011
Peter Yeates; Paul O’Neill; Karen V. Mann
Theoretical concepts from organisational socialisation and workplace transitions may provide valuable insights for medical education research. Morrison, a researcher in organisational socialisation who studies how newcomers learn new behaviours and establish new relationships, identified the elements of ‘task mastery’ and ‘role clarification’, among others, as crucial to making a successful transition into a new setting. This resonates with the outcomes reported by Tallentire et al. ‘Task mastery’ refers to the process in which junior doctors engage as they try to resolve their difficulties in caring for acutely unwell patients; ‘role clarification’ relates to their assimilation of new roles and responsibilities as they take their place within the medical hierarchy.
Advances in Health Sciences Education | 2018
Andrea Gingerich; Edward Schokking; Peter Yeates
Recent literature places more emphasis on assessment comments rather than relying solely on scores. Both are variable, however, as both emanate from assessment judgements. One established source of variability is “contrast effects”: scores are shifted away from the depicted level of competence in a preceding encounter. The shift could arise from an effect on the range-frequency of assessors’ internal scales or on the salience of performance aspects within assessment judgements. As these suggest different potential interventions, we investigated assessors’ cognition by using the insight provided by “clusters of consensus” to determine whether contrast effects induced any change in the salience of performance aspects. A dataset from a previous experiment contained scores and comments for 3 encounters: 2 with significant contrast effects and 1 without. Clusters of consensus were identified using F-sort and latent partition analysis, both when contrast effects were significant and when they were not. The proportion of assessors making similar comments differed significantly only when contrast effects were significant, with assessors more frequently commenting on aspects that were dissimilar to the standard of competence demonstrated in the preceding performance. Rather than simply influencing the range-frequency of assessors’ scales, preceding performances may affect the salience of performance aspects through comparative distinctiveness: when juxtaposed with the context, some aspects are more distinct and selectively draw attention. Research is needed to determine whether changes in salience indicate biased or improved assessment information. The potential to augment existing benchmarking procedures in assessor training, by cueing assessors’ attention through observation of reference performances immediately prior to assessment, should be explored.
British Journal of Clinical Pharmacology | 2001
Peter Yeates; Simon H. L. Thomas
Collaboration
University Hospital of South Manchester NHS Foundation Trust