Medical Education | 2021

Standard setting using programmatic thinking for small cohort performance‐based exams


Abstract


Setting standards for determining competent performance on examinations is well-established in medical education. Establishing a conceptual boundary that differentiates competent from non-competent test-takers typically involves either expert judgement based on test item characteristics (eg Angoff), or an empirical process based on examinee performance (eg borderline regression).1 Standard-setting procedures are a mandatory component of assessment processes in Australian specialist medical training programmes. However, small specialist medical Colleges cannot implement existing standard-setting methods for performance-based examinations. Angoff-type methods are often not suited for performance examinations, and empirical methods are unsuitable for small cohorts. A reform of assessment processes undertaken by the Royal Australian College of Dental Surgeons (RACDS) Oral and Maxillofacial Surgery (OMS) final examinations required implementation of appropriate standard-setting processes; however, the chosen method had to work with small candidate numbers and small numbers of examiners to support the process.

Volume 55
DOI 10.1111/medu.14516
Language English
Journal Medical Education
