Alison Sturrock
University College London
Publications
Featured research published by Alison Sturrock.
Medical Education | 2014
Kazuya Iwata; Daniel S Furmedge; Alison Sturrock; Deborah Gill
Peer‐assisted learning (PAL) is recognised as an effective learning tool and its benefits are well documented in a range of educational settings. Learners find it enjoyable and their performances in assessments are comparable with those of students taught by faculty tutors. In addition, PAL tutors themselves report the development of improved clinical skills and confidence through tutoring. However, whether tutoring leads to actual improvement in performance has not been fully investigated.
The Clinical Teacher | 2013
Alexander Nesbitt; Freya Baird; Benjamin Canning; Ann Griffin; Alison Sturrock
Workplace‐based assessment (WPBA) is key to medical education, providing a framework through which the trainee can be assessed and receive feedback in the clinical setting.
The Clinical Teacher | 2014
Alexander Nesbitt; Andrew Pitcher; Lyndon James; Alison Sturrock; Ann Griffin
Medical students value constructive feedback, as it helps them to improve their performance. Supervised learning events (SLEs) were developed as performance assessments and to create opportunities for students to receive feedback. Although many would argue the strengths of SLEs, there is a lack of literature assessing the quality of written feedback for medical students.
Medical Teacher | 2010
David Sales; Alison Sturrock; Katharine Boursicot; Jane Dacre
Background: The UK General Medical Council (GMC), in its regulatory capacity, conducts formal tests of competence (TOCs) on doctors whose performance is of concern. TOCs are individually tailored to each doctor's specialty and grade. Aims: To describe the development and implementation of an electronic blueprinting system that supports the delivery of TOCs. Method: A case study describing the evolution of the GMC electronic blueprint, including the derivation of its content and its functionality. Results: A question bank has been created with all items classified according to the competencies defined by Good Medical Practice. This database aids test assembly and ensures that each assessment maps across the breadth of the blueprint. Conclusions: The blueprint described was easy to construct and is easy to use. It reflects the knowledge, skills and behaviours (learning outcomes) to be assessed. It guides the commissioning of test material and enables systematic and faithful sampling of common and important problems. The principles described have potential for wider application to blueprinting in undergraduate or clinical training programmes. Such a blueprint can provide the essential link between a curriculum and its assessment system and ensure that assessment content is stable over time.
BMJ Open | 2014
Leila Mehdizadeh; Alison Sturrock; Gil Myers; Yasmin Khatib; Jane Dacre
Objective: To investigate how accurately doctors estimated their performance on the General Medical Council's Tests of Competence pilot examinations. Design: A cross-sectional survey design using a questionnaire method. Setting: University College London Medical School. Participants: 524 medical doctors working in a range of clinical specialties, between foundation year two and consultant level. Main outcome measures: Estimated and actual total scores on a knowledge test and an Objective Structured Clinical Examination (OSCE). Results: The pattern of results for OSCE performance differed from that for knowledge test performance. The majority of doctors significantly underestimated their OSCE performance. In contrast, estimated knowledge test performance differed between high and low performers: those who did particularly well significantly underestimated their knowledge test performance (t(196) = −7.70, p < 0.01) and those who did less well significantly overestimated theirs (t(172) = 6.09, p < 0.01). There were also significant differences between estimated and/or actual performance by gender, ethnicity and region of primary medical qualification. Conclusions: Doctors were more accurate in predicting their knowledge test performance than their OSCE performance. The association between estimated and actual knowledge test performance supports the established differences between high and low performers described in the behavioural sciences literature. This was not the case for the OSCE. The implications of the results for the revalidation process are discussed.
Academic Medicine | 2009
Jane Dacre; Henry W. W. Potts; David Sales; Hilary Spencer; Alison Sturrock
The practice of clinical medicine is becoming increasingly specialized, and this change has increased the challenge of developing fair, valid, and reliable tests of knowledge, particularly for single candidates or small groups of candidates. The problem is particularly relevant to the UK General Medical Council's Fitness to Practise procedures, which investigate individual doctors. In such cases, there is a need for an alternative to the conventional approach to reliability estimation that will still allow the delivery of reproducible and standardized tests. This report describes the three-year process (starting in 2005) of developing a knowledge test that can be tailored for individual doctors practicing in narrowly specialized fields or at various stages in their training. The process of test development for this study consisted of five stages: item writing, to create individual questions; blueprinting, to establish the content and context that each item might test; standard setting, to calculate for each question a theoretical probability that a doctor of just-adequate capability would answer the question correctly; reference data collection, to determine for each item the distribution of scores to be expected from a large population of doctors in good standing; and test assembly, to select sets of questions that together formed complete and balanced tests. Tailored testing is a valid, feasible, and reproducible method of assessing the knowledge of one doctor or small groups of doctors who are practicing in narrow or subspecialty areas.
Postgraduate Medical Journal | 2007
Judith Cave; Deirdre Wallace; Glenda Baillie; Michael Klingenberg; Catherine Phillips; Harriet Oliver; Katherine Rowles; Lisa Dunkley; Alison Sturrock; Jane Dacre
Background: Newly qualified doctors should be competent in advanced life support (ALS) and critical care. The Resuscitation Council has published a course about ALS for undergraduate medical students (the intermediate life support (ILS) course). However, there is no undergraduate-level course on assessing and treating critically ill patients, despite the fact that postgraduate courses on this topic are extremely popular. We have developed a new course called Direct Response Workshop for House Officer Preparation (DR WHO), which teaches both ALS and critical care at an undergraduate level. Methods: We taught the Resuscitation Council ILS course to our 2003–4 cohort of final year medical students (n = 350), and the new course (DR WHO) to our 2004–5 cohort (n = 338). Students filled in feedback forms immediately after the courses, and a subset repeated the feedback forms 4 months after they had started work as house officers. Course evaluation: Student and house officer feedback was positive. The DR WHO cohort was more confident in caring for critically ill patients (18/26 (69%) were confident after ILS, and 40/45 (89%) were confident after DR WHO (χ2 = 4.3; df = 1; p = 0.06)). Both cohorts were competent in ALS, each with a mean score of 18.6/20 in a finals level practical examination on this topic. Conclusions: The DR WHO course is popular with the students and practical to run. The course needs to be re-evaluated to determine the long-term competency of graduates.
BMC Medical Education | 2015
Leila Mehdizadeh; Alison Sturrock; Jane Dacre
Background: The General Medical Council's Fitness to Practise investigations may involve a test of competence for doctors with performance concerns. Concern has been raised about the suitability of the test format for doctors who qualified before the introduction of Single Best Answer and Objective Structured Clinical Examination assessments, both of which form the test of competence. This study explored whether the examination formats used in the tests of competence are fair to long-standing doctors who have undergone fitness to practise investigation. Methods: A retrospective cohort design was used to determine an association between year of primary medical qualification and doctors' test of competence performance. Performance of 95 general practitioners under investigation was compared with a group of 376 volunteer doctors. We analysed performance on the knowledge test, the OSCE overall, and three individual OSCE stations using Spearman's correlation and regression models. Results: Doctors under investigation performed worse on all test outcomes than the comparison group. Qualification year correlated positively with performance on all outcomes except physical examination (e.g. knowledge test r = 0.48, p < 0.001 and OSCE r = 0.37, p < 0.001). Qualification year was associated with test performance in doctors under investigation even when controlling for sex, ethnicity and qualification region. Regression analyses showed that qualification year was associated with the knowledge test, OSCE and communication skills performance of doctors under investigation when other variables were controlled for. Among volunteer doctors this was not the case; their performance was more strongly related to where they qualified and their ethnic background. Furthermore, volunteer doctors who qualified before the introduction of Single Best Answer and OSCE assessments still outperformed their peers under investigation. Conclusions: Earlier graduates under fitness to practise investigation performed less well on the test of competence than their more recently qualified peers under investigation. The performance of the comparator group remained consistent irrespective of year qualified. Our results suggest that the test format does not disadvantage early qualified doctors. We discuss the findings in relation to the GMC's fitness to practise procedures and suggest alternative explanations for the poorer performance of long-standing doctors under investigation.
BMC Medical Education | 2017
Leila Mehdizadeh; Henry W. W. Potts; Alison Sturrock; Jane Dacre
Background: The demographics of doctors working in the UK are changing. The United Kingdom (UK) has voted to leave the European Union (EU) and there is heightened political discourse around the world about the impact of migration on healthcare services. Previous work suggests that foreign trained doctors perform worse than UK graduates in postgraduate medical examinations. We analysed the prevalence by country of primary medical qualification of doctors who were required to take an assessment by the General Medical Council (GMC) because of performance concerns. Methods: This was a retrospective cohort analysis of data routinely collected by the GMC. We compared doctors who had a GMC performance assessment between 1996 and 2013 with the medical register in the same period. The outcome measures were numbers experiencing performance assessments by country or region of medical qualification. Results: The rate of performance assessment varied significantly by place of medical qualification and by year; χ2(17) = 188, p < 0.0001, pseudo-R2 = 15%. Doctors who trained outside of the UK, including those trained in the European Economic Area (EEA), were more likely to have a performance assessment than UK trained doctors, with the exception of South African trained doctors. Conclusions: The rate of performance assessment varies significantly by place of medical qualification. This is the first study to explore the risk of performance assessment by individual places of medical qualification. While concern has largely focused on the competence of non-EEA, International Medical Graduates, we discuss implications for how to ensure European trained doctors are fit to practise before their medical licence in the UK is granted. Further research is needed to investigate whether these country effects hold true when controlling for factors like doctors' sex, age, length of time working in the UK, and English language skills. This will allow evidence-based decisions to be made around the regulatory environment the UK should adopt once it leaves the EU. Patients should be reassured that the vast majority of all doctors working in the UK are competent.
Postgraduate Medical Journal | 2014
Leila Mehdizadeh; Alison Sturrock; Gil Myers; Yasmin Khatib; Jane Dacre
Background: Doctors who are investigated by the General Medical Council for performance concerns may be required to take a Test of Competence (ToC). The tests are piloted on volunteer doctors before they are used in Fitness to Practise (FtP) investigations. Objectives: To find out who volunteers to take a pilot ToC and why. Methods: This was a retrospective cohort study. Between February 2011 and October 2012 we asked doctors who volunteered for a test to complete a questionnaire about their reasons for volunteering and their recruitment. We analysed the data using descriptive statistics and Pearson's χ2 test. Results: 301 doctors completed the questionnaire. Doctors who took a ToC voluntarily were mostly women, of white ethnicity, of junior grades, working in general practice, and holders of a Primary Medical Qualification (PMQ) from the UK. This was a different population from doctors under investigation and from all registered doctors in the UK. Most volunteers heard about the General Medical Council's pilot events through an email from a colleague and used the experience to gain exam practice for forthcoming postgraduate exams. Conclusions: The reference groups of volunteers are not representative of doctors under FtP investigation. Our findings will be used to inform future recruitment strategies, with the aim of encouraging better matching of the groups who voluntarily pilot a ToC with those under FtP investigation.