Maggie Bartlett
Keele University
Publication
Featured research published by Maggie Bartlett.
The Clinical Teacher | 2013
Maggie Bartlett; Robert K McKinley
Background: Keele Medical School’s new curriculum includes a 5‐week course to extend medical students’ consultation skills beyond those historically required for competent inductive diagnosis.
Education for primary care | 2012
Robert K McKinley; Maggie Bartlett; Peter Coventry; Sh Gibson; Richard Hays; Rg Jones
The leadership theme has clearly inspired you, and we hear about four innovative programmes. First is an overview of the work of the Royal College of General Practitioners (RCGP) in this area. As one would expect, the RCGP is itself a leader and has developed a strategy for the development of leadership skills. Next we hear about an innovative academic unit: Kay Mohanna and colleagues describe the work of the Keele Clinical Leadership Academy and its remarkable work in all areas of leadership development. Finally we hear from two deaneries. Tim Swanwick and colleagues from the London Deanery describe a variety of commissioning projects that are open to some GP trainees; some of these are also open to healthcare workers from other professions. We end with an account from Elizabeth Alden about the educational scholars in the Severn Deanery. The evaluation of this scheme is described in the paper by Paul Main and colleagues on pages 50–6.
Education for primary care | 2018
Maggie Bartlett; Eliot Rees; Robert K McKinley
Abstract Background: Keele Medical School has a small accommodation hub for students placed within ten associated general practices in a predominantly rural area of England. Groups of up to eleven final year students spend fifteen weeks learning generic and transferable clinical skills in these practices. Aim: To explore students’ evolving perceptions of their experiences throughout their placements. Method: All ten students placed at the hub between August and December 2013 were invited to participate in focus groups in weeks zero, seven, and fifteen. Analysis was qualitative and thematic. Results: Ten, five and eight students chose to participate in successive focus groups. Five themes were identified from the data: acceptance, learning opportunities, relationships, development, and injustice, with a subtheme of isolation. Conclusion: The placements had a powerful impact on students’ learning and development. Their perceptions changed from seeing themselves as ‘knowledge leeches’ to legitimate contributors to health care over the course of fifteen weeks. They did not recognise that managing perceived adversity led to personal development. This illustrates the need both to identify perceived adversity and to explicitly signpost and scaffold life learning. The students described experiences which challenged them intellectually and offered them opportunities to recognise the breadth and complexity of general practice.
Medical Education | 2016
Maggie Bartlett; Flora Bartlett
We conducted an ethnography of the faculty biscuit tin as we were interested in the lived experience of the biscuits contained within it. We used a constructivist epistemology, a social constructionist interpretive framework and a phenomenological methodology that included analysis from the perspectives of deixis and cosmology. The biscuits perceived that they were important to a selecting force and that the characteristics of one particular group had a specific value to the selector. Some enduring benefits may derive from the selection of this group, although its attractions were less immediately obvious than those of others. What is immediately attractive may not be the most fit for purpose; lessons for the selection of medical students may arise from this exploration of the selection experiences of biscuits in a faculty biscuit tin.
Education for primary care | 2016
Maggie Bartlett; Jessica Potts; Bob McKinley
Abstract Keele medical students spend 113 days in general practices over our five-year programme. We collect practice data thought to indicate good quality teaching. We explored the relationships between these data and two outcomes for students: Objective Structured Clinical Examination (OSCE) scores and feedback regarding the placements. Though both are surrogate markers of good teaching, they are widely used. We collated practice and outcome data for one academic year. Two separate statistical analyses were carried out: (1) to determine how much of the variation seen in the OSCE scores was due to the effect of the practice and how much to the individual student; (2) to identify practice characteristics with a relationship to student feedback scores. (1) OSCE performance: 268 students in 90 practices: six quality indicators independently influenced the OSCE score, though without linear relationships and not to statistical significance. (2) Student satisfaction: 144 students in 69 practices: student feedback scores were not influenced by practice characteristics. The relationships between the quality indicators we collect for practices and outcomes for students are not clear. It may be that neither the quality indicators nor the outcome measures are reliable enough to inform decisions about practices’ suitability for teaching.
The Clinical Teacher | 2018
Maggie Bartlett; Ruth Kinston; Robert K McKinley
Untimed simulated primary care consultations focusing on safe and effective clinical outcomes were first introduced into undergraduate medical education in Otago, New Zealand, in 2004. We extended this concept and included a secondary care version for final‐year students. We offer students opportunities to manage entire consultations, which include making and implementing clinical decisions with simulated patients (SPs). Formative feedback is given by SPs on the achievement of pre‐determined outcomes and by faculty members on clinical decision making, medical record keeping and case presentation.
Medical Education | 2018
Maggie Bartlett
Education for primary care | 2018
Robert K McKinley; Maggie Bartlett; Sh Gibson; A. Panesar; Matthew Webb
Abstract We describe and evaluate an innovative immersive 15 week final year assistantship in general practice. Evaluation data were taken from five years of routinely collected School data and available national comparative data. The assistantship aims to enable students to consolidate knowledge and hone their skills through central participation in the care of large numbers of patients with acute and long term conditions. We estimate that most students consulted with over 450 patients during the assistantship. Students report that they became useful to their practice teams, had multiple episodes of feedback on their performance which they found useful and, in the school exit survey, reported that they were highly prepared for practice. 9.4 per cent of students reported that the assistantship was ‘too long’, and some, especially those who completed the assistantship in the second semester, felt they were out of hospital for too long before F1. Some described a learning ‘plateau’ after the 10th week, which was addressed by modifications to the assistantship. Nevertheless, in national surveys, our graduates’ self-reported preparedness for practice is high, a perception shared by their F1 supervisors. General practice can make a valuable contribution to the education of senior medical students and contribute to their preparedness for practice.
International Journal of Medical Education | 2017
Janet Lefroy; Nicola Roberts; Adrian Molyneux; Maggie Bartlett; Robert K McKinley
Objectives To determine whether an app-based software system to support production and storage of assessment feedback summaries makes workplace-based assessment easier for clinical tutors and enhances the educational impact on medical students. Methods We monitored our workplace assessor app’s usage by Year 3 to 5 medical students in 2014-15 and conducted focus groups with Year 4 medical students and interviews with clinical tutors who had used the apps. Analysis was by constant comparison using a framework based on elements of van der Vleuten’s utility index. Results The app may enhance the content of feedback for students. Using a screen may be distracting if the app is used during feedback discussions. Educational impact was reduced by students’ perceptions that an easy-to-produce feedback summary is less valuable than one requiring more tutor time and effort. Tutors’ typing, dictation skills and their familiarity with mobile devices varied. This influenced their willingness to use the assessment and feedback mobile app rather than the equivalent web app. Electronic feedback summaries had more real and perceived uses than anticipated both for tutors and students including perceptions that they were for the school rather than the student. Conclusions Electronic workplace-based assessment systems can be acceptable to tutors and can make giving detailed written feedback more practical but can interrupt the social interaction required for the feedback conversation. Tutor training and flexible systems will be required to minimise unwanted consequences. The educational impact on both tutors and students of providing pre-formulated advice within the app is worth further study.
Education for primary care | 2017
Maggie Bartlett; Jim Crossley; Robert K McKinley
Abstract Background: Educational feedback is amongst the most powerful of all learning interventions. Research questions: (1) Can we measure the quality of written educational feedback with acceptable metrics? (2) Based on such a measure, does a quality improvement (QI) intervention improve the quality of feedback? Study design: We developed a QI instrument to measure the quality of written feedback and applied it to written feedback provided to medical students following workplace assessments. We evaluated the measurement characteristics of the QI score using generalisability theory. In an uncontrolled intervention, QI profiles were fed back to GP tutors and pre- and post-intervention scores were compared. Study results: A single assessor scoring 6 feedback summaries can discriminate between practices with a reliability of 0.82. The quality of feedback rose for two years after the introduction of the QI instrument and stabilised in the third year. The estimated annual cost to provide this feedback is £12 per practice. Interpretation and recommendations: It is relatively straightforward and inexpensive to measure the quality of written feedback with good reliability. The QI process appears to improve the quality of written feedback. We recommend routine use of a QI process to improve the quality of educational feedback.