Publication


Featured research published by Julian Archer.


Medical Education | 2010

State of the science in health professional education: effective feedback

Julian Archer

Background: Effective feedback may be defined as feedback in which information about previous performance is used to promote positive and desirable development. This can be challenging as educators must acknowledge the psychosocial needs of the recipient while ensuring that feedback is both honest and accurate. Current feedback models remain reductionist in their approach. They are embedded in the hierarchical, diagnostic endeavours of the health professions. Even when it acknowledges the importance of two‐way interactions, feedback often remains an educator‐driven, one‐way process.


BMJ | 2010

Impact of workplace based assessment on doctors’ education and performance: a systematic review

Alice Miller; Julian Archer

Objective To investigate the literature for evidence that workplace based assessment affects doctors’ education and performance. Design Systematic review. Data sources The primary data sources were the databases Journals@Ovid, Medline, Embase, CINAHL, PsycINFO, and ERIC. Evidence based reviews (Bandolier, Cochrane Library, DARE, HTA Database, and NHS EED) were accessed and searched via the Health Information Resources website. Reference lists of relevant studies and bibliographies of review articles were also searched. Review methods Studies of any design that attempted to evaluate either the educational impact of workplace based assessment, or the effect of workplace based assessment on doctors’ performance, were included. Studies were excluded if the sampled population was non-medical or the study was performed with medical students. Review articles, commentaries, and letters were also excluded. The final exclusion criterion was the use of simulated patients or models rather than real life clinical encounters. Results Sixteen studies were included. Fifteen of these were non-comparative descriptive or observational studies; the other was a randomised controlled trial. Study quality was mixed. Eight studies examined multisource feedback with mixed results; most doctors felt that multisource feedback had educational value, although the evidence for practice change was conflicting. Some junior doctors and surgeons displayed little willingness to change in response to multisource feedback, whereas family physicians might be more prepared to initiate change. Performance changes were more likely to occur when feedback was credible and accurate or when coaching was provided to help subjects identify their strengths and weaknesses. Four studies examined the mini-clinical evaluation exercise, one looked at direct observation of procedural skills, and three were concerned with multiple assessment methods: all these studies reported positive results for the educational impact of workplace based assessment tools. However, there was no objective evidence of improved performance with these tools. Conclusions Considering the emphasis placed on workplace based assessment as a method of formative performance assessment, there are few published articles exploring its impact on doctors’ education and performance. This review shows that multisource feedback can lead to performance improvement, although individual factors, the context of the feedback, and the presence of facilitation have a profound effect on the response. There is no evidence that alternative workplace based assessment tools (mini-clinical evaluation exercise, direct observation of procedural skills, and case based discussion) lead to improvement in performance, although subjective reports on their educational impact are positive.


Medical Education | 2009

Initial evaluation of the first year of the Foundation Assessment Programme

Helena Davies; Julian Archer; Lesley Southgate; John J. Norcini

Objectives: This study represents an initial evaluation of the first year (F1) of the Foundation Assessment Programme (FAP), in line with Postgraduate Medical Education and Training Board (PMETB) assessment principles.


Archives of Disease in Childhood | 2010

Assuring validity of multisource feedback in a national programme

Julian Archer; Mary McGraw; Helena Davies

Objective To report the evidence for and challenges to the validity of the Sheffield Peer Review Assessment Tool (SPRAT) with paediatric Specialist Registrars (SpRs) across the UK as part of the Royal College of Paediatrics and Child Health workplace based assessment programme. Design Quality assurance analysis, including generalisability, of a multisource feedback questionnaire study. Setting All UK Deaneries between August 2005 and May 2006. Participants 577 year 2 and year 4 paediatric SpRs. Interventions Trainees were evaluated using SPRAT questionnaires sent to clinical colleagues of their choosing. Data were analysed reporting totals, means and SD, and year groups were compared using independent t tests. A factor analysis was undertaken. Reliability was estimated using generalisability theory. Trainee and assessor demographic details were explored to try to explain variability in scores. Main outcome measures 4770 SPRAT assessments were provided about 577 paediatric SpRs. The mean scores between years were significantly different (Year 2 mean=5.08, SD=0.34; Year 4 mean=5.18, SD=0.34). A factor analysis returned a two-factor solution: clinical care and psychosocial skills. The 95% CI showed that trainees scoring ≥4.3 with nine assessors can be seen as achieving satisfactory performance with statistical confidence. Consultants marked trainees significantly lower (t=−4.52), whereas Senior House Officers and Foundation doctors scored their SpRs significantly higher (SHO t=2.06, Foundation t=2.77). Conclusions There is increasing evidence that multisource feedback (MSF) assesses two generic traits, clinical care and psychosocial skills. The validity of MSF is threatened by systematic bias, namely leniency bias and the seniority of assessors. Unregulated self-selection of assessors needs to end.
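
The "≥4.3 with nine assessors" rule is an instance of a general decision rule: a trainee is confirmed as satisfactory only when the 95% confidence interval around their mean score, built from the standard error of measurement (SEM), clears the cut score. A minimal sketch of that logic in Python; the ratings, the single-rating SEM of 0.45 and the cut score of 4.0 are hypothetical illustrations, not values from the study:

```python
import math

def sem_lower_bound(scores, sem_single, z=1.96):
    """95% lower confidence bound for a trainee's mean rating.

    sem_single is the standard error of measurement of a single
    assessor's rating; averaging over n assessors shrinks it by
    sqrt(n), as in classical test theory.
    """
    n = len(scores)
    mean = sum(scores) / n
    sem_mean = sem_single / math.sqrt(n)
    return mean - z * sem_mean

# Hypothetical example: nine assessors on a 6-point SPRAT-style
# scale, an assumed single-rating SEM of 0.45 and a cut score of 4.0.
ratings = [4.5, 4.8, 4.2, 4.6, 4.4, 4.9, 4.3, 4.7, 4.5]
lower = sem_lower_bound(ratings, sem_single=0.45)
print(f"lower 95% bound = {lower:.2f}")  # about 4.25 here
print("satisfactory with confidence" if lower >= 4.0 else "cannot confirm yet")
```

With these assumed numbers a mean of exactly 4.3 gives a lower bound of roughly 4.01, which is why a threshold near 4.3 emerges; the study itself derived its figure from a generalisability analysis of the full SPRAT dataset.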


Medical Education | 2008

Specialty-specific multi-source feedback: assuring validity, informing training.

Helena Davies; Julian Archer; Adrian C Bateman; Sandra Dewar; Jim Crossley; Janet Grant; Lesley Southgate

Context: The white paper ‘Trust, Assurance and Safety: the Regulation of Health Professionals in the 21st Century’ proposes a single, generic multi‐source feedback (MSF) instrument in the UK. Multi‐source feedback was proposed as part of the assessment programme for Year 1 specialty training in histopathology.


Archives of Disease in Childhood | 2010

Assessment of doctors' consultation skills in the paediatric setting: the Paediatric Consultation Assessment Tool.

Rachel J Howells; Helena Davies; Jonathan Silverman; Julian Archer; A F Mellon

Objective To determine the utility of a novel Paediatric Consultation Assessment Tool (PCAT). Design Developed to measure clinicians' communication behaviour with children and their parents/guardians, PCAT was designed according to consensus guidelines and refined at a number of stages. Volunteer clinicians provided videotaped real consultations, and assessors were trained to score communication skills using the PCAT rating scale. Setting Eight UK paediatric units. Participants 19 paediatricians collected video-recorded material; a second cohort of 17 clinicians rated the videos. Main outcome measures Itemised and aggregated scores were analysed (means and 95% confidence intervals) to determine measurement characteristics and their relationship to patient, consultation, clinician and assessor attributes; generalisability coefficient of the aggregate score; factor analysis of items; comparison of scores between groups of patients, consultations, clinicians and assessors. Results 188 complete consultations were analysed (median per doctor = 10). Three videos marked by any trained assessor are needed to reliably (r>0.8) assess a doctor's triadic consultation skills using PCAT, and four to assess communication with just children or parents. Performance maps to two factors, "clinical skills" and "communication behaviour"; clinicians score more highly on the former (mean (SD) 0.52 (0.075)). There were significant differences in scores for the same skills applied to parent and child, especially between the ages of 2 and 10 years, and for information-sharing rather than relationship-building skills (2-tailed significance <0.001). Conclusions The PCAT appears to be reliable, valid and feasible for the assessment of triadic consultation skills by direct observation.
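
The finding that three marked videos suffice for reliable (r>0.8) assessment is the kind of result a decision (D-) study yields: given the reliability of a single observation, the Spearman-Brown prophecy formula projects the reliability of the mean of n observations. A short sketch; the single-video reliability of 0.58 is an assumed figure for illustration, not one reported in the paper:

```python
def spearman_brown(r_single: float, n: int) -> float:
    """Projected reliability of the mean of n observations, each
    with single-observation reliability r_single."""
    return n * r_single / (1 + (n - 1) * r_single)

def observations_needed(r_single: float, target: float = 0.8) -> int:
    """Smallest n whose projected reliability reaches the target
    (equivalently, ceil(target * (1 - r) / (r * (1 - target))))."""
    n = 1
    while spearman_brown(r_single, n) < target:
        n += 1
    return n

# Assumed single-video reliability of 0.58 (illustrative only).
print(observations_needed(0.58))  # -> 3 under this assumption
```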


Medical Education | 2008

Can a district hospital assess its doctors for re-licensure?

Jim Crossley; John McDonnell; Charlie Cooper; Pauline McAvoy; Julian Archer; Helena Davies

Context: The Chief Medical Officer’s recommendations on medical regulation in the UK suggest that National Health Service (NHS) trusts should assess their doctors and confirm whether they remain fit to practise medicine.


Medical Education | 2011

Factors that might undermine the validity of patient and multi-source feedback.

Julian Archer; Pauline McAvoy

Medical Education 2011: 45: 886–893


Patient Education and Counseling | 2011

Initial evaluation of EPSCALE, a rating scale that assesses the process of explanation and planning in the medical interview

Jonathan Silverman; Julian Archer; Susan Gillard; Rachel Howells; John M. Benson

OBJECTIVE: To evaluate the content validity, internal consistency and generalisability of EPSCALE, a new rating scale to measure communication skills in explanation and planning. METHODS: Content validity: consensus exercise and expert review. Internal consistency and generalisability: 124 clinical students undertaking 4 OSCE stations with simulated patients, with one observer (hospital specialist, GP or communication specialist) per station, during finals examinations. Internal consistency was estimated by coefficient alpha; generalisability was estimated by the generalisability coefficient and variance components using EPSCALE. RESULTS: Content validity was supported by the consensus exercise and expert review. Internal consistency was high, with a coefficient alpha greater than 0.8 for all four explanation and planning stations in the finals exam. The generalisability coefficient for 4 OSCE stations was 0.50. CONCLUSIONS: This paper provides initial evidence that EPSCALE has content validity and high internal consistency when used to assess explanation and planning skills in the consultation. It defines the generalisability of this new rating scale. Further work is needed to explore the scale's validity by a range of other measures.
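
Coefficient alpha, the internal-consistency statistic reported above, can be computed directly from an examinees-by-items score matrix. A minimal sketch using NumPy; the six-by-four score matrix below is invented for illustration and is not EPSCALE data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of row totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented ratings for 6 examinees on 4 items (not EPSCALE data).
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # about 0.92 for this matrix
```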


Postgraduate Medical Journal | 2010

Republished paper: Assuring validity of multisource feedback in a national programme.

Julian Archer; Mary McGraw; Helena Davies

Republished from Archives of Disease in Childhood (2010); the abstract is identical to the version above.

Collaboration


Dive into Julian Archer's collaborations.

Top Co-Authors

A F Mellon, City Hospitals Sunderland NHS Foundation Trust
Alice Miller, Peninsula College of Medicine and Dentistry
Chris Ricketts, Peninsula College of Medicine and Dentistry
Mary McGraw, University Hospitals Bristol NHS Foundation Trust
Jim Crossley, University of Sheffield
Adrian C Bateman, University Hospital Southampton NHS Foundation Trust
Adrian Freeman, Peninsula College of Medicine and Dentistry