Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Douglas Murphy is active.

Publication


Featured research published by Douglas Murphy.


Advances in Health Sciences Education | 2009

The reliability of workplace-based assessment in postgraduate medical education and training: a national evaluation in general practice in the United Kingdom

Douglas Murphy; David Bruce; Stewart W. Mercer; Kevin W. Eva

To investigate the reliability and feasibility of six potential workplace-based assessment methods in general practice training: criterion audit, multi-source feedback from clinical and non-clinical colleagues, patient feedback (the CARE Measure), referral letters, significant event analysis, and video analysis of consultations. The performance of GP registrars (trainees) was evaluated with each tool to assess the tools' reliability and feasibility, given the raters and number of assessments needed; participants' experience of the process was determined by questionnaire. 171 GP registrars and their trainers, drawn from nine deaneries (representing all four countries of the UK), participated. The ability of each tool to differentiate between doctors (reliability) was assessed using generalisability theory. Decision studies were then conducted to determine the number of observations required to achieve an acceptably high reliability for “high-stakes assessment” using each instrument. Finally, descriptive statistics were used to summarise participants' ratings of their experience of using these tools. Multi-source feedback from colleagues and patient feedback on consultations emerged as the two methods most likely to offer a reliable and feasible opinion of workplace performance. Reliability coefficients of 0.8 were attainable with 41 CARE Measure patient questionnaires and six clinical and/or five non-clinical colleagues per doctor when assessed on two occasions. For the other four methods tested, 10 or more assessors were required per doctor to achieve a reliable assessment, making the feasibility of their use in high-stakes assessment extremely low. Participant feedback did not raise any major concerns regarding the acceptability, feasibility, or educational impact of the tools. The combination of patient and colleague views of doctors' performance, coupled with reliable competence measures, may offer a suitable evidence base on which to monitor progress and completion of doctors' training in general practice.
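The decision-study arithmetic behind these figures can be sketched with the Spearman–Brown prophecy formula, which projects the reliability of a mean over n observations from a single-observation generalisability coefficient and can be inverted to find the n needed to reach 0.8. A minimal sketch in Python; the g1 value below is hypothetical, chosen only to match the order of magnitude of the 41-questionnaire result:

```python
import math

def reliability_of_mean(g1: float, n: int) -> float:
    """Spearman-Brown: reliability of the mean of n observations,
    given the single-observation generalisability coefficient g1."""
    return n * g1 / (1 + (n - 1) * g1)

def n_needed(g1: float, target: float = 0.8) -> int:
    """Smallest n whose mean score reaches the target reliability."""
    return math.ceil(target * (1 - g1) / (g1 * (1 - target)))

# Hypothetical: a single CARE Measure questionnaire with g1 ~ 0.089
# would imply roughly 41 questionnaires for a coefficient of 0.8.
print(n_needed(0.089))                           # 41
print(round(reliability_of_mean(0.089, 41), 2))  # 0.8
```

The same inversion shows why instruments with a low single-observation coefficient need 10 or more assessors per doctor to clear the high-stakes threshold.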


Medical Teacher | 2011

Providing feedback: Exploring a model (emotion, content, outcomes) for facilitating multisource feedback

Joan Sargeant; Elaine McNaughton; Stewart W. Mercer; Douglas Murphy; Patricia Sullivan; David Bruce

Background: Multi-source feedback (MSF) aims to raise self-awareness of performance and encourage improvement. The ECO model (emotions, content, outcome) is a three-step process developed from the counselling literature to facilitate feedback acceptance and use in MSF. Aims: The purpose of this study was to explore the acceptability, usefulness and educational impact of the model. Methods: This was a qualitative study using interviews to explore general practice (GP) trainer and trainee experiences and perceptions of the ECO facilitation model. Interviews were conducted by telephone, recorded, transcribed and analysed using a thematic framework. Results: Thirteen GP trainers and trainees participated in the interviews following their MSF discussions using the ECO model. They agreed that the model was useful, simple to use and engaged trainees in reflection upon their feedback and performance. Exploring emotions and clarifying content appeared integral to accepting and using the feedback. Positive feedback was often surprising. Most trainees reported performance improvements following their MSF–ECO session. Conclusions: The model appeared acceptable and simple to use. Engaging the learner as a partner in the feedback discussion appeared effective. Further research is needed to fully understand the influence of each step in facilitating MSF acceptance and use, and to determine the impact of the ECO model alone upon performance outcomes compared to more traditional provision of MSF feedback.


BMJ Open | 2015

Bad apples or spoiled barrels? Multilevel modelling analysis of variation in high-risk prescribing in Scotland between general practitioners and between the practices they work in

Bruce Guthrie; Peter T. Donnan; Douglas Murphy; Boikanyo Makubate; Tobias Dreischulte

Objectives: Primary care high-risk prescribing causes significant harm, but it is unclear whether it is largely driven by individuals (a ‘bad apple’ problem) or by practices having higher or lower risk prescribing cultures (a ‘spoiled barrel’ problem). The study aimed to examine the extent of variation in high-risk prescribing between individual prescribers and between the practices they work in. Design, setting and participants: Multilevel logistic regression modelling of routine cross-sectional data from 38 Scottish general practices, covering 181 010 encounters between 398 general practitioners (GPs) and 26 539 patients particularly vulnerable to adverse drug events (ADEs) of non-steroidal anti-inflammatory drugs (NSAIDs) due to age, comorbidity or co-prescribing. Outcome measure: Initiation of a new NSAID prescription in an encounter between a GP and an eligible patient. Results: A new high-risk NSAID was initiated in 1953 encounters (1.1% of encounters, 7.4% of patients). Older patients, those with more vulnerabilities to NSAID ADEs and those with polypharmacy were less likely to have a high-risk NSAID initiated, consistent with GPs generally recognising the risk of NSAIDs in eligible patients. Male GPs were more likely to initiate a high-risk NSAID than female GPs (OR 1.73, 95% CI 1.39 to 2.16). After accounting for patient characteristics, 4.2% (95% CI 2.1 to 8.3) of the variation in high-risk NSAID prescribing was attributable to variation between practices, and 14.2% (95% CI 11.4 to 17.3) to variation between GPs. Three practices had statistically higher than average high-risk prescribing, but only 15.7% of GPs with higher than average high-risk prescribing, and 18.5% of patients receiving such a prescription, were in these practices. Conclusions: There was much more variation in high-risk prescribing between GPs than between practices, and targeting only practices with higher than average rates will miss most high-risk NSAID prescribing. Primary care prescribing safety improvement should ideally target all practices, but encourage practices to consider and act on variation between prescribers within the practice.
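The percentages attributed to practices and GPs follow the standard variance-partition arithmetic for multilevel logistic models, in which the patient-level residual on the latent scale is fixed at π²/3. A minimal sketch; the practice- and GP-level variance components below are hypothetical, chosen only to show how a 4.2%/14.2% split arises:

```python
import math

# Variance-partition coefficients for a three-level logistic model
# (patients within GPs within practices), latent-variable method.
var_practice = 0.17             # between-practice variance (hypothetical)
var_gp = 0.572                  # between-GP variance (hypothetical)
var_patient = math.pi ** 2 / 3  # ~3.29, fixed level-1 residual

total = var_practice + var_gp + var_patient
print(f"practice-level share: {var_practice / total:.1%}")  # ~4.2%
print(f"GP-level share:       {var_gp / total:.1%}")        # ~14.2%
```

On this accounting, the remaining ~80% of latent-scale variation sits between patients within GPs, which is why even large GP-level differences translate into modest shares of total variance.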


BMJ Open | 2016

Development and preliminary psychometric properties of the Care Experience Feedback Improvement Tool (CEFIT)

Michelle Beattie; Ashley Shepherd; William Lauder; Iain Atherton; Julie Cowie; Douglas Murphy

Objective: To develop a structurally valid and reliable, yet brief, measure of patient experience of hospital quality of care, the Care Experience Feedback Improvement Tool (CEFIT), and to examine aspects of CEFIT's utility. Background: Measuring quality improvement at the clinical interface has become a necessary component of healthcare measurement and improvement plans, but the effectiveness of measuring such complexity depends on the purpose and utility of the instrument used. Methods: CEFIT was designed from a theoretical model, derived from the literature, and a content validity index (CVI) procedure. A telephone population survey asked 802 eligible participants (healthcare experience within the previous 12 months) to complete CEFIT. Internal consistency reliability was tested using Cronbach's α. Principal component analysis was conducted to examine the factor structure and determine structural validity. Quality criteria were applied to judge aspects of utility. Results: The CVI found a statistically significant proportion of agreement between patient and practitioner experts on CEFIT's construction. 802 eligible participants answered the CEFIT questions. The Cronbach's α coefficient for internal consistency indicated high reliability (0.78). Item (question)–total correlations (0.28–0.73) were used to establish the final instrument. Principal component analysis identified one factor accounting for 57.3% of variance. The quality critique rated CEFIT as fair for content validity, excellent for structural validity, good for cost, poor for acceptability and good for educational impact. Conclusions: CEFIT offers a brief yet structurally sound measure of patient experience of quality of care. The brevity of the 5-item instrument arguably offers high utility in practice. Further studies are needed to explore the utility of CEFIT to provide a robust basis for feedback to local clinical teams and drive quality improvement in the provision of care experience for patients. Further development of aspects of utility is also required.
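Both headline statistics, Cronbach's α and the proportion of variance captured by the first principal component, are short computations on the respondents-by-items score matrix. A minimal sketch with simulated responses (not real CEFIT data); the simulation parameters are tuned so the outputs land near the reported 0.78 and 57.3%:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def pc1_variance_share(items: np.ndarray) -> float:
    """Share of variance on the first principal component of the
    item correlation matrix (eigvalsh sorts eigenvalues ascending)."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
    return eigvals[-1] / eigvals.sum()

# Simulated 5-item responses for 802 respondents (CEFIT has 5 items;
# the survey had 802 participants).  A single latent 'care
# experience' factor drives all items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(802, 1))
scores = 5 + 0.85 * latent + rng.normal(size=(802, 5))

print(f"alpha: {cronbach_alpha(scores):.2f}")          # ~0.78
print(f"PC1 share: {pc1_variance_share(scores):.1%}")  # ~54%
```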


BMC Medical Education | 2015

Insightful Practice: a robust measure of medical students' professional response to feedback on their performance

Douglas Murphy; Patricia E Aitchison; Virginia Hernandez Santiago; Peter Davey; Gary Mires; Dilip Nathwani

Background: Healthcare professionals need to show accountability, responsibility and an appropriate response to audit feedback. Assessment of Insightful Practice (engagement, insight and appropriate action for improvement) has been shown to offer a robust system, in general practice, for identifying concerns in doctors' response to independent feedback. This study researched the system's utility in medical undergraduates. Methods: Setting and participants: 28 fourth-year medical students reflected on their performance feedback, supported by a staff coach. Students' portfolios were divided into two groups (n = 14). Group 1 portfolios were assessed by three staff assessors calibrated using group training, and Group 2 portfolios by three staff assessors un-calibrated by one-to-one training. Assessments were made in a blinded web-based exercise, and assessors were senior Medical School staff. Design: Case series with mixed qualitative and quantitative methods. A feedback dataset was specified as (1) student-specific End-of-Block Clinical Feedback, (2) other available Medical School assessment data and (3) an assessment of students' identification of prescribing errors. Analysis and statistical tests: Generalisability (G-) theory and associated decision (D-) studies were used to assess the reliability of the system and of a subsequent recommendation on students' suitability to progress in training. One-to-one interviews explored participants' experiences. Main outcome measures: The primary outcome measure was the inter-rater reliability of the assessment of students' Insightful Practice; secondary outcome measures were the reactions of participants and their self-reported behavioural change. Results: The method offered a feasible and highly reliable global assessment for calibrated assessors, G (inter-rater reliability) > 0.8 with two assessors, but not for un-calibrated assessors (G < 0.31). Calibrated assessment proved an acceptable basis for enhancing feedback and identifying concerns about professionalism. Students reported increased awareness of teamwork and of the importance of heeding advice. Coaches reported improvement in their feedback skills and commitment to improving the quality of student feedback. Conclusions: Insightful Practice offers a reliable and feasible method to evaluate medical undergraduates' professional response to their training feedback. The piloted system offers a way to assist the early identification of students at risk and to monitor, where required, the remediation of students to bring their level(s) of professional response to feedback back ‘on track’.
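The inter-rater G coefficients quoted here come from a persons-by-raters generalisability study. A minimal sketch, assuming a fully crossed design with one rating per cell and hypothetical scores: variance components are estimated from two-way ANOVA mean squares, then projected to the reliability of a mean over n assessors (a D-study):

```python
import numpy as np

def relative_g(ratings: np.ndarray, n_raters: int) -> float:
    """Relative G-coefficient for the mean of n_raters, from a fully
    crossed persons x raters design with one rating per cell."""
    n_p, n_r = ratings.shape
    grand = ratings.mean()
    ss_p = n_r * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_r = n_p * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_tot = ((ratings - grand) ** 2).sum()
    ms_p = ss_p / (n_p - 1)
    ms_res = (ss_tot - ss_p - ss_r) / ((n_p - 1) * (n_r - 1))
    var_person = max((ms_p - ms_res) / n_r, 0.0)  # true-score variance
    var_resid = ms_res  # person x rater interaction + error
    return var_person / (var_person + var_resid / n_raters)

# Hypothetical: 14 portfolios (one group's n) scored by 3 assessors,
# with calibrated assessors modelled as low-noise raters.
rng = np.random.default_rng(1)
true_scores = rng.normal(size=(14, 1))
ratings = true_scores + 0.5 * rng.normal(size=(14, 3))

print(f"G with 2 assessors: {relative_g(ratings, 2):.2f}")
# typically > 0.8 at this noise level
```

Setting n_raters to 1 makes the same machinery applicable to single-assessor designs such as the mMERIT study below.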


BMC Medical Education | 2018

Learning from errors: assessing final year medical students' reflection on safety improvement, a five-year cohort study

Vicki Tully; Douglas Murphy; Evridiki Fioratou; Arun Chaudhuri; James Shaw; Peter Davey

Background: Investigation of real incidents has been consistently identified by expert reviews and student surveys as a potentially valuable teaching resource for medical students. The aim of this study was to adapt a published method for measuring resident doctors' reflection on quality improvement and to evaluate it as an assessment tool for medical students. Methods: The design was a cohort study. Medical students were prepared with a tutorial in team-based learning format and an online Managing Incident Review course. The reliability of the modified Mayo Evaluation of Reflection on Improvement Tool (mMERIT) was analysed with generalisability (G-) theory. Long-term sustainability of assessment of incident review with mMERIT was tested over five consecutive years. Results: A total of 824 students completed an incident review using 167 incidents from NHS Tayside's online reporting system. To address the academic-practice gap, students were supervised by Senior Charge Nurses or Consultants on the wards where the incidents had been reported. Inter-rater reliability was considered sufficiently high to have one assessor for each student report. There was no evidence of a gradient in student marks across the academic year. Marks were significantly higher for students who used Section Questions to structure their reports than for those who did not. In Year 1 of the study, 21 (14%) of 153 mMERIT reports were graded as a concern. All 21 of these students achieved the required standard on resubmission. Rates of resubmission were lower (3% to 7%) in subsequent years. Conclusions: We have shown that mMERIT has high reliability with one rater. mMERIT can be used by students as part of a suite of feedback to supplement their self-assessment of their learning needs and to develop insightful practice that drives quality, safety and person-centred professional practice. Incident review addresses the need for workplace-based learning and the use of real-life examples of mistakes, which has been identified by previous studies of patient safety education in medical schools.


Medical Education | 2017

Medical appraisal: an amber zone of opportunity?

Douglas Murphy; David Bruce



Systematic Reviews | 2014

Instruments to measure patient experience of health care quality in hospitals: a systematic review protocol

Michelle Beattie; William Lauder; Iain Atherton; Douglas Murphy


BMC Family Practice | 2011

The Chinese-version of the CARE Measure reliably differentiates between doctors in primary care: a cross-sectional study in Hong Kong

Stewart W. Mercer; Colman S.C. Fung; Frank W.K. Chan; Fiona Y.Y. Wong; Samuel Y.S. Wong; Douglas Murphy


BMC Family Practice | 2015

Measuring empathic, person-centred communication in primary care nurses: validity and reliability of the Consultation and Relational Empathy (CARE) Measure

Annemieke P. Bikker; Bridie Fitzpatrick; Douglas Murphy; Stewart W. Mercer

Collaboration


Dive into Douglas Murphy's collaborations.

Top Co-Authors

Ning Yu
University of Dundee

David Bruce
NHS Education for Scotland

Iain Atherton
Edinburgh Napier University