Louise Hull
Imperial College London
Publications
Featured research published by Louise Hull.
Journal of The American College of Surgeons | 2011
Louise Hull; Sonal Arora; Eva Kassab; Roger Kneebone; Nick Sevdalis
BACKGROUND Effective teamwork is crucial for safe surgery. Failures in nontechnical and teamwork skills are frequently implicated in adverse events. The Observational Teamwork Assessment for Surgery (OTAS) tool assesses teamwork of the entire team in the operating room. Empirical testing of OTAS has yet to explore the content validity of the tool. STUDY DESIGN This was a cross-sectional observational study. Data were collected in 30 procedures by 2 trained researchers. Five teamwork behaviors were scored (ie, communication, leadership, cooperation, coordination, and monitoring) and behavior exemplar completion was recorded (phase 1). Expert operating room personnel (5 surgeons, 5 anesthesiologists, and 5 scrub nurses) assessed the content validity of the OTAS exemplar behaviors. Finally, a panel of operating room patient-safety experts refined the exemplars (phase 2). RESULTS In total, the observability (presence/absence) of 130 exemplars was assessed by 2 blinded observers in 30 general surgical cases. Observer agreement was high (Cohen's κ ≥ 0.41) for 83.85% (109 of 130) of exemplar behaviors; 60.77% (79 of 130) of exemplar behaviors were observed frequently with high observer agreement. The majority of the exemplars were rated by expert operating room practitioners and an expert panel as substantial contributors to teamwork and patient safety. Based on expert consensus, 21 behavior exemplars were removed from OTAS and an additional 23 were modified. CONCLUSIONS The exemplars of OTAS demonstrated very good content validity. Taken together with recent evidence on the construct validity of the tool, these findings demonstrate that OTAS is psychometrically robust for capturing teamwork in the operating room.
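The agreement figures above come from Cohen's κ computed per exemplar from the two observers' present/absent judgements across the 30 cases. A minimal sketch of that calculation, with invented ratings (the study's raw observations are not reproduced here):

```python
# Hypothetical sketch of per-exemplar observer agreement as described above.
# The 30 present/absent judgements per observer are invented for illustration.
from sklearn.metrics import cohen_kappa_score

# 1 = exemplar behavior observed in the case, 0 = not observed (30 cases)
observer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1,
              1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1]
observer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1,
              1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1]

kappa = cohen_kappa_score(observer_a, observer_b)
# kappa >= 0.41 is the 'moderate agreement or better' cut-off the abstract uses.
print(f"Cohen's kappa = {kappa:.2f}; meets threshold: {kappa >= 0.41}")
```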
Annals of Surgery | 2012
Sonal Arora; Maria Ahmed; John T. Paige; Debra Nestel; Jane Runnacles; Louise Hull; Ara Darzi; Nick Sevdalis
Objective: To identify the features of effective debriefing and to use this to develop and validate a tool for assessing such debriefings. Introduction: Simulation-based training has become an accepted means of surgical skill acquisition. A key component of this is debriefing, yet there is a paucity of research to guide best practice. Methods: Phase 1: Identification of best practice and tool development. A search of the Medline, Embase, PsycINFO, and ERIC databases identified current evidence on debriefing. End-user input was obtained through 33 semistructured interviews conducted with surgeons (n = 18) and other operating room personnel (n = 15) from 3 continents (UK, USA, Australia) using standardized qualitative methodology. An expert panel (n = 7) combined the data to create the Objective Structured Assessment of Debriefing (OSAD) tool. Phase 2: Psychometric testing. OSAD was tested for feasibility, reliability, and validity by 2 independent assessors who rated 20 debriefings following high-fidelity simulations. Results: Phase 1: 28 reports on debriefing were retrieved from the literature. Key components of an effective debriefing identified from these reports and the 33 interviews included: approach to debriefing, learning environment, learner engagement, reaction, reflection, analysis, diagnosis of strengths and areas for improvement, and application to clinical practice. Phase 2: OSAD was feasible, reliable [inter-rater ICC (intraclass correlation coefficient) = 0.88, test–retest ICC = 0.90], and face and content valid (content validity index = 0.94). Conclusions: OSAD provides an evidence-based, end-user informed approach to debriefing in surgery. By quantifying the quality of a debriefing, OSAD has the potential to identify areas for improving practice and to optimize learning during simulation-based training.
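The content validity index reported above is conventionally the proportion of expert raters scoring each item as relevant (3 or 4 on a 4-point scale), averaged across items. A hedged sketch of that calculation; the panel size and ratings below are invented:

```python
# Hypothetical sketch of a content validity index (I-CVI per item, S-CVI/Ave
# for the whole tool). The 4 items and 7 raters below are invented.
import numpy as np

# rows = candidate OSAD items, columns = expert raters (1-4 relevance scale)
ratings = np.array([
    [4, 4, 3, 4, 4, 3, 4],
    [3, 4, 4, 4, 3, 4, 4],
    [4, 3, 4, 4, 4, 4, 3],
    [4, 4, 4, 3, 4, 4, 4],
])

item_cvi = (ratings >= 3).mean(axis=1)   # proportion of raters judging each item relevant
scale_cvi = item_cvi.mean()              # average across items
print(f"I-CVI per item: {np.round(item_cvi, 2)}, S-CVI/Ave: {scale_cvi:.2f}")
```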
BJA: British Journal of Anaesthesia | 2012
Nick Sevdalis; Louise Hull; David J. Birnbach
The publication of To Err Is Human in the USA and An Organisation with a Memory in the UK more than a decade ago put patient safety firmly on the clinical and policy agenda. To date, however, progress in improving safety and outcomes of hospitalized patients has been slower than the authors of these reports had envisaged. Here, we first review and analyse some of the reasons for the lack of evident progress in improving patient safety across healthcare specialities. We then focus on what we believe is a critical part of the healthcare system that can contribute to safety but also to error: healthcare teams. Finally, we review team training interventions and tools available for the assessment and improvement of team performance, and we offer recommendations based on the existing evidence base that have the potential to improve patient safety and outcomes in the coming decade.
Annals of Surgery | 2013
Louise Hull; Sonal Arora; Nicholas R.A. Symons; Rozh Jalil; Ara Darzi; Charles Vincent; Nick Sevdalis; Delphi Expert Consensus Panel
Objective: To develop guidelines for a faculty training program in nontechnical skill assessment in surgery. Background: Nontechnical skills in the operating room are critical for patient safety. The successful integration of these skills into workplace-based assessment is dependent upon the availability of faculty who are able to teach and assess them. At present, no guidelines exist regarding the training requirements for such faculty in surgical contexts. Methods: The development of the guidelines was carried out in several stages: stage 1—a detailed literature review on current training for nontechnical skill assessors; stage 2—semistructured interviews with a multidisciplinary panel (consisting of clinicians and psychologists/human factors specialists) of experts in surgical nontechnical skills; and stage 3—interview findings fed into an Expert Consensus Panel (ECP) Delphi approach to establish consensus regarding training requirements for faculty assessing nontechnical skills in surgery. Results: The ECP agreed that training in nontechnical skill assessment should be delivered by a multidisciplinary team consisting of clinicians and psychologists/human factors specialists. The ECP reached consensus regarding who should be targeted to be trained as faculty (including proficiency and revalidation requirements). Consensus was reached on 7 essential training program content elements (including training in providing feedback/debriefing) and 8 essential methods of evaluating the effectiveness of a “train-the-trainers” program. Conclusions: This study provides evidence-based guidelines that can be used to guide the development and evaluation of programs to educate faculty in the training and assessment of nontechnical skills. Uptake of these guidelines could accelerate the development of surgical expertise required for safe and high-quality patient care.
Annals of Surgery | 2012
Stephanie Russ; Louise Hull; Shantanu Rout; Charles Vincent; Ara Darzi; Nick Sevdalis
Objectives: To assess the feasibility of training clinical and nonclinical novice assessors to rate teamwork behavior in the operating room with short-term structured training using the Observational Teamwork Assessment for Surgery (OTAS) tool. Background: Effective teamwork is fundamental to the delivery of optimal patient care in the operating room (OR). OTAS provides a comprehensive and robust measure of teamwork in surgery. To date, assessors with a background in psychology/human factors have been shown to be able to use OTAS reliably after training. However, the feasibility of observer training over a short timescale and accessibility to the wider clinical community (ie, OTAS use by clinicians) are yet to be empirically demonstrated. Methods: Ten general surgery cases were observed and assessed using OTAS in real time by an expert in rating OTAS behaviors (100+ cases rated) and 4 novices: 2 psychologists and 2 surgeons. Assessors were blinded to each other's scores during observations. After each observation, scores were compared and discussed between expert and novice assessors in a debriefing session. Results: All novices were reliable with the expert to an acceptable degree at rating all OTAS behaviors by the end of training (intraclass correlation coefficients ≥0.68). For 3 of the 5 behaviors (communication, cooperation, and leadership), calibration improved most rapidly across the first 7 observed cases. For monitoring/situational awareness, calibration improved steadily across the 10 observed cases. For coordination, no significant improvement in calibration over time was observed because of high interrater reliability from the outset (ie, a ceiling effect). There was no significant difference between surgeons and psychologists in their calibration with the expert. Conclusions: It is feasible to train both clinicians and nonclinicians to use OTAS to assess teamwork behaviors in ORs over a short structured training period. OTAS is an accessible tool that can be used robustly (ie, reliably) by assessors from both clinical and nonclinical backgrounds.
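The calibration criterion above is an intraclass correlation between the expert's and a novice's per-case behaviour scores. A sketch of one common form, ICC(2,1) (two-way, single rater, absolute agreement), with invented scores; the study does not specify which ICC variant was used:

```python
# Hypothetical sketch of expert-novice calibration via ICC(2,1).
# Column 0 = expert score, column 1 = novice score, one row per observed case.
import numpy as np

scores = np.array([
    [5.0, 4.5], [4.0, 4.0], [3.5, 4.0], [4.5, 4.5], [5.0, 5.0],
    [3.0, 3.5], [4.0, 4.5], [4.5, 4.0], [5.0, 4.5], [3.5, 3.5],
])

def icc_2_1(x):
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-case mean square
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-rater mean square
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))               # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

icc = icc_2_1(scores)
print(f"ICC(2,1) expert vs novice: {icc:.2f}; calibrated: {icc >= 0.68}")
```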
Medical Teacher | 2010
Eva Kassab; Dominic King; Louise Hull; Sonal Arora; Nick Sevdalis; Roger Kneebone; Debra Nestel
Background: Immersive simulations can enable surgeons to learn complex sets of skills required for safe surgical practice without risk to patients. However, recruiting healthcare professionals to support surgeons' training as members of an operating theatre (OT) team is challenging and resource intensive. Aim: We developed a training programme for actors to take on the role of an OT team to support validation studies in a simulated environment. This article describes the evaluation of the programme. Methods: The programme comprised written materials, video discussion, and experiential activities. Evaluation methods consisted of post-simulation interviews and questionnaires with actors and surgeons. Participants were recruited by convenience sampling. Quantitative data were analysed using descriptive statistics and interviews were analysed using thematic extraction. Results: Three actors participated in the programme. Twelve surgeons completed simulations. All data suggest that the training was successful. Actors were perceived as realistic. Suggestions were made to improve training. Conclusion: After a brief training, actors can realistically portray members of an OT team in simulations designed to support surgeon training. This article highlights factors that contributed to success and suggests improvements. Although there are limitations with the study, its findings have relevance to training and assessment that focuses on individual clinicians functioning as a member of an OT team.
BMJ Quality & Safety | 2017
Ann-Marie Howell; Elaine M. Burns; Louise Hull; Erik Mayer; Nick Sevdalis; Ara Darzi
Background Patient safety incident reporting systems (PSRS) have been established for over a decade, but uncertainty remains regarding the role that they can and ought to play in quantifying healthcare-related harm and improving care. Objective To establish international, expert consensus on the purpose of PSRS regarding monitoring and learning from incidents and to develop recommendations for their future role. Methods After a scoping review of the literature, semi-structured interviews with experts in PSRS were conducted. Based on these findings, a survey-based questionnaire was developed and subsequently completed by a larger expert panel. Using a Delphi approach, consensus was reached regarding the ideal role of PSRS. Recommendations for best practice were devised. Results Forty recommendations emerged from the Delphi procedure on the role and use of PSRS. Experts agreed that reporting systems should not be used as an epidemiological tool to monitor the rate of harm over time or to appraise the relative safety of hospitals. They agreed that reporting is a valuable mechanism for identifying organisational safety needs. The benefit of a national system was clear with respect to medication error, device failures, hospital-acquired infections and never events, as these problems often require solutions at a national level. Experts recommended training for senior healthcare professionals in incident investigation. The consensus recommendation was for hospitals to take responsibility for creating safety solutions locally that could be shared nationally. Conclusions We obtained reasonable consensus among experts on the aims and specifications of PSRS. This information can be used to reflect on existing and future PSRS, and their role within the wider patient safety landscape. The role of PSRS as instruments for learning needs to be elaborated and developed further internationally.
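The Delphi step described above amounts to tallying panel agreement for each candidate recommendation against a preset consensus threshold. A toy sketch; the responses and the 70% cut-off are invented, as the abstract does not state the threshold used:

```python
# Hypothetical sketch of a Delphi consensus tally. 1 = panellist agrees with the
# candidate recommendation, 0 = disagrees; statements and votes are invented.
statements = {
    "PSRS should not be used to monitor harm rates over time": [1, 1, 1, 0, 1, 1, 1, 1, 1, 1],
    "PSRS are valuable for identifying organisational safety needs": [1, 1, 1, 1, 1, 1, 0, 1, 1, 1],
    "Hospitals should be ranked by the volume of reports they submit": [0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
}

CONSENSUS_THRESHOLD = 0.70  # assumed cut-off for this illustration
for statement, votes in statements.items():
    agreement = sum(votes) / len(votes)
    verdict = "retain" if agreement >= CONSENSUS_THRESHOLD else "drop or revise"
    print(f"{agreement:.0%}  {verdict:14s} {statement}")
```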
Journal of The American College of Surgeons | 2013
Stephanie Russ; Sonal Arora; Rupert Wharton; Ana Wheelock; Louise Hull; Eshaa Sharma; Ara Darzi; Charles Vincent; Nick Sevdalis
BACKGROUND Although a number of validated tools are available for assessing nontechnical skills and teamwork in the operating room (OR), there are no tools for measuring completion of key OR tasks, which is fundamental to effective teamwork, patient safety, and OR efficiency. This study describes the development and content validation of a new tool (ie, the Metric for Evaluating Task Execution in the Operating Room) for measuring basic task completion during surgical procedures. STUDY DESIGN The content validity of 106 OR tasks was assessed using 50 real-time observations of general surgical procedures, followed by a process of expert consensus. A panel of 15 OR experts (ie, surgeons, anesthesiologists, and OR nurses) was asked to rate all tasks observed in <70% of procedures for relevance to patient safety and OR efficiency (using scientifically accepted definitions). Tasks rated highly were retained. Those perceived less relevant were removed. A second panel of patient-safety experts refined the tool to remove duplication, ensure usability, and include novel tasks. RESULTS Twenty-four of the original 106 tasks were observed in <70% of cases. Seven of these were rated highly by the OR experts for relevance to patient safety and efficiency and were retained in the Metric for Evaluating Task Execution in the Operating Room. Of the remaining 17, 4 were retained and 13 were removed by the patient-safety experts. In the final revision phase, an additional 23 tasks were removed and 10 new tasks added. The final tool consists of 80 OR tasks relating to well-established processes of care. CONCLUSIONS The Metric for Evaluating Task Execution in the Operating Room is easy to use and can identify specific gaps in safety and/or efficiency in OR processes. Next, we should examine its links with additional measures of OR performance, for example, patient outcomes, list cancellations/delays, and nontechnical skills.
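The screening step above reduces to computing, for each of the 106 candidate tasks, the proportion of observed procedures in which it appeared and flagging those below 70% for expert rating. A sketch with a simulated observation matrix standing in for the real field data:

```python
# Hypothetical sketch of the <70% observation screen. Each task gets an
# invented completion probability; the real data came from 50 observed cases.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_cases = 106, 50

p_complete = rng.uniform(0.5, 1.0, size=n_tasks)                  # per-task probability (invented)
observed = rng.random((n_tasks, n_cases)) < p_complete[:, None]   # True = task seen in that case

observation_rate = observed.mean(axis=1)
needs_expert_review = np.flatnonzero(observation_rate < 0.70)
print(f"{needs_expert_review.size} of {n_tasks} tasks seen in <70% of cases "
      f"-> referred to the expert panel for relevance rating")
```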
Annals of Surgery | 2016
Maximilian Johnston; Sonal Arora; Philip H. Pucher; Yannis Reissis; Louise Hull; Huddy; Dominic King; Ara Darzi
Objective: To develop and provide validity and feasibility evidence for the QUality of Information Transfer (QUIT) tool. Background: Prompt escalation of care in the setting of patient deterioration can prevent further harm. Escalation and information transfer skills are not currently measured in surgery. Methods: This study comprised 3 phases: the development (phase 1), validation (phase 2), and feasibility analysis (phase 3) of the QUIT tool. Phase 1 involved identification of core skills needed for successful escalation of care through literature review and 33 semistructured interviews with stakeholders. Phase 2 involved the generation of validity evidence for the tool using a simulated setting. Thirty surgeons assessed a deteriorating postoperative patient in a simulated ward and escalated the patient's care to a senior colleague. The face and content validity were assessed using a survey. Construct and concurrent validity of the tool were determined by comparing performance scores using the QUIT tool with those measured using the Situation-Background-Assessment-Recommendation (SBAR) tool. Phase 3 was conducted using direct observation of escalation scenarios on surgical wards in 2 hospitals. Results: A 7-category assessment tool consisting of 24 items was developed in phase 1. Twenty-one of the 24 items had excellent content validity (content validity index >0.8). All 7 categories and 18 of the 24 items (P < 0.05) demonstrated construct validity. The correlation between scores on the QUIT and SBAR tools was strong, indicating concurrent validity (r = 0.694, P < 0.001). Real-time scoring of escalation referrals was feasible and indicated that doctors currently have better information transfer skills than nurses when faced with a deteriorating patient. Conclusions: A validated tool to assess information transfer for deteriorating surgical patients was developed and tested using simulation and real-time clinical scenarios. It may improve the quality and safety of patient care on the surgical ward.
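The concurrent-validity figure above rests on a simple correlation between total QUIT scores and SBAR-based scores for the same simulated escalations. A sketch with invented scores; the study's r = 0.694 came from 30 surgeons:

```python
# Hypothetical sketch of the QUIT vs SBAR concurrent-validity check.
# The paired scores below are invented for illustration.
from scipy.stats import pearsonr

quit_scores = [18, 22, 15, 20, 24, 17, 21, 19, 23, 16]
sbar_scores = [14, 17, 12, 16, 19, 13, 17, 15, 18, 12]

r, p = pearsonr(quit_scores, sbar_scores)
print(f"QUIT vs SBAR: r = {r:.3f}, p = {p:.4f}")
```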
American Journal of Surgery | 2014
Iliana J. Harrysson; Louise Hull; Nick Sevdalis; Ara Darzi; Rajesh Aggarwal
BACKGROUND The implementation of duty-hour restrictions and a heightened awareness of patient safety have changed resident education and training. A new focus has been placed on high-yield training programs, and simulation training has naturally grown to fill this need. METHODS This article discusses the development of a training framework (knowledge, skills, and attitudes) and the design of a surgical simulation curriculum. Five residents were recruited for a pilot study of the curriculum. RESULTS A successful framework for curriculum development was implemented using laparoscopic cholecystectomy as the example. The curriculum consisted of classroom and virtual reality simulation training and was completed in 3.1 to 4.8 hours. CONCLUSIONS The current curricula developed for surgical education cover the breadth of a surgical residency well. This curriculum went beyond them by providing a structured framework for surgical training, a method that can be applied to any procedure.