Publications


Featured research published by Kate Goddard.


Journal of the American Medical Informatics Association | 2012

Automation bias: a systematic review of frequency, effect mediators, and mitigators

Kate Goddard; Abdul V. Roudsari; Jeremy C. Wyatt

Automation bias (AB), the tendency to over-rely on automation, has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject-specific and free-text terms around the themes of automation, human-automation interaction, and task performance and error were used to search article databases. Of 13,821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, experience with decision support systems (DSS), and task-specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which placed pressure on cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.


International Journal of Medical Informatics | 2014

Automation bias: Empirical results assessing influencing factors

Kate Goddard; Abdul V. Roudsari; Jeremy C. Wyatt

OBJECTIVE: To investigate the rate of automation bias, the propensity of people to over-rely on automated advice, and the factors associated with it. The factors tested were attitudinal (trust and confidence), non-attitudinal (decision support experience and clinical experience), and environmental (task difficulty). The paradigm of simulated decision support advice within a prescribing context was used.
DESIGN: The study employed a within-participant before-after design in which 26 UK NHS general practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers; the simulated advice was incorrect in 6 scenarios. Participants were asked to prescribe for each case, were then shown the simulated advice and asked whether they wished to change their prescription, and the post-advice prescription was recorded.
MEASUREMENTS: The overall rate of decision switching was captured. Automation bias was measured by negative consultations: switches from a correct to an incorrect prescription.
RESULTS: Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved decision accuracy in 13.1% of prescribing cases. The rate of automation bias, measured by switches from a correct pre-advice decision to an incorrect post-advice decision, was 5.2% of all cases, a net improvement of 8%. More immediate factors such as trust in the specific CDSS, decision confidence, and task difficulty influenced the rate of decision switching, and lower clinical experience was associated with more switching. Age, DSS experience, and trust in CDSS generally were not significantly associated with decision switching.
CONCLUSIONS: This study adds to the literature on the potential frequency of automation bias and the factors that influence it.
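
As an aside on the arithmetic: the 13.1% of cases improved by advice, less the 5.2% automation-bias rate, yields the quoted net gain of roughly 8 percentage points. The short Python sketch below illustrates how these per-case measures could be computed; the data and variable names are hypothetical illustrations, not the study's materials.

# Minimal sketch of the measures described above (hypothetical data):
# each record is (correct_before_advice, correct_after_advice) for one case.
cases = [
    (True, True),
    (True, False),   # automation bias: correct -> incorrect after advice
    (False, True),   # advice corrected an initially wrong prescription
    (False, True),
    (False, False),
    (True, True),
]

n = len(cases)
switched = sum(before != after for before, after in cases)   # any pre/post change
bias = sum(before and not after for before, after in cases)  # correct -> incorrect
improved = sum(after and not before for before, after in cases)

print(f"decision switch rate: {switched / n:.1%}")
print(f"automation-bias rate: {bias / n:.1%}")
print(f"net accuracy change:  {(improved - bias) / n:+.1%}")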


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2015

Time to decide? Simplicity and congruity in comparative judgment

Caren A. Frosch; Rachel McCloy; C. Philip Beaman; Kate Goddard

What is the relationship between magnitude judgments relying on directly available characteristics versus probabilistic cues? Question frame was manipulated in a comparative judgment task previously assumed to involve inference across a probabilistic mental model (e.g., “Which city is largest?”, the “larger” question, vs. “Which city is smallest?”, the “smaller” question). Participants identified either the largest or smallest city (Experiments 1a and 2) or the richest or poorest person (Experiment 1b) in a 3-alternative forced-choice (3-AFC) task (Experiment 1) or a 2-AFC task (Experiment 2). Response times revealed an interaction between question frame and the number of options recognized. When participants were asked the smaller question, response times were shorter when none of the options were recognized; the opposite pattern was found for the larger question, where response times were shorter when all options were recognized. These task–stimuli congruity results in judgment under uncertainty are consistent with, and predicted by, theories of magnitude comparison, which make use of deductive inferences from declarative knowledge.


Electronic Healthcare | 2008

NHS Blood Tracking Pilot: City University Evaluation Project

Kate Goddard; Omid Shabestari; Juan Adriano; Jonathan Kay; Abdul V. Roudsari

Automation of healthcare processes is an emergent theme in the drive to increase patient safety. Mayday Hospital has been chosen as the pilot site for the implementation of the Electronic Clinical Transfusion Management System, which tracks blood from the point of ordering to the final transfusion. The Centre for Health Informatics at City University is carrying out an independent evaluation of the system implementation, using a variety of methodologies both to formatively inform the implementation process and to summatively provide an account of the lessons learned for future implementations.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2010

Fast and Frugal Framing Effects

Rachel McCloy; Charles Philip Beaman; Caren A. Frosch; Kate Goddard


Studies in health technology and informatics | 2011

Automation bias – a hidden issue for clinical decision support system use

Kate Goddard; Abdul V. Roudsari; Jeremy C. Wyatt


Proceedings of the Annual Meeting of the Cognitive Science Society | 2006

Rich and Famous: Recognition-based judgment in the Sunday Times Rich List

C. Philip Beaman; Kate Goddard; Rachel McCloy


Developmental Science | 2011

Tracking speakers' false beliefs: is theory of mind available earlier for word learning?

Carmel Houston-Price; Kate Goddard; Catherine Séclier; Sally C. Grant; Caitlin J.B. Reid; Laura E. Boyden; Rhiannon Williams


Studies in health technology and informatics | 2011

Decision support and automation bias: methodology and preliminary results of a systematic review.

Kate Goddard; Abdul V. Roudsari; Jeremy C. Wyatt


Studies in health technology and informatics | 2011

Evaluation of alert-based monitoring in a computerised blood transfusion management system.

Omid Shabestari; Philip Gooch; Kate Goddard; Kamran Golchin; Jonathan Kay; Abdul V. Roudsari

Collaboration


Dive into Kate Goddard's collaborations.

Top Co-Authors

Jeremy C. Wyatt

University of Southampton
