Sandy J. J. Gould
University College London
Publications
Featured research published by Sandy J. J. Gould.
Journal of Experimental Psychology: Applied | 2013
Duncan P. Brumby; Anna L. Cox; Jonathan Back; Sandy J. J. Gould
Interruptions are disruptive because they take time to recover from, in the form of a resumption lag, and lead to an increase in the likelihood of errors being made. Despite an abundance of work investigating the effect of interruptions on routine task performance, little is known about whether there is a link between how quickly a task is resumed following an interruption (i.e., the duration of the postinterruption resumption lag) and the likelihood that an error is made. Two experiments are reported in which participants were interrupted by a cognitively demanding secondary mental arithmetic task while working on a routine sequential data-entry task. In Experiment 1 the time-cost of making an error on the primary task was varied between conditions. When errors were associated with a high time-cost penalty, participants made fewer errors and resumed the primary task more slowly than when errors were associated with a low time-cost penalty. In Experiment 2 participants were prohibited from resuming the primary task quickly by a 10-s system lockout period following the completion of the interrupting task. This lockout period led to a significant reduction in resumption errors because the lockout prohibited fast, inaccurate task resumptions. Taken together, our results suggest that longer resumption lags following an interruption are beneficial in terms of reducing the likelihood of errors being made. We discuss the practical implications of how systems might be designed to encourage more reflective task resumption behavior in situations where interruptions are commonplace.
International Journal of Human-Computer Studies | 2015
Christian P. Janssen; Sandy J. J. Gould; Simon Y. W. Li; Duncan P. Brumby; Anna L. Cox
Multitasking and interruptions have been studied using a variety of methods in multiple fields (e.g., HCI, cognitive science, computer science, and social sciences). This diversity brings many complementary insights. However, it also challenges researchers to understand how seemingly disparate ideas can best be integrated to further theory and to inform the design of interactive systems. There is therefore a need for a platform to discuss how different approaches to understanding multitasking and interruptions can be combined to provide insights that are more than the sum of their parts. In this article we argue for the necessity of an integrative approach. As part of this argument we provide an overview of articles in this special issue on multitasking and interruptions. These articles showcase the variety of methods currently used to study multitasking and interruptions. It is clear that there are many challenges to studying multitasking and interruptions from different perspectives and using different techniques. We advance a six-point research agenda for the future of multi-method research on this important and timely topic.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 149-153). Sage | 2013
Sandy J. J. Gould; Duncan P. Brumby; Anna L. Cox
Interruptions cause slower, more error-prone performance. Research suggests these disruptive effects are mitigated when interruptions are relevant to the task at hand. However, previous work has usually defined relevance as the degree of similarity between the content of interruptions and tasks. Using a lab-based experiment, we investigated the extent to which memory effects should be considered when assessing the relevance of an interruption. Participants performed a routine data-entry task during which they were interrupted. We found that when participants were interrupted between subtasks, reinforcement and interference effects meant that relevance had a significant effect on interruption disruptiveness. However, this effect was not observed when participants were interrupted within subtasks. These results suggest that interruption relevance is contingent on the contents of working memory during an interruption and that interruption management systems could be improved by modelling potential interfering and reinforcing effects of incoming interruptions.
Human Factors in Computing Systems | 2012
Sandy J. J. Gould; Duncan P. Brumby; Anna L. Cox; Victor M. Gonzalez; Dario D. Salvucci; Niels Taatgen
Within the CHI community there has been sustained interest in interruptions and multitasking behaviour. Research in the area falls into two broad categories: the micro world of perception and cognition; and the macro world of organisations, systems and long-term planning. Although both kinds of research have generated insights into behaviour, the data generated by the two kinds of research have been effectively incommensurable. Designing safer and more efficient interactions in interrupted and multitasking environments requires that researchers in the area attempt to bridge the gap between these worlds. This SIG aims to stimulate discussion of the tools and methods we need as a community in order to further our understanding of interruptions and multitasking.
Human Factors in Computing Systems | 2013
Sarah Wiseman; Anna L. Cox; Duncan P. Brumby; Sandy J. J. Gould; Sarah O'Carroll
Number entry is a common task in many domains. In safety-critical environments such as air traffic control or on hospital wards, incorrect number entry can have serious harmful consequences. Research has investigated how interface designs can help prevent users from making number entry errors. In this paper, we present an experimental evaluation of two possible interface designs aimed at helping users detect number entry errors using the idea of a checksum: an additional (redundant) number that is related to the to-be-entered numbers in such a way that it is sufficient to verify the correctness of the checksum, as opposed to checking each of the entered numbers. The first interface requires users to check their own work with the help of the checksum; the second requires the user to enter the checksum along with the other numbers so that the system can do the checking. In each case, two numbers needed to be entered, while the third number served as a checksum. With the first interface, users caught only 36% of their errors. The second interface resulted in all errors being caught, but the need to enter the checksum increased entry time by 46%. When participants were allowed to choose between the two interfaces, they chose the second interface in only 12% of the cases. Although these results cannot be generalized to other specific contexts, the results illustrate the strengths and weaknesses of each way of using checksums to catch number entry errors. Hence our study can serve as a starting point for efforts to improve each method.
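The abstract does not specify which checksum scheme was used. As a rough illustration only, the sketch below assumes a simple modulo-10 digit-sum checksum over the two entered numbers; it shows the verification logic behind the second interface, where the system checks an entered checksum against the other values.

```typescript
// Hypothetical sketch of checksum verification for two-number entry.
// Assumption: the checksum is the modulo-10 sum of all digits; the
// scheme actually used in the study is not described in the abstract.

function digitSum(n: number): number {
  return Math.abs(n)
    .toString()
    .split("")
    .reduce((acc, d) => acc + Number(d), 0);
}

// Compute the checksum for a pair of to-be-entered numbers.
function checksum(a: number, b: number): number {
  return (digitSum(a) + digitSum(b)) % 10;
}

// Second interface: the user enters the checksum alongside the two
// numbers, and the system verifies all three values together.
function entriesLookCorrect(a: number, b: number, entered: number): boolean {
  return checksum(a, b) === entered;
}

console.log(checksum(123, 456));               // (6 + 15) % 10 = 1
console.log(entriesLookCorrect(123, 456, 1));  // true
console.log(entriesLookCorrect(123, 457, 1));  // false: single-digit error caught
```

Note that a plain digit sum misses transposition errors (456 and 465 share a digit sum); practical schemes such as Luhn weight digit positions to catch these.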
Human-Computer Interaction with Mobile Devices and Services | 2016
Jacob M. Rigby; Duncan P. Brumby; Anna L. Cox; Sandy J. J. Gould
Film and television content is moving out of the living room and onto mobile devices - viewers are now watching when and where it suits them, on devices of differing sizes. This freedom is convenient, but could lead to differing experiences across devices. Larger screens are often believed to be favourable, e.g. to watch films or sporting events. This is partially supported in the literature, which shows that larger screens lead to greater presence and more intense physiological responses. However, a more broadly-defined measure of experience, such as that of immersion from computer games research, has not been studied. In this study, 19 participants watched content on three different screens and reported their immersion level via questionnaire. Results showed that the 4.5-inch phone screen elicited lower immersion scores when compared to the 13-inch laptop and 30-inch monitor, but there was no difference when comparing the two larger screens. This suggests that very small screens lead to reduced immersion, but after a certain size the effect is less pronounced.
Human Factors in Computing Systems | 2016
Sandy J. J. Gould; Anna L. Cox; Duncan P. Brumby; Alice Wickersham
Data-entry is a common activity that is usually performed accurately. When errors do occur though, people are poor at spotting them even if they are told to check their input. We considered whether making people pause for a brief moment before confirming their input would make them more likely to check it. We ran a lab experiment to test this idea. We found that task lockouts encouraged checking. Longer lockout durations made checking more likely. We ran a second experiment on a crowdsourcing platform to find out whether lockouts would still be effective in a less controlled setting. We discovered that longer lockouts induced workers to switch to other activities. This made the lockouts less effective. To be useful in practice, the duration of lockouts needs to be carefully calibrated. If lockouts are too brief they will not encourage checking. If they are too long they will induce switching.
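As a rough sketch of this kind of intervention, the code below disables a hypothetical confirm button for a fixed lockout period whenever the input changes; the element IDs and the 2.5-second duration are illustrative assumptions, not values from the study.

```typescript
// Hypothetical sketch of a confirmation lockout on a data-entry form.
// Element IDs and the lockout duration are illustrative assumptions.

const LOCKOUT_MS = 2500; // per the study, this needs careful calibration

function applyConfirmLockout(form: HTMLFormElement, confirm: HTMLButtonElement): void {
  let timer: number | undefined;
  form.addEventListener("input", () => {
    // Disable confirmation and (re)start the lockout on every change,
    // giving the user an enforced pause in which to check their entry.
    confirm.disabled = true;
    if (timer !== undefined) window.clearTimeout(timer);
    timer = window.setTimeout(() => {
      confirm.disabled = false;
    }, LOCKOUT_MS);
  });
}

const form = document.querySelector<HTMLFormElement>("#entry-form");
const confirmButton = document.querySelector<HTMLButtonElement>("#confirm-button");
if (form && confirmButton) {
  applyConfirmLockout(form, confirmButton);
}
```

The study's second experiment suggests the trade-off such an implementation must balance: too short a lockout fails to prompt checking, while too long a lockout invites switching to other activities.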
Human Factors in Computing Systems | 2015
Sandy J. J. Gould; Anna L. Cox; Duncan P. Brumby
Paid crowdsourcing has established itself as a useful way of getting work done. The availability of large, responsive pools of workers means that low quality work can often be treated as noise and dealt with through standard data processing techniques. This approach is not practical in all scenarios though, so efforts have been made to stop poor performance occurring by preventing satisficing behaviours that can compromise result quality. In this paper we test an intervention -- a task lockout -- designed to prevent satisficing behaviour in a simple data-entry task on Amazon Mechanical Turk. Our results show that workers are highly adaptable: when faced with the intervention they develop workaround strategies, allocating their time elsewhere during lockout periods. This suggests that more subtle techniques may be required to substantially influence worker behaviour.
ACM International Conference on Interactive Experiences for TV and Online Video | 2017
Jacob M. Rigby; Duncan P. Brumby; Sandy J. J. Gould; Anna L. Cox
Increasingly people interact with their mobile devices while watching television. We develop an understanding of this kind of everyday media multitasking behaviour through an analysis of video data. In our study, four households were recorded watching television over three evenings. We analysed 55 hours of footage in which participants were watching the TV. We found that mobile device habits were highly variable between participants during this time, ranging from 0% to 23% of the time that the TV was on. To help us understand this variability, participants completed the Media Multitasking Index (MMI) questionnaire. Results showed that participants with a higher MMI score used their mobile device more while watching TV at home. We also saw evidence that the TV was being used as a hub in the home: multiple people were often present while the TV was on, providing a background for other household activities. We argue that video analysis can give valuable insights into media multitasking in the home.
ACM Transactions on Computer-Human Interaction | 2016
Sandy J. J. Gould; Anna L. Cox; Duncan P. Brumby
Obtaining high-quality data from crowds can be difficult if contributors do not give tasks sufficient attention. Attention checks are often used to mitigate this problem, but, because the roots of inattention are poorly understood, checks often compel attentive contributors to complete unnecessary work. We investigated a potential source of inattentiveness during crowdwork: multitasking. We found that workers switched to other tasks every 5 minutes, on average. There were indications that increasing switch frequency negatively affected performance. To address this, we tested an intervention that encouraged workers to stay focused on our task after multitasking was detected. We found that our intervention reduced the frequency of task switching. It also improves on existing attention checks because it does not place additional demands on workers who are already focused. Our approach shows that crowds can help to overcome some of the limitations of laboratory studies by affording access to naturalistic multitasking behavior.
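The abstract does not say how multitasking was detected. For a browser-delivered task, one plausible approximation is the Page Visibility API, as in this hypothetical sketch, which counts switches away from the task page and marks where a refocusing prompt could be shown.

```typescript
// Hypothetical sketch: detecting switches away from a browser-based
// task using the Page Visibility API. The study's actual detection
// method is not described in the abstract.

let switchCount = 0;
let hiddenAt: number | undefined;

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    // The worker has switched away from the task tab or window.
    switchCount += 1;
    hiddenAt = Date.now();
  } else if (hiddenAt !== undefined) {
    const awayMs = Date.now() - hiddenAt;
    console.log(`Switch #${switchCount}: away for ${awayMs} ms`);
    // A lightweight intervention could show a refocusing prompt here,
    // rather than imposing extra work on already-focused workers.
  }
});
```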