Int. J. Hum. Comput. Stud. | 2021

EyeTAP: Introducing a multimodal gaze-based technique using voice inputs with a comparative analysis of selection techniques

 
 
 

Abstract


One of the main challenges of gaze-based interaction is distinguishing normal eye function from deliberate interaction with the computer system, a problem commonly referred to as the ‘Midas touch’. In this paper we propose EyeTAP (Eye tracking point-and-select by Targeted Acoustic Pulse), a contact-free multimodal interaction method for point-and-select tasks. We evaluated the prototype in four user studies with 33 participants and found that EyeTAP is applicable in the presence of ambient noise, yields faster movement and task completion times, and imposes a lower cognitive workload than voice recognition. In addition, although EyeTAP did not generally outperform the dwell-time method, it did have a lower error rate than dwell-time in one of our experiments. Our study shows that EyeTAP would be useful for users whose physical movements are restricted or not possible due to a disability, or in scenarios where contact-free interactions are necessary. Furthermore, EyeTAP has no specific requirements in terms of user interface design and can therefore be easily integrated into existing systems.
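To make the point-and-select idea concrete, the following is a minimal illustrative sketch in Python of the general approach, not the authors' implementation: gaze supplies the pointer position, while a short acoustic pulse picked up by the microphone triggers selection. The amplitude threshold, the get_gaze_position() placeholder, and the use of the sounddevice and pyautogui libraries are all assumptions made for this sketch; the paper's actual pulse detection is presumably more selective and more robust to ambient noise.

    # Illustrative sketch only: a simple amplitude-threshold "acoustic pulse"
    # detector that triggers a selection at the current gaze position.
    # PULSE_THRESHOLD, get_gaze_position(), and the sounddevice/pyautogui
    # libraries are assumptions, not the paper's implementation.
    import numpy as np
    import sounddevice as sd
    import pyautogui

    PULSE_THRESHOLD = 0.5   # normalized amplitude; tune per microphone (assumed value)
    SAMPLE_RATE = 16000

    def get_gaze_position():
        """Placeholder for an eye-tracker API call returning screen coordinates."""
        return pyautogui.position()  # stand-in: real code would query the tracker

    def audio_callback(indata, frames, time, status):
        # indata is a float32 block in [-1, 1]; a short loud pulse
        # (e.g., a mouth click) pushes the peak amplitude over the threshold.
        # A real system would debounce so one pulse yields one selection.
        if np.abs(indata).max() > PULSE_THRESHOLD:
            x, y = get_gaze_position()
            pyautogui.click(x, y)   # select the target the user is looking at

    with sd.InputStream(channels=1, samplerate=SAMPLE_RATE,
                        callback=audio_callback):
        sd.sleep(60_000)  # listen for one minute

In a full system, a more selective detector (for example, one matching the temporal or spectral signature of a specific pulse rather than raw amplitude) would reduce false triggers from ambient noise, which is the robustness property the abstract claims for EyeTAP.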

Volume 154
Pages 102676
DOI 10.1016/j.ijhcs.2021.102676
Language English
Journal Int. J. Hum. Comput. Stud.
