Eve E. Hoggan
University of Glasgow
Publications
Featured research published by Eve E. Hoggan.
human factors in computing systems | 2008
Eve E. Hoggan; Stephen A. Brewster; Jody Johnston
This paper presents a study of finger-based text entry for mobile devices with touchscreens. Many devices are now coming to market that have no physical keyboards (the Apple iPhone being a very popular example). Touchscreen keyboards lack any tactile feedback and this may cause problems for entering text and phone numbers. We ran an experiment to compare devices with a physical keyboard, a standard touchscreen and a touchscreen with tactile feedback added. We tested this in both static and mobile environments. The results showed that the addition of tactile feedback to the touchscreen significantly improved finger-based text entry, bringing it close to the performance of a real physical keyboard. A second experiment showed that higher specification tactile actuators could improve performance even further. The results suggest that manufacturers should use tactile feedback in their touchscreen devices to regain some of the feeling lost when interacting on a touchscreen with a finger.
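As a rough illustration only (not the authors' implementation), the sketch below shows how a touchscreen keyboard could add the kind of tactile key-click the study tested: a short pulse on key press and a weaker one on release. The vibrate driver call is a hypothetical stand-in for whatever actuator API the device exposes.

```python
# Minimal sketch (assumed API, not the paper's system): give a touchscreen
# keyboard a tactile "click" so the finger gets some of the confirmation a
# physical key would provide.

def vibrate(duration_ms: int, intensity: float) -> None:
    """Hypothetical actuator driver call: pulse the vibrotactile actuator."""
    print(f"vibrate {duration_ms} ms at intensity {intensity:.1f}")

def on_key_down(key: str) -> None:
    # A crisp, short pulse simulates the press of a physical key.
    vibrate(duration_ms=20, intensity=0.8)

def on_key_up(key: str) -> None:
    # A weaker pulse on release completes the click sensation.
    vibrate(duration_ms=10, intensity=0.4)

if __name__ == "__main__":
    for key in "hi":
        on_key_down(key)
        on_key_up(key)
```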
international conference on haptic and audio interaction design | 2007
Eve E. Hoggan; Sohail Anwar; Stephen A. Brewster
The potential of using the sense of touch to communicate information in mobile devices is receiving more attention because of the limitations of graphical displays in such situations. However, most applications only use a single actuator to present vibrotactile information. In an effort to create richer tactile feedback and mobile applications that make use of the entire hand and multiple fingers as opposed to a single fingertip, this paper presents the results of two experiments investigating the perception and application of multiactuator tactile displays situated on a mobile device. The results of these experiments show that an identification rate of over 87% can be achieved when two dimensions of information are encoded in Tactons using rhythm and location. They also show that location produces 100% recognition rates when using actuators situated on the mobile device at the lower thumb, upper thumb, index finger and ring finger. This work demonstrates that it is possible to communicate information through four locations using multiple actuators situated on a mobile device when non-visual information is required.
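To make the two-dimensional encoding concrete, here is a minimal sketch (assumed details, not code from the paper) that builds a Tacton set by crossing a rhythm dimension with an actuator-location dimension. The rhythm patterns and the play_pulse stub are hypothetical; the four locations are those named in the abstract.

```python
from itertools import product

# Hypothetical rhythm patterns: each is a list of (on_ms, off_ms) pulse pairs.
RHYTHMS = {
    "short-short": [(100, 100), (100, 100)],
    "long":        [(400, 100)],
    "short-long":  [(100, 100), (400, 100)],
}

# Actuator locations on the device, as reported in the study.
LOCATIONS = ["lower thumb", "upper thumb", "index finger", "ring finger"]

def play_pulse(location: str, on_ms: int, off_ms: int) -> None:
    """Hypothetical driver call: pulse the actuator at `location`."""
    print(f"  {location}: on {on_ms} ms, off {off_ms} ms")

def play_tacton(rhythm_name: str, location: str) -> None:
    """Present a two-dimensional Tacton: rhythm encodes one piece of
    information, actuator location encodes the other."""
    print(f"Tacton ({rhythm_name!r} @ {location!r})")
    for on_ms, off_ms in RHYTHMS[rhythm_name]:
        play_pulse(location, on_ms, off_ms)

if __name__ == "__main__":
    # The full Tacton set is the cross product of the two dimensions.
    for rhythm_name, location in product(RHYTHMS, LOCATIONS):
        play_tacton(rhythm_name, location)
```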
international conference on multimodal interfaces | 2007
Eve E. Hoggan; Stephen A. Brewster
This paper reports an experiment into the design of crossmodal icons which can provide an alternative form of output for mobile devices using audio and tactile modalities to communicate information. A complete set of crossmodal icons was created by encoding three dimensions of information in three crossmodal auditory/tactile parameters. Earcons were used for the audio and Tactons for the tactile crossmodal icons. The experiment investigated absolute identification of audio and tactile crossmodal icons when a user is trained in one modality and tested in the other (and given no training in the other modality) to see if knowledge could be transferred between modalities. We also compared performance when users were static and mobile to see any effects that mobility might have on recognition of the cues. The results showed that if participants were trained in sound with Earcons and then tested with the same messages presented via Tactons they could recognize 85% of messages when stationary and 76% when mobile. When trained with Tactons and tested with Earcons participants could accurately recognize 76.5% of messages when stationary and 71% of messages when mobile. These results suggest that participants can recognize and understand a message in a different modality very effectively. These results will aid designers of mobile displays in creating effective crossmodal cues which require minimal training for users and can provide alternative presentation modalities through which information may be presented if the context requires.
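The sketch below illustrates the crossmodal-icon idea under assumed dimension names (message type, urgency, sender): the same abstract parameters are rendered either as an Earcon or as a Tacton, which is what allows training in one modality to transfer to the other. None of the specific mappings are taken from the paper.

```python
from dataclasses import dataclass

# Three abstract dimensions of information, each carried by a parameter that
# has both an auditory (Earcon) and a tactile (Tacton) realisation.
# The dimension names and parameter choices here are illustrative only.

@dataclass(frozen=True)
class CrossmodalIcon:
    msg_type: str   # e.g. "text message" / "voicemail" -> rhythm
    urgency: str    # e.g. "low" / "high"               -> texture
    sender: str     # e.g. "work" / "personal"          -> spatial location

    def as_earcon(self) -> str:
        # Render the abstract parameters with audio: rhythm, timbre, pan.
        return (f"Earcon[rhythm={self.msg_type}, timbre={self.urgency}, "
                f"pan={self.sender}]")

    def as_tacton(self) -> str:
        # Render the same parameters with vibration: rhythm, roughness, actuator.
        return (f"Tacton[rhythm={self.msg_type}, roughness={self.urgency}, "
                f"actuator={self.sender}]")

if __name__ == "__main__":
    icon = CrossmodalIcon(msg_type="text message", urgency="high", sender="work")
    # Because both renderings share the same abstract structure, a user trained
    # on one modality can, in principle, decode the other.
    print(icon.as_earcon())
    print(icon.as_tacton())
```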
human factors in computing systems | 2009
Eve E. Hoggan; Andrew Crossan; Stephen A. Brewster; Topi Kaaresoja
When designing interfaces for mobile devices it is important to take into account the variety of contexts of use. We present a study that examines how changing noise and disturbance in the environment affects user performance in a touchscreen typing task with the interface being presented through visual only, visual and tactile, or visual and audio feedback. The aim of the study is to show at what exact environmental levels audio or tactile feedback become ineffective. The results show significant decreases in performance for audio feedback at levels of 94 dB and above, as well as decreases in performance for tactile feedback at vibration levels of 9.18 g/s. These results suggest that at these levels, feedback should be presented by a different modality. These findings will allow designers to take advantage of sensor-enabled mobile devices to adapt the provided feedback to the user's current context.
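A minimal sketch of the kind of context-adaptive feedback selection these findings point toward: read (placeholder) ambient noise and vibration measurements and avoid a modality once its masking level passes the thresholds reported above. The function names and decision logic are assumptions; only the two threshold values come from the abstract.

```python
# Pick the non-visual feedback modality whose environmental masker is below
# the level at which performance was observed to drop.

AUDIO_NOISE_LIMIT_DB = 94.0        # audio feedback degraded at >= 94 dB
TACTILE_VIBRATION_LIMIT_GS = 9.18  # tactile feedback degraded at >= 9.18 g/s

def choose_feedback(ambient_noise_db: float, ambient_vibration_gs: float) -> str:
    """Return which non-visual feedback to add to the visual display."""
    audio_ok = ambient_noise_db < AUDIO_NOISE_LIMIT_DB
    tactile_ok = ambient_vibration_gs < TACTILE_VIBRATION_LIMIT_GS
    if audio_ok and tactile_ok:
        return "audio+tactile"
    if tactile_ok:
        return "tactile"      # too noisy for audio (e.g. a loud street)
    if audio_ok:
        return "audio"        # too much vibration for tactile (e.g. on a bus)
    return "visual only"      # both non-visual channels are masked

if __name__ == "__main__":
    print(choose_feedback(ambient_noise_db=70.0, ambient_vibration_gs=2.0))
    print(choose_feedback(ambient_noise_db=100.0, ambient_vibration_gs=2.0))
    print(choose_feedback(ambient_noise_db=100.0, ambient_vibration_gs=12.0))
```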
human factors in computing systems | 2007
Eve E. Hoggan; Stephen A. Brewster
Tactons (tactile icons) are structured vibrotactile messages which can be used for non-visual information presentation. Information can be encoded in a set of Tactons by manipulating parameters available in the tactile domain. One limitation is the number of available usable parameters and research is ongoing to find further effective ones. This paper reports an experiment investigating different techniques (amplitude modulation, frequency, and waveform) for creating texture as a parameter for use in Tacton design. The results of this experiment show recognition rates of 94% for waveform, 81% for frequency, and 61% for amplitude modulation, indicating that a more effective way to create Tactons using the texture parameter is to employ different waveforms to represent roughness. These results will aid designers in creating more effective and usable Tactons.
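The sketch below illustrates the three texture techniques being compared, generating a vibrotactile signal with either a different waveform shape, a different carrier frequency, or amplitude modulation. The carrier and modulation frequencies are typical vibrotactile values chosen for illustration, not the paper's settings.

```python
import math

SAMPLE_RATE = 8000  # samples per second, illustrative

def waveform_texture(t: float, shape: str, freq: float = 250.0) -> float:
    """Roughness via waveform shape: sine feels smooth, square/sawtooth rough."""
    phase = (t * freq) % 1.0
    if shape == "sine":
        return math.sin(2 * math.pi * phase)
    if shape == "square":
        return 1.0 if phase < 0.5 else -1.0
    if shape == "sawtooth":
        return 2.0 * phase - 1.0
    raise ValueError(shape)

def amplitude_modulated(t: float, carrier: float = 250.0, mod: float = 40.0) -> float:
    """Roughness via amplitude modulation of a sine carrier."""
    return math.sin(2 * math.pi * carrier * t) * (0.5 + 0.5 * math.sin(2 * math.pi * mod * t))

def frequency_texture(t: float, freq: float) -> float:
    """Roughness via carrier frequency alone (a lower frequency feels rougher)."""
    return math.sin(2 * math.pi * freq * t)

if __name__ == "__main__":
    ts = [n / SAMPLE_RATE for n in range(5)]
    print([round(waveform_texture(t, "square"), 2) for t in ts])
    print([round(amplitude_modulated(t), 2) for t in ts])
    print([round(frequency_texture(t, 150.0), 2) for t in ts])
```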
international conference on multimodal interfaces | 2008
Eve E. Hoggan; Topi Kaaresoja; Pauli Laitinen; Stephen A. Brewster
Our research considers the following question: how can visual, audio and tactile feedback be combined in a congruent manner for use with touchscreen graphical widgets? For example, if a touchscreen display presents different styles of visual buttons, what should each of those buttons feel and sound like? This paper presents the results of an experiment conducted to investigate methods of congruently combining visual and combined audio/tactile feedback by manipulating the different parameters of each modality. The results indicate trends in which individual visual parameters such as shape, size and height are combined congruently with audio/tactile parameters such as texture, duration and different actuator technologies. We draw further on the experiment results, using individual quality ratings to evaluate the perceived quality of our touchscreen buttons, and reveal a correlation between perceived quality and crossmodal congruence. The results of this research will enable mobile touchscreen UI designers to create realistic, congruent buttons by selecting the most appropriate audio and tactile counterparts of visual button styles.
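As an illustration of the congruence idea (with invented pairings, not the study's findings), a designer could keep a lookup from each visual button parameter to its audio and tactile counterparts and compose a button's feedback from it:

```python
# Hypothetical congruence table: each visual button parameter is paired with
# an audio and a tactile counterpart. The specific pairings are illustrative.

CONGRUENCE = {
    # visual parameter : (audio counterpart, tactile counterpart)
    "shape":  ("timbre",          "vibration texture"),
    "size":   ("sound duration",  "pulse duration"),
    "height": ("pitch",           "actuator technology / intensity"),
}

def feedback_for_button(shape: str, size: str, height: str) -> dict:
    """Compose audio/tactile feedback parameters for one visual button style."""
    visual = [("shape", shape), ("size", size), ("height", height)]
    return {
        "audio":   {CONGRUENCE[k][0]: v for k, v in visual},
        "tactile": {CONGRUENCE[k][1]: v for k, v in visual},
    }

if __name__ == "__main__":
    from pprint import pprint
    pprint(feedback_for_button(shape="rounded", size="large", height="raised"))
```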
international conference on multimodal interfaces | 2009
Eve E. Hoggan; Roope Raisamo; Stephen A. Brewster
We report the results of a study focusing on the meanings that can be conveyed by audio and tactile icons. Our research considers the following question: how can audio and tactile icons be designed to optimise congruence between crossmodal feedback and the type of information this feedback is intended to convey? For example, if we have a set of system warnings, confirmations, progress updates and errors: what audio and tactile representations best match the information or type of message? Is one modality more appropriate at presenting certain types of information than the other modality? The results of this study indicate that certain parameters of the audio and tactile modalities such as rhythm, texture and tempo play an important role in the creation of congruent sets of feedback when given a specific type of information to transmit. We argue that a combination of audio or tactile parameters derived from our results allows the same type of information to be derived through touch and sound with an intuitive match to the content of the message.
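A hypothetical sketch of the resulting design resource: a table mapping each information type to rhythm, texture and tempo settings that can be rendered as either an Earcon or a Tacton. The concrete values are invented for illustration, not taken from the study.

```python
MESSAGE_STYLES = {
    # information type: (rhythm,          texture,  tempo)
    "warning":          ("fast triplets", "rough",  "fast"),
    "error":            ("two long",      "rough",  "slow"),
    "confirmation":     ("single short",  "smooth", "fast"),
    "progress update":  ("repeating",     "smooth", "medium"),
}

def render(message_type: str, modality: str) -> str:
    """Describe the cue for `message_type` in the requested modality."""
    rhythm, texture, tempo = MESSAGE_STYLES[message_type]
    cue = "Earcon" if modality == "audio" else "Tacton"
    return f"{cue}: rhythm={rhythm}, texture={texture}, tempo={tempo}"

if __name__ == "__main__":
    for mtype in MESSAGE_STYLES:
        print(mtype, "->", render(mtype, "audio"))
        print(mtype, "->", render(mtype, "tactile"))
```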
human factors in computing systems | 2010
Eve E. Hoggan; Stephen A. Brewster
We report the results of an exploratory 8-day field study of CrossTrainer: a mobile game with crossmodal audio and tactile feedback. Our research focuses on the longitudinal effects on performance with audio and tactile feedback, the impact of context such as location and situation on performance, and personal modality preference. The results of this study indicate that crossmodal feedback can aid users in entering answers quickly and accurately using a variety of different widgets. Our study shows that there are times when audio is more appropriate than tactile and vice versa; for this reason, devices should support both tactile and audio feedback to cover the widest range of environments, user preferences, locations and tasks.
human factors in computing systems | 2006
Eve E. Hoggan; Stephen A. Brewster
This paper describes a novel form of display using crossmodal output. A crossmodal icon is an abstract icon that can be instantiated in one of two equivalent forms (auditory or tactile). These can be used in interfaces as a means of non-visual output. This paper discusses how crossmodal icons can be constructed and the potential benefits they bring to mobile human computer interfaces.
nordic conference on human-computer interaction | 2006
Eve E. Hoggan; Stephen A. Brewster
This paper describes an alternative form of interaction for mobile devices using crossmodal output. The aim of our work is to investigate the equivalence of audio and tactile displays so that the same messages can be presented in one form or another. Initial experiments show that spatial location can be perceived as equivalent in both the auditory and tactile modalities. Results show that participants are able to map presented 3D audio positions to tactile body positions on the waist most effectively when mobile, and that significantly more errors are made when using the ankle or wrist. This paper compares the results from both a static and a mobile experiment on crossmodal spatial location and outlines the most effective ways to use this crossmodal output in a mobile context.
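As a minimal sketch of the spatial equivalence being tested (assumed actuator layout, not the experimental setup), a 3D audio azimuth can be mapped to the nearest of a ring of actuators worn around the waist:

```python
# Eight waist-mounted actuators and their placement are assumptions made for
# illustration; the mapping simply picks the actuator nearest to the azimuth
# at which the 3D audio source was presented.

WAIST_ACTUATORS = {  # azimuth in degrees, clockwise from straight ahead
    0: "front", 45: "front-right", 90: "right", 135: "back-right",
    180: "back", 225: "back-left", 270: "left", 315: "front-left",
}

def audio_azimuth_to_actuator(azimuth_deg: float) -> str:
    """Pick the waist actuator closest to the 3D audio source direction."""
    azimuth_deg %= 360
    nearest = min(
        WAIST_ACTUATORS,
        key=lambda a: min(abs(azimuth_deg - a), 360 - abs(azimuth_deg - a)),
    )
    return WAIST_ACTUATORS[nearest]

if __name__ == "__main__":
    for az in (10, 100, 200, 350):
        print(f"audio at {az:3d} deg -> vibrate {audio_azimuth_to_actuator(az)} actuator")
```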