Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yolanda Vazquez-Alvarez is active.

Publication


Featured research published by Yolanda Vazquez-Alvarez.


Ubiquitous Computing | 2012

Auditory display design for exploration in mobile audio-augmented reality

Yolanda Vazquez-Alvarez; Ian Oakley; Stephen A. Brewster

In this paper, we compare four different auditory displays in a mobile audio-augmented reality environment (a sound garden). The auditory displays varied in their use of non-speech audio (Earcons) as auditory landmarks and of 3D audio spatialization, and the goal was to test the user experience of discovery in a purely exploratory environment that included multiple simultaneous sound sources. We present quantitative and qualitative results from an initial user study conducted in the Municipal Gardens of Funchal, Madeira. Results show that spatial audio together with Earcons allowed users to explore multiple simultaneous sources and had the added benefit of increasing the level of immersion in the experience. In addition, spatial audio encouraged a more exploratory and playful response to the environment. An analysis of the participants’ logged data suggested that the level of immersion can be related to increased instances of stopping and scanning the environment, which can be quantified in terms of walking speed and head movement.


Human Factors in Computing Systems | 2011

Eyes-free multitasking: the effect of cognitive load on mobile spatial audio interfaces

Yolanda Vazquez-Alvarez; Stephen A. Brewster

As mobile devices increase in functionality, users perform more tasks when on the move. Spatial audio interfaces offer a solution for eyes-free interaction. However, such interfaces face a number of challenges when supporting multiple and simultaneous tasks, namely: 1) interference amongst multiple audio streams, and 2) the constraints of cognitive load. We present a comparative study of spatial audio techniques evaluated in a divided- and selective-attention task. A podcast was used for high cognitive load (divided-attention) and classical music for low cognitive load (selective-attention), while interacting with an audio menu. Results showed that spatial audio techniques were preferred when cognitive load was kept low, while a baseline technique using an interruptible single audio stream was significantly less preferred. Conversely, when cognitive load was increased the preferences reversed. Thus, given an appropriate task structure, spatial techniques offer a means of designing effective audio interfaces to support eyes-free mobile multitasking.


International Conference on Multimodal Interfaces | 2011

The effect of clothing on thermal feedback perception

Martin Halvey; Graham A. Wilson; Yolanda Vazquez-Alvarez; Stephen A. Brewster; Stephen A. Hughes

Thermal feedback is a new area of research in HCI. To date, studies investigating thermal feedback for interaction have focused on virtual reality, abstract uses of thermal output or on use in highly controlled lab settings. This paper is one of the first to look at how environmental factors, in our case clothing, might affect user perception of thermal feedback and therefore usability of thermal feedback. We present a study into how well users perceive hot and cold stimuli on the hand, thigh and waist. Evaluations were carried out with cotton and nylon between the thermal stimulators and the skin. Results showed that the presence of clothing requires higher intensity thermal changes for detection but that these changes are more comfortable than direct stimulation on skin.


Nordic Conference on Human-Computer Interaction | 2012

Shaking the dead: multimodal location based experiences for un-stewarded archaeological sites

David K. McGookin; Yolanda Vazquez-Alvarez; Stephen A. Brewster; Joanna Bergstrom-Lehtovirta

We consider how visits to un-stewarded historical and archaeological sites - those that are unstaffed and have few visible archaeological remains - can be augmented with multimodal interaction to create more engaging experiences. We developed and evaluated a mobile application that allowed multimodal exploration of a rural Roman fort. Sixteen primary school children used the application to explore the fort. Issues, including the influence of visual remains, were identified and compared with findings from a second study with eight users at a separate site. From these, we determined key design implications around the importance of physical space, group work and interaction with the auditory data.


ACM Multimedia | 2014

Face-Based Automatic Personality Perception

Noura Al Moubayed; Yolanda Vazquez-Alvarez; Alex McKay; Alessandro Vinciarelli

Automatic Personality Perception is the task of automatically predicting the personality traits people attribute to others. This work presents experiments where such a task is performed by mapping facial appearance into the Big-Five personality traits, namely Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. The experiments are performed over the pictures of the FERET corpus, originally collected for biometrics purposes, for a total of 829 individuals. The results show that it is possible to automatically predict whether a person is perceived to be above or below the median with an accuracy close to 70 percent (depending on the trait).


Human Factors in Computing Systems | 2009

Investigating background & foreground interactions using spatial audio cues

Yolanda Vazquez-Alvarez; Stephen A. Brewster

Audio is a key feedback mechanism in eyes-free and mobile computer interaction. Spatial audio, which allows us to localize a sound source in a 3D space, can offer a means of altering focus between audio streams as well as increasing the richness and differentiation of audio cues. However, the implementation of spatial audio on mobile phones is a recent development. Therefore, a calibration of this new technology is a requirement for any further spatial audio research. In this paper we report an evaluation of the spatial audio capabilities supported on a Nokia N95 8GB mobile phone. Participants were able to significantly discriminate between five audio sources on the frontal horizontal plane. Results also highlighted possible subject variation caused by earedness and handedness. We then introduce the concept of audio minimization and describe work in progress using the Nokia N95's 3D audio capability to implement and evaluate audio minimization in an eyes-free mobile environment.


Human-Computer Interaction with Mobile Devices and Services | 2011

Can we work this out?: an evaluation of remote collaborative interaction in a mobile shared environment

Dari Trendafilov; Yolanda Vazquez-Alvarez; Saija Lemmelä; Roderick Murray-Smith

We describe a novel dynamic method for collaborative virtual environments designed for mobile devices and evaluated in a mobile context. Participants interacted in pairs remotely and through touch while walking in three different feedback conditions: 1) visual, 2) audio-tactile, 3) spatial audio-tactile. Results showed the visual baseline system provided higher shared awareness, efficiency and a strong learning effect. However, although very challenging, the eyes-free systems still offered the ability to build joint awareness in remote collaborative environments, particularly the spatial audio one. These results help us better understand the potential of different feedback mechanisms in the design of future mobile collaborative environments.


Human Factors in Computing Systems | 2016

e-Seesaw: A Tangible, Ludic, Parent-child, Awareness System

Yingze Sun; Matthew P. Aylett; Yolanda Vazquez-Alvarez

In modern China, the pace of life is becoming faster and working pressure is increasing, often putting strain on families and family interaction. Twenty-three pairs of working parents and their children were asked what they saw as their main communication challenges and how they currently used communication technology to stay in touch. The mobile phone was the dominant form of communication despite being poorly rated by children as a way of enhancing a sense of connection and love. Parents and children were presented with a series of design probes to investigate how current communication technology might be supported or enhanced with a tangible and playful awareness system. One of the designs, the e-Seesaw, was selected and evaluated in a lab and home setting. Participant reaction was positive, with the design provoking a novel perspective on remote parent-child interaction, allowing even very young children to both initiate and control communication.


ACM Transactions on Computer-Human Interaction | 2016

Designing Interactions with Multilevel Auditory Displays in Mobile Audio-Augmented Reality

Yolanda Vazquez-Alvarez; Matthew P. Aylett; Stephen A. Brewster; Rocio von Jungenfeld; Antti Virolainen

Auditory interfaces offer a solution to the problem of effective eyes-free mobile interactions. In this article, we investigate the use of multilevel auditory displays to enable eyes-free mobile interaction with indoor location-based information in non-guided audio-augmented environments. A top-level exocentric sonification layer advertises information in a gallery-like space. A secondary interactive layer is used to evaluate three different conditions that varied in the presentation (sequential versus simultaneous) and spatialisation (non-spatialised versus egocentric/exocentric spatialisation) of multiple auditory sources. Our findings show that (1) participants spent significantly more time interacting with spatialised displays; (2) using the same design for the primary and interactive secondary display (simultaneous exocentric) showed a negative impact on the user experience, an increase in workload and substantially increased participant movement; and (3) the other spatial interactive secondary display designs (simultaneous egocentric, sequential egocentric, and sequential exocentric) showed an increase in time spent stationary but no negative impact on the user experience, suggesting a more exploratory experience. A follow-up qualitative and quantitative analysis of user behaviour supports these conclusions. These results provide practical guidelines for designing effective eyes-free interactions for far richer auditory soundscapes.


Information Technology Interfaces | 2013

Evaluating speech synthesis in a mobile context: Audio presentation of Facebook, Twitter and RSS

Matthew P. Aylett; Yolanda Vazquez-Alvarez; Lynne Baillie

This paper presents an evaluation of a podcast service that aggregates data from Facebook, Twitter and RSS feeds, using speech synthesis. The service uses a novel approach to speech synthesis generation, where XML markup is used to control both the speech synthesis and the sound design of a resulting podcast. A two-phase evaluation was carried out: 1) participants listening to the podcasts on desktop computers, 2) participants listening to the podcasts while walking. Our findings show that participants preferred shorter podcasts with sound effects and background music, and were affected by the surrounding environmental noise. However, audio advertising which is part of the service did not have a significant negative effect. Another finding was that the advantage of using multiple voices for content segmentation may have been undermined by difficulties in listener adaptation. The work is part of a new approach to speech synthesis provision, where its style of rendition forms a part of the application design and it is evaluated within an application context.

Collaboration


Dive into Yolanda Vazquez-Alvarez's collaborations.

Top Co-Authors

Ian Oakley

Ulsan National Institute of Science and Technology

Martin Halvey

University of Strathclyde
