Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Fiore Martin is active.

Publications


Featured research published by Fiore Martin.


CoDesign | 2015

Designing with and for people living with visual impairments: audio-tactile mock-ups, audio diaries and participatory prototyping

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman; Fiore Martin

Methods used to engage users in the design process often rely on visual techniques, such as paper prototypes, to facilitate the expression and communication of design ideas. The visual nature of these tools makes them inaccessible to people living with visual impairments. In addition, while using visual means to express ideas for designing graphical interfaces is appropriate, it is harder to use them to articulate the design of non-visual displays. In this article, we present an approach to conducting participatory design with people living with visual impairments incorporating various techniques to help make the design process accessible. We reflect on the benefits and challenges that we encountered when employing these techniques in the context of designing cross-modal interactive tools.


Human Factors in Computing Systems | 2016

Tap the ShapeTones: Exploring the Effects of Crossmodal Congruence in an Audio-Visual Interface

Oussama Metatla; Nuno N. Correia; Fiore Martin; Nick Bryan-Kinns; Tony Stockman

There is growing interest in the application of crossmodal perception to interface design. However, most research has focused on task performance measures and often ignored user experience and engagement. We present an examination of crossmodal congruence in terms of performance and engagement in the context of a memory task of audio, visual, and audio-visual stimuli. Participants in a first study showed improved performance when using a visual congruent mapping that was cancelled by the addition of audio to the baseline conditions, and a subjective preference for the audio-visual stimulus that was not reflected in the objective data. Based on these findings, we designed an audio-visual memory game to examine the effects of crossmodal congruence on user experience and engagement. Results showed higher engagement levels with congruent displays with some reported preference for potential challenge and enjoyment that an incongruent display may support, particularly for increased task complexity.


Journal on Multimodal User Interfaces | 2016

Audio-haptic interfaces for digital audio workstations

Oussama Metatla; Fiore Martin; Adam Parkinson; Nick Bryan-Kinns; Tony Stockman; Atau Tanaka

We examine how auditory displays, sonification and haptic interaction design can support visually impaired sound engineers, musicians and audio production specialists in accessing digital audio workstations. We describe a user-centred approach that incorporates various participatory design techniques to help make the design process accessible to this population of users. We also outline the audio-haptic designs that result from this process and reflect on the benefits and challenges that we encountered when applying these techniques in the context of designing support for audio editing.


Association for Computing Machinery (ACM) | 2016

Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems

Oussama Metatla; Nuno N. Correia; Fiore Martin; Nick Bryan-Kinns; Tony Stockman


International Conference on Human-Computer Interaction | 2013

Activity Theory as a Tool for Identifying Design Patterns in Cross-modal Collaborative Interaction

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman; Fiore Martin

This paper examines the question of how to uncover patterns from the process of designing cross-modal collaborative systems. We describe how we use activity patterns as an approach to guide this process and discuss its potential as a practical method for developing design patterns.


Archive | 2012

Cross-modal collaborative interaction between visually-impaired and sighted users in the workplace

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman; Fiore Martin


BCS-HCI '12 Proceedings of the 26th Annual BCS Interaction Specialist Group Conference on People and Computers | 2012

Supporting cross-modal collaboration in the workplace

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman; Fiore Martin


International Conference on Human-Computer Interaction | 2014

Non-Visual Menu Navigation: the Effect of an Audio-Tactile Display

Oussama Metatla; Fiore Martin; Tony Stockman; Nick Bryan-Kinns


Archive | 2013

Collaborative Cross-modal Interfaces

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman; Fiore Martin


PeerJ | 2016

Sonification of reference markers for auditory graphs: Effects on non-visual point estimation tasks

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman; Fiore Martin

Collaboration


Dive into Fiore Martin's collaborations.

Top Co-Authors

Nick Bryan-Kinns

Queen Mary University of London

Oussama Metatla

Queen Mary University of London

Tony Stockman

Queen Mary University of London