Publication


Featured research published by Oussama Metatla.


Conference on Computers and Accessibility | 2008

Constructing relational diagrams in audio: the multiple perspective hierarchical approach

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman

Although research on non-visual access to visually represented information is steadily growing, very little work has investigated how such forms of representation could be constructed through non-visual means. We discuss in this paper our approach for providing audio access to relational diagrams using multiple perspective hierarchies, and describe the design of two interaction strategies for constructing and manipulating such diagrams through this approach. A comparative study that we conducted with sighted users showed that a non-guided strategy allowed for significantly faster interaction times, and that both strategies supported similar levels of diagram comprehension. Overall, the reported study revealed that using multiple perspective hierarchies to structure the information encoded in a relational diagram enabled users to construct and manipulate such information through an audio-only interface, and that combining aspects from the guided and the non-guided strategies could support greater usability.


CoDesign | 2015

Designing with and for people living with visual impairments: audio-tactile mock-ups, audio diaries and participatory prototyping

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman; Fiore Martin

Methods used to engage users in the design process often rely on visual techniques, such as paper prototypes, to facilitate the expression and communication of design ideas. The visual nature of these tools makes them inaccessible to people living with visual impairments. In addition, while using visual means to express ideas for designing graphical interfaces is appropriate, it is harder to use them to articulate the design of non-visual displays. In this article, we present an approach to conducting participatory design with people living with visual impairments incorporating various techniques to help make the design process accessible. We reflect on the benefits and challenges that we encountered when employing these techniques in the context of designing cross-modal interactive tools.


Journal on Multimodal User Interfaces | 2012

Interactive hierarchy-based auditory displays for accessing and manipulating relational diagrams

Oussama Metatla; Nick Bryan-Kinns; Tony Stockman

An approach to designing hierarchy-based auditory displays that supports non-visual interaction with relational diagrams is presented. The approach is motivated by an analysis of the functional and structural properties of relational diagrams in terms of their role as external representations. This analysis informs the design of a multiple perspective hierarchy-based model that captures modality independent features of a diagram when translating it into an audio accessible form. The paper outlines design lessons learnt from two user studies that were conducted to evaluate the proposed approach.


International Conference on Human-Computer Interaction | 2013

Participatory Design with Blind Users: A Scenario-Based Approach

Nuzhah Gooda Sahib; Tony Stockman; Anastasios Tombros; Oussama Metatla

Throughout the design process, designers have to consider the needs of potential users. This is particularly important, but also harder, when the designers interact with the artefact to-be-designed using different senses or devices than the users, for example, when sighted designers are designing an artefact for use by blind users. In such cases, designers have to ensure that the methods used to engage users in the design process and to communicate design ideas are accessible. In this paper, we describe a participatory approach with blind users based on the use of a scenario and the use of dialogue-simulated interaction during the development of a search interface. We achieved user engagement in two ways: firstly, we involved a blind user with knowledge of assistive technologies in the design team, and secondly, we used a scenario as the basis of a dialogue between the designers and blind users to simulate interaction with the proposed search interface. Through this approach, we were able to verify requirements for the proposed search interface, and blind searchers were able to provide formative feedback, to critique design plans and to propose new design ideas based on their experience and expertise with assistive technologies. In this paper, we describe the proposed scenario-based approach and examine the types of feedback gathered from its evaluation with blind users. We also critically reflect on the benefits and limitations of the approach, and discuss practical considerations in its application.


Human Factors in Computing Systems | 2016

Tap the ShapeTones: Exploring the Effects of Crossmodal Congruence in an Audio-Visual Interface

Oussama Metatla; Nuno N. Correia; Fiore Martin; Nick Bryan-Kinns; Tony Stockman

There is growing interest in the application of crossmodal perception to interface design. However, most research has focused on task performance measures and often ignored user experience and engagement. We present an examination of crossmodal congruence in terms of performance and engagement in the context of a memory task of audio, visual, and audio-visual stimuli. Participants in a first study showed improved performance when using a visual congruent mapping that was cancelled by the addition of audio to the baseline conditions, and a subjective preference for the audio-visual stimulus that was not reflected in the objective data. Based on these findings, we designed an audio-visual memory game to examine the effects of crossmodal congruence on user experience and engagement. Results showed higher engagement levels with congruent displays with some reported preference for potential challenge and enjoyment that an incongruent display may support, particularly for increased task complexity.


Journal on Multimodal User Interfaces | 2016

Audio-haptic interfaces for digital audio workstations

Oussama Metatla; Fiore Martin; Adam Parkinson; Nick Bryan-Kinns; Tony Stockman; Atau Tanaka

We examine how auditory displays, sonification and haptic interaction design can support visually impaired sound engineers, musicians and audio production specialists in accessing digital audio workstations. We describe a user-centred approach that incorporates various participatory design techniques to help make the design process accessible to this population of users. We also outline the audio-haptic designs that result from this process and reflect on the benefits and challenges that we encountered when applying these techniques in the context of designing support for audio editing.


Science Advances | 2018

Sampling molecular conformations and dynamics in a multiuser virtual reality framework

Michael O’Connor; Helen M. Deeks; Edward Dawn; Oussama Metatla; Anne Roudaut; Matthew Sutton; Lisa May Thomas; Becca Rose Glowacki; Rebecca Sage; Philip Tew; Mark Wonnacott; Phil Bates; Adrian J. Mulholland; David R. Glowacki

VR combined with cloud computing enables surgical manipulation of real-time molecular simulations, accelerating 3D research tasks. We describe a framework for interactive molecular dynamics in a multiuser virtual reality (VR) environment, combining rigorous cloud-mounted atomistic physics simulations with commodity VR hardware, which we have made accessible to readers (see isci.itch.io/nsb-imd). It allows users to visualize and sample, with atomic-level precision, the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment. A series of controlled studies, in which participants were tasked with a range of molecular manipulation goals (threading methane through a nanotube, changing helical screw sense, and tying a protein knot), quantitatively demonstrate that users within the interactive VR environment can complete sophisticated molecular modeling tasks more quickly than they can using conventional interfaces, especially for molecular pathways and structural transitions whose conformational choreographies are intrinsically three-dimensional. This framework should accelerate progress in nanoscale molecular engineering areas including conformational mapping, drug development, synthetic biology, and catalyst design. More broadly, our findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education.


Interaction Design and Children | 2018

Multisensory storytelling: a co-design study with children with mixed visual abilities

Clare Cullen; Oussama Metatla

This paper presents the preliminary findings of a co-design study with children with mixed visual abilities to create a multisensory joint storytelling platform. Storytelling is a valuable way for children to express their imagination and creativity, and can be used as a tool for inclusive learning. Children with visual impairments are typically educated in mainstream schools, and often encounter barriers to learning, particularly in group settings. To address some of these issues, we have been working with a group of children with visual impairments, their Teaching Assistants (TAs), and sighted friends, to design and develop multisensory storytelling technologies. This paper presents the findings of the first five design sessions. We also present the outcomes and challenges of working with mixed-stakeholder, mixed visual-ability groups in participatory design.


Human Factors in Computing Systems | 2018

An Initial Investigation into Non-visual Code Structure Overview Through Speech, Non-speech and Spearcons

Joe Hutchinson; Oussama Metatla

We investigate a novel, non-visual approach to overviewing object-oriented source code, and evaluate the efficiency of different categories of sounds for the purpose of giving a visually impaired computer programmer an overview of source code structure. A user study with ten sighted and three non-sighted participants compared the effectiveness of speech, non-speech and spearcons on measures of accuracy and enjoyment for the task of quickly overviewing a class file. Results showed positive implications for the use of non-speech sounds in identifying programming constructs and for aesthetic value, although the effectiveness of the other sound categories in these measurements is not ruled out. Additionally, various design choices of the application impacted results, which should be of interest to designers working in auditory display, accessibility and education.


Human Factors in Computing Systems | 2018

Inclusive Education Technologies: Emerging Opportunities for People with Visual Impairments

Oussama Metatla; Marcos Serrano; Christophe Jouffrais; Anja Thieme; Shaun K. Kane; Stacy M. Branham; Emeline Brulé; Cynthia L. Bennett

Technology has become central to many activities of learning, ranging from its use in classroom education to work training, mastering a new hobby, or acquiring new skills of living. While digitally-enhanced learning tools can provide valuable access to information and personalised support, people with specific accessibility needs, such as low or no vision, can often be excluded from their use. This requires technology developers to build more inclusive designs and to offer learning experiences that can be shared by people with mixed-visual abilities. There is also scope to integrate DIY approaches and provide specialised teachers with the ability to design their own low cost educational tools, adapted to pedagogical objectives and to the variety of visual and cognitive abilities of their students. For researchers, this invites new challenges of how to best support technology adoption and its evaluation in often complex educational settings. This workshop seeks to bring together researchers and practitioners interested in accessibility and education to share best practices and lessons learnt for technology in this space; and to jointly discuss and develop future directions for the next generation design of inclusive and effective education technologies.

Collaboration

Top Co-Authors

Tony Stockman
Queen Mary University of London

Nick Bryan-Kinns
Queen Mary University of London

Fiore Martin
Queen Mary University of London

Lila Harrar
Queen Mary University of London

Anastasios Tombros
Queen Mary University of London