Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Marcos Serrano is active.

Publication


Featured research published by Marcos Serrano.


human factors in computing systems | 2008

The OpenInterface framework: a tool for multimodal interaction

Marcos Serrano; Laurence Nigay; Jean-Yves Lionel Lawson; Andrew Ramsay; Roderick Murray-Smith; Sebastian Denef

The area of multimodal interaction has expanded rapidly. However, the implementation of multimodal systems still remains a difficult task. Addressing this problem, we describe the OpenInterface (OI) framework, a component-based tool for rapidly developing multimodal input interfaces. The conceptual component model underlying OI includes both generic and tailored components. In addition, to enable rapid exploration of the multimodal design space for a given system, the framework capitalizes on past experience by including a large set of multimodal interaction techniques along with their specifications and documentation. In this work-in-progress report, we present the current state of the OI framework and two exploratory test-beds developed using the OpenInterface Interaction Development Environment.
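
A minimal sketch of what a component assembly in the spirit of OI might look like, mixing a generic component with a tailored one. The Component class, the wiring API and the event format are illustrative assumptions, not the framework's actual interface.

    # Hypothetical sketch of a component-based multimodal pipeline in the
    # spirit of the OI framework; classes, wiring and event format are
    # illustrative assumptions, not the actual OI API.

    class Component:
        """A processing unit with one transform and downstream sinks."""
        def __init__(self, name, transform):
            self.name = name
            self.transform = transform   # maps an input event to an output event
            self.sinks = []              # downstream components

        def connect(self, other):
            self.sinks.append(other)
            return other                 # allow chained wiring

        def receive(self, event):
            result = self.transform(event)
            if result is not None:
                for sink in self.sinks:
                    sink.receive(result)

    # Generic component: normalizes raw device coordinates (reusable anywhere).
    normalize = Component("normalize",
                          lambda e: {**e, "x": e["x"] / 1920, "y": e["y"] / 1080})
    # Tailored component: maps normalized input to an application-specific task.
    to_task = Component("to_task",
                        lambda e: f"pan_map(dx={e['x']:.2f}, dy={e['y']:.2f})")
    log = Component("log", lambda e: print(e))

    normalize.connect(to_task).connect(log)
    normalize.receive({"device": "mouse", "x": 960, "y": 540})
    # -> pan_map(dx=0.50, dy=0.50)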


human factors in computing systems | 2014

Exploring the use of hand-to-face input for interacting with head-worn displays

Marcos Serrano; Barrett Ens; Pourang Irani

We propose the use of Hand-to-Face input, a method to interact with head-worn displays (HWDs) that involves contact with the face. We explore Hand-to-Face interaction to find suitable techniques for common mobile tasks. We evaluate this form of interaction with document navigation tasks and examine its social acceptability. In a first study, users identify the cheek and forehead as the predominant areas for interaction and agree on gestures for tasks involving continuous input, such as document navigation. These results guide the design of several Hand-to-Face navigation techniques and reveal that gestures performed on the cheek are more efficient and less tiring than interactions directly on the HWD. Initial results on the social acceptability of Hand-to-Face input allow us to further refine our design choices, and reveal unforeseen findings: some gestures are considered culturally inappropriate, and gender plays a role in the selection of specific Hand-to-Face interactions. From our overall results, we provide a set of guidelines for developing effective Hand-to-Face interaction techniques.
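
Purely as an illustration of continuous Hand-to-Face input, here is a toy sketch that integrates cheek swipes into a document scroll offset; the event fields and the gain value are invented for this example and do not come from the paper.

    # Toy illustration: turning a stream of hypothetical cheek-swipe events
    # from a head-worn display into a continuous document scroll offset.
    # The event fields and gain are assumptions, not from the paper.

    def scroll_offset(events, gain=3.0):
        """Integrate vertical finger displacement on the cheek into scrolling."""
        offset = 0.0
        for e in events:
            if e["region"] == "cheek":     # the cheek was a predominant area
                offset += gain * e["dy"]   # continuous input -> continuous scroll
        return offset

    swipes = [{"region": "cheek", "dy": 0.4},
              {"region": "forehead", "dy": 0.1},
              {"region": "cheek", "dy": -0.2}]
    print(scroll_offset(swipes))           # ~0.6 -> scroll down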


designing interactive systems | 2012

Movement qualities as interaction modality

Sarah Fdili Alaoui; Baptiste Caramiaux; Marcos Serrano; Frédéric Bevilacqua

In this paper, we explore the use of movement qualities as an interaction modality. The notion of movement qualities is widely used in dance practice and can be understood as how a movement is performed, independently of its specific trajectory in space. We implemented our approach in the context of an artistic installation called A light touch. This installation invites the participant to interact with a moving light spot that reacts to the qualities of the hand's movement. We conducted a user experiment which showed that such an interaction based on movement qualities tends to enhance the user experience, favouring explorative and expressive usage.
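
As a rough illustration of the idea, the sketch below classifies a hand trajectory into one of two movement qualities from a simple kinematic feature and maps it to a light-spot behaviour; the feature, threshold and quality names are assumptions, not the installation's actual recognition process.

    # Toy sketch: recognizing a movement quality from hand positions and
    # mapping it to a light-spot behaviour. Feature, threshold and quality
    # names are illustrative assumptions.

    def movement_quality(positions, dt=1 / 60):
        """Classify a 1D hand trajectory by its kinetic energy."""
        velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
        energy = sum(v * v for v in velocities) / len(velocities)
        return "percussive" if energy > 1.0 else "sustained"

    def light_spot_reaction(quality):
        # How the installation's light spot might react to each quality.
        return {"sustained": "smooth follow", "percussive": "scatter"}[quality]

    slow_drift = [0.01 * i for i in range(60)]   # slow, even hand motion
    q = movement_quality(slow_drift)
    print(q, "->", light_spot_reaction(q))       # sustained -> smooth follow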


human-computer interaction with mobile devices and services | 2006

Multimodal interaction on mobile phones: development and evaluation using ACICARE

Marcos Serrano; Laurence Nigay; Rachel Demumieux; Jérôme Descos; Patrick Losquin

The development and the evaluation of multimodal interactive systems on mobile phones remain a difficult task. In this paper we address this problem by describing a component-based approach, called ACICARE, for developing and evaluating multimodal interfaces on mobile phones. ACICARE is dedicated to the overall iterative design process of mobile multimodal interfaces, which consists of cycles of designing, prototyping and evaluation. ACICARE combines two complementary tools: ICARE and ACIDU. ICARE is a component-based platform for rapidly developing multimodal interfaces. We adapted the ICARE components to run on mobile phones and connected them to ACIDU, a probe that gathers customer usage data on mobile phones. By reusing and assembling components, ACICARE enables the rapid development of multimodal interfaces as well as the automatic capture of multimodal usage for in-field evaluations. We illustrate ACICARE using our contact manager system, a multimodal system running on the SPV C500 mobile phone.
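
A minimal sketch of the combination ACICARE makes: interface components that also feed a usage probe, so in-field evaluation data is captured automatically. The UsageProbe class and record format are hypothetical stand-ins, not ACIDU's actual API.

    # Hypothetical sketch of ACICARE's combination: ICARE-style components
    # whose events are automatically logged by an ACIDU-style probe.
    import time

    class UsageProbe:
        """Collects timestamped modality-usage records for in-field evaluation."""
        def __init__(self):
            self.records = []

        def log(self, modality, task):
            self.records.append((time.time(), modality, task))

    class ModalityComponent:
        """An input component that reports every event it handles to the probe."""
        def __init__(self, modality, probe):
            self.modality, self.probe = modality, probe

        def handle(self, task):
            self.probe.log(self.modality, task)   # automatic capture of usage
            print(f"{self.modality}: {task}")

    probe = UsageProbe()
    speech = ModalityComponent("speech", probe)
    keypad = ModalityComponent("keypad", probe)
    speech.handle("open_contact('Alice')")        # as in a contact manager
    keypad.handle("scroll_contacts")
    print(len(probe.records), "usage records captured")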


human-computer interaction with mobile devices and services | 2014

Exploring smartphone-based interaction with overview+detail interfaces on 3D public displays

Louis-Pierre Bergé; Marcos Serrano; Gary Perelman; Emmanuel Dubois

As public displays integrate 3D content, Overview+Detail (O+D) interfaces on mobile devices will allow for personal 3D exploration of the public display. In this paper we study the properties of mobile-based interaction with O+D interfaces on 3D public displays. We evaluate three types of existing interaction techniques for the 3D translation of the Detail view: touchscreen input, mid-air movement of the mobile device (Mid-Air Phone) and mid-air movement of the hand around the device (Mid-Air Hand). In a first experiment, we compare the performance and user preference of these three types of techniques after prior training. In a second experiment, we study how well the two mid-air techniques perform without training or human help, to mimic the usual conditions of use in a public context. Results reveal that Mid-Air Phone and Mid-Air Hand perform best with training. However, without training or human help, Mid-Air Phone is more intuitive and performs better on the first trial. Interestingly, in both experiments users preferred Mid-Air Hand. We conclude with a discussion on the use of mobile devices to interact with public O+D interfaces.
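
To make the mid-air mapping concrete, here is an illustrative transfer function from tracked phone displacement to the 3D translation of the Detail view; the linear mapping and gain are assumptions, not the techniques evaluated in the paper.

    # Illustrative only: a linear transfer function from mid-air phone motion
    # to the 3D translation of the Detail view. The gain is an assumption.

    def update_detail_view(view_pos, phone_delta, gain=2.0):
        """Map a 3D displacement of the phone to a displacement of the view."""
        return tuple(p + gain * d for p, d in zip(view_pos, phone_delta))

    pos = (0.0, 0.0, 0.0)
    for delta in [(0.1, 0.0, 0.0), (0.0, 0.05, -0.02)]:   # tracked phone motion
        pos = update_detail_view(pos, delta)
    print(pos)   # -> (0.2, 0.1, -0.04)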


human factors in computing systems | 2016

Tangible Reels: Construction and Exploration of Tangible Maps by Visually Impaired Users

Julie Ducasse; Marc J.-M. Macé; Marcos Serrano; Christophe Jouffrais

Maps are essential in everyday life but inherently inaccessible to visually impaired users: they must be transcribed into non-editable tactile graphics or rendered on very expensive shape-changing displays. To tackle these issues, we developed a tangible tabletop interface that enables visually impaired users to build tangible maps on their own, using a new type of physical icon called Tangible Reels. Tangible Reels are composed of a sucker pad that ensures stability and a retractable reel that renders digital lines tangible. To construct a map, audio instructions guide the user in precisely placing Tangible Reels onto the table and creating links between them. During subsequent exploration, the device provides the names of the points and lines that the user touches. A pre-study confirmed that Tangible Reels are stable and easy to manipulate, and that visually impaired users can understand maps built with them. A follow-up experiment validated that the designed system, including its non-visual interactions, enables visually impaired participants to quickly build and explore maps of varying complexity.
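
A toy sketch of the audio-guidance step, computing a spoken instruction that moves the user's hand toward a reel's target position on the table; the instruction phrasing and tolerance are invented for this example.

    # Toy sketch of audio guidance for placing a Tangible Reel. Coordinates
    # are in metres; phrasing and tolerance are illustrative assumptions.
    import math

    def guidance(hand, target, tolerance=0.02):
        """Return a spoken instruction moving the hand toward the target."""
        dx, dy = target[0] - hand[0], target[1] - hand[1]
        if math.hypot(dx, dy) < tolerance:
            return "place the reel here"
        horizontal = "right" if dx > 0 else "left"
        vertical = "up" if dy > 0 else "down"
        return (f"move {abs(dx) * 100:.0f} cm {horizontal} "
                f"and {abs(dy) * 100:.0f} cm {vertical}")

    print(guidance(hand=(0.10, 0.20), target=(0.40, 0.15)))
    # -> move 30 cm right and 5 cm down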


human factors in computing systems | 2015

The Roly-Poly Mouse: Designing a Rolling Input Device Unifying 2D and 3D Interaction

Gary Perelman; Marcos Serrano; Mathieu Raynal; Celia Picard; Moustapha Derras; Emmanuel Dubois

We present the design and evaluation of the Roly-Poly Mouse (RPM), a rolling input device that combines the advantages of the mouse (position displacement) and of 3D devices (roll and rotation) to unify 2D and 3D interaction. Our first study explores RPM gesture amplitude and stability for different upper shapes (Hemispherical, Convex) and hand postures. Eight roll directions can be performed precisely, and their amplitude is larger on the Hemispherical RPM. As minor rolls affect translation, we propose a roll-correction algorithm to support stable 2D pointing with RPM. We propose the use of compound gestures for 3D pointing and docking, and evaluate them against a commercial 3D device, the SpaceMouse. Our studies reveal that RPM performs 31% faster than the SpaceMouse for 3D pointing and equivalently for 3D rotation. Finally, we present a proof-of-concept integrated RPM prototype, along with a discussion of the technical challenges to overcome to build a final integrated version of RPM.
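
The roll-correction idea can be sketched as subtracting the cursor displacement that a measured roll would itself induce; the linear model and coefficient below are simplifying assumptions, not the paper's actual algorithm.

    # Simplified sketch of roll correction for 2D pointing: remove the
    # translation component attributable to a parasitic roll. The linear
    # model and coefficient k are assumptions, not the paper's algorithm.

    def corrected_translation(raw_dx, raw_dy, roll_x, roll_y, k=0.8):
        """Subtract the displacement that the measured roll would induce."""
        return raw_dx - k * roll_x, raw_dy - k * roll_y

    # A mostly-translational gesture with a small parasitic roll:
    print(corrected_translation(10.0, 0.0, roll_x=0.5, roll_y=0.0))   # (9.6, 0.0)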


human factors in computing systems | 2011

From dance to touch: movement qualities for interaction design

Sarah Fdili Alaoui; Baptiste Caramiaux; Marcos Serrano

In this paper we address the question of extending the user experience on large-scale tactile displays. Our contribution is a non-task-oriented interaction technique based on modern dance for the creation of aesthetically pleasing large-scale tactile interfaces. This approach applies dance movement qualities to touch interaction, allowing for natural gestures on large touch displays. We used specific movements from a choreographic glossary and developed a robust movement-quality recognition process. To illustrate our approach, we propose a media installation called A light touch, where touch is used to control a light spot that reacts to movement qualities.
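
One plausible ingredient of such a recognition process is a smoothness feature computed over a touch stroke; the sketch below uses mean absolute jerk with an invented threshold, purely for illustration.

    # Illustrative smoothness feature for movement-quality recognition on a
    # touch stroke: mean absolute jerk. Threshold and labels are invented.

    def mean_abs_jerk(xs, dt=1 / 60):
        """Average magnitude of the third derivative of a 1D trajectory."""
        vel = [(b - a) / dt for a, b in zip(xs, xs[1:])]
        acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
        jerk = [abs(b - a) / dt for a, b in zip(acc, acc[1:])]
        return sum(jerk) / len(jerk)

    smooth = [i ** 2 / 400 for i in range(20)]     # smoothly accelerating stroke
    jerky = [0, 1, 0, 2, 0, 3, 0, 4, 0, 5] * 2    # erratic stroke
    for name, stroke in [("smooth", smooth), ("jerky", jerky)]:
        label = "fluid" if mean_abs_jerk(stroke) < 1e4 else "agitated"
        print(name, "->", label)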


international conference on multimodal interfaces | 2008

A three-dimensional characterization space of software components for rapidly developing multimodal interfaces

Marcos Serrano; David Juras; Laurence Nigay

In this paper we address the problem of the development of multimodal interfaces. We describe a three-dimensional characterization space for software components, along with its implementation in a component-based platform for rapidly developing multimodal interfaces. By graphically assembling components, the designer/developer describes the transformation chain from physical devices to tasks and vice versa. In this context, the key point is to identify generic components that can be reused across different multimodal applications. Nevertheless, for flexibility, a mixed approach that lets the designer use both generic and tailored components is required. As a consequence, our characterization space includes one axis dedicated to the reusability of a component. The other two axes respectively capture the role of the component in the data flow from devices to tasks and the component's level of specification. We illustrate the three-dimensional characterization space, as well as the tool built on it, using a multimodal map navigator.
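
The three axes could be encoded as a simple component descriptor used to index a catalog; the axis value sets below paraphrase the abstract, and the class itself is a hypothetical illustration rather than the platform's data model.

    # Hypothetical encoding of the three-dimensional characterization space;
    # axis values paraphrase the abstract, the class is illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ComponentDescriptor:
        name: str
        reusability: str    # axis 1: "generic" or "tailored"
        role: str           # axis 2: place in the data flow, e.g. "device",
                            #         "transformation" or "task"
        specification: str  # axis 3: level of specification, e.g. "abstract"
                            #         or "implemented"

    catalog = [
        ComponentDescriptor("speech-recognizer", "generic", "device", "implemented"),
        ComponentDescriptor("map-pan-task", "tailored", "task", "implemented"),
    ]

    # Retrieve every reusable building block for a new multimodal application:
    print([c.name for c in catalog if c.reusability == "generic"])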


international conference on multimodal interfaces | 2009

Temporal aspects of CARE-based multimodal fusion: from a fusion mechanism to composition components and WoZ components

Marcos Serrano; Laurence Nigay

The CARE properties (Complementarity, Assignment, Redundancy and Equivalence) define the various forms that multimodal input interaction can take. While Equivalence and Assignment express, respectively, the availability and the absence of choice between multiple input modalities for performing a given task, Complementarity and Redundancy describe relationships between modalities and require fusion mechanisms. In this paper we present a summary of the work we have carried out using the CARE properties for conceiving and implementing multimodal interaction, as well as a new approach using WoZ components. We present different technical solutions for implementing the Complementarity and Redundancy of modalities, with a focus on the temporal aspects of the fusion. Starting from a monolithic fusion mechanism, we then explain our component-based approach and its composition components (i.e., Redundancy and Complementarity components). As a new contribution, for exploring design solutions before implementing an adequate fusion mechanism and for tuning the temporal aspects of the performed fusion, we introduce Wizard of Oz (WoZ) fusion components. We illustrate the composition components, as well as the implemented tools exploiting them, using several multimodal systems, including a multimodal slide viewer and a multimodal map navigator.
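
A minimal sketch of a Redundancy composition component with a temporal window: two events that carry the same command on different modalities are fused into one task event only if they arrive close enough in time. The window length and event format are illustrative assumptions, not the implemented components.

    # Minimal sketch of CARE Redundancy fusion with a temporal window.
    # Window length and event format are illustrative assumptions.

    TEMPORAL_WINDOW = 0.3   # seconds within which events count as redundant

    def fuse_redundant(events):
        """events: time-sorted list of (timestamp, modality, command)."""
        fused, i = [], 0
        while i < len(events):
            t, modality, command = events[i]
            j = i + 1
            # Absorb redundant events from other modalities inside the window.
            while (j < len(events) and events[j][0] - t <= TEMPORAL_WINDOW
                   and events[j][2] == command and events[j][1] != modality):
                j += 1
            fused.append((t, command))   # one task event per redundant group
            i = j
        return fused

    stream = [(0.00, "speech", "zoom_in"), (0.12, "gesture", "zoom_in"),
              (1.50, "speech", "pan_left")]
    print(fuse_redundant(stream))   # [(0.0, 'zoom_in'), (1.5, 'pan_left')]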

Collaboration


Dive into Marcos Serrano's collaboration.

Top Co-Authors

Laurence Nigay

Joseph Fourier University


Barrett Ens

University of Manitoba
