Yosra Rekik
University of Lille
Publications
Featured research published by Yosra Rekik.
International Conference on Human-Computer Interaction | 2013
Yosra Rekik; Laurent Grisoni; Nicolas Roussel
Multi-touch gestures are often conceived by application designers as a one-to-one mapping between gestures and commands, which ignores the high variability of user gestures for actions in the physical world and can limit designs to very simplistic interaction choices. Our motivation is to take a step toward many-to-one mappings between user gestures and commands by understanding user gesture variability in multi-touch systems; to do so, we set up a user study targeting symbolic gestures on tabletops. From a first-phase study we provide a qualitative analysis of user gesture variability and derive from it a taxonomy of user gestures, which is discussed and compared to other existing taxonomies. We introduce the notion of atomic movement; such elementary atomic movements may be combined over time, either sequentially or in parallel, to structure a user gesture. A second-phase study is then performed with a specific class of gesture-drawn symbols; from this phase, and according to the proposed taxonomy, we evaluate user gesture variability with a fine-grained quantitative analysis. Our findings indicate that users use one and two hands equally often, and that more than half of gestures are produced using parallel or sequential combinations of atomic movements. We also show how user gestures distribute over the different movement categories and correlate with the number of fingers and hands engaged in interaction. Finally, we discuss the implications of this work for interaction design, practical consequences for gesture recognition, and potential applications.
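The notion of atomic movements combined over time can be made concrete with a small data structure. The sketch below is illustrative only (Python, not from the paper); the names AtomicMovement, Sequence, and Parallel are hypothetical.

# Illustrative sketch: a gesture modeled as atomic movements composed
# sequentially or in parallel over time (names are hypothetical).
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class AtomicMovement:
    hand: str      # "left" or "right"
    fingers: int   # number of fingers engaged
    stroke: str    # e.g. "line", "arc", "hold"

@dataclass
class Sequence:
    # atomic movements performed one after the other
    parts: List["GestureNode"] = field(default_factory=list)

@dataclass
class Parallel:
    # atomic movements performed at the same time (e.g. both hands at once)
    parts: List["GestureNode"] = field(default_factory=list)

GestureNode = Union[AtomicMovement, Sequence, Parallel]

# Example: both hands act in parallel, then a single-finger line follows.
gesture = Sequence(parts=[
    Parallel(parts=[AtomicMovement("left", 1, "hold"),
                    AtomicMovement("right", 1, "arc")]),
    AtomicMovement("right", 1, "line"),
])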
Human Factors in Computing Systems | 2017
Yosra Rekik; Eric Vezzoli; Laurent Grisoni; Frédéric Giraud
We investigate the relevance of surface haptic rendering techniques for tactile devices. We focus on the two major existing techniques and show that they have complementary benefits. The first one, called Surface Haptic Object (SHO), which is based on finger position, is shown to be more suitable for rendering sparse textures, while the second one, called Surface Haptic Texture (SHT), which is based on finger velocity, is shown to be more suitable for dense textures and fast finger movements. We hence propose a new rendering technique, called Localized Haptic Texture (LHT), which is based on the concept of a taxel, an elementary unit of tactile information rendered on the screen. By using a grid of taxels to encode a texture, LHT is shown to provide consistent tactile rendering across different velocities for high-density textures, and is found to reduce user error rate by up to 77.68% compared to SHO.
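As a rough illustration of the taxel idea, a texture can be stored as a grid of values and looked up from the current finger position. This is a minimal sketch under assumed units and names (e.g. TAXEL_SIZE_MM), not the rendering pipeline evaluated in the paper.

# Sketch: a texture encoded as a grid of taxels, each holding a friction level,
# indexed by the finger position on the screen (all names/units hypothetical).
import numpy as np

TAXEL_SIZE_MM = 2.0  # assumed taxel size

def make_striped_texture(width_mm, height_mm, period_mm):
    # alternating high/low friction stripes along the x axis
    cols = int(width_mm / TAXEL_SIZE_MM)
    rows = int(height_mm / TAXEL_SIZE_MM)
    xs = np.arange(cols) * TAXEL_SIZE_MM
    stripe = ((xs // period_mm) % 2).astype(float)
    return np.tile(stripe, (rows, 1))

def amplitude_at(grid, x_mm, y_mm):
    # look up the taxel under the finger; out-of-bounds positions return 0
    row, col = int(y_mm / TAXEL_SIZE_MM), int(x_mm / TAXEL_SIZE_MM)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        return float(grid[row, col])
    return 0.0

texture = make_striped_texture(width_mm=100, height_mm=60, period_mm=8)
print(amplitude_at(texture, x_mm=12.5, y_mm=30.0))  # value used to drive the actuator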
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016
Farzan Kalantari; Laurent Grisoni; Frédéric Giraud; Yosra Rekik
Tactile devices with ultrasonic vibrations (based on the squeeze-film effect) using piezoelectric actuators are one of the existing haptic feedback technologies. In this study we performed two psychophysical experiments on an ultrasonic haptic tablet in order to find the minimum size of a tactile element on which all users are able to perfectly identify different types of textures. Our results show that the spatial resolution of the tactile element on a haptic touchscreen actually varies depending on the number and type of tactile feedback information. A first experiment identifies three different tactile textures chosen as being easily recognized by users. We use these textures in a second experiment and evaluate the minimal spatial area on which the chosen set of textures can be recognized. Among other findings, we show that the minimal size depends on the nature of the texture.
Collaboration Meets Interactive Spaces | 2016
Yosra Rekik; Radu-Daniel Vatavu; Laurent Grisoni
The expressivity of hand movements is much greater than what current interaction techniques enable in touch-screen input. Especially for collaboration, hands are used to interact but also to express intentions, point to the physical space in which collaboration takes place, and communicate meaningful actions to collaborators. Multi-touch surfaces enable various types of interaction (single and both hands, single and multiple fingers, etc.), and standard approaches to tactile interactive systems usually fail to handle such complexity of expression. The diversity of multi-touch input also makes designing multi-touch gestures a difficult task. We believe that one cause for this design challenge is our limited understanding of variability in multi-touch gesture articulation, which affects users' opportunities to use gestures effectively in current multi-touch interfaces. A better understanding of multi-touch gesture variability can also lead to more robust designs that support different users' gesture preferences. In this chapter we present our results on multi-touch gesture variability. We are mainly concerned with understanding variability in multi-touch gesture articulation from a purely user-centric perspective. We present a comprehensive investigation of how users vary their multi-touch gestures even under unconstrained articulation conditions. We conducted two experiments from which we collected 6669 multi-touch gestures from 46 participants. We performed a qualitative analysis of user gesture variability to derive a taxonomy of users' multi-touch gestures that complements other existing taxonomies. We also provide a comprehensive analysis of the strategies employed by users to create different articulation variations for the same gesture type.
Intelligent User Interfaces | 2018
Hanaë Rateau; Yosra Rekik; Edward Lank; Laurent Grisoni
In mobile touchscreen interaction, an important challenge is to balance the size of individual widgets against the number of widgets needed during interaction. In this work, to address display space limitations, we explore the design of invisible off-screen toolbars (ether-toolbars) that leverage computer vision to expand application features by placing widgets adjacent to the display screen. We show how simple computer vision algorithms can be combined with the natural human ability to estimate physical placement to support highly accurate targeting. Our ether-toolbar design promises targeting accuracy approximating on-screen widget accuracy while significantly expanding the interaction space of mobile devices. Through two experiments, we examine off-screen content placement metaphors and the off-screen precision of participants accessing these toolbars. From the data of the second experiment, we provide and validate a basic model of how users perceive the mobile surroundings for ether-widgets. We also demonstrate a prototype system consisting of an inexpensive 3D-printed mount for a mirror that supports ether-toolbar implementations. Finally, we discuss the implications of our work and potential design extensions that can increase the usability and utility of ether-toolbars.
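The core targeting step, mapping a fingertip detected next to the device onto a slot of an invisible toolbar, can be sketched as follows. The geometry and names (EtherToolbar, slot_width_mm) are hypothetical and not the authors' implementation.

# Sketch: hit-test a fingertip position (from a side-facing camera/mirror)
# against an invisible toolbar lying beside the screen. Names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EtherToolbar:
    offset_mm: float        # gap between the screen edge and the first slot
    slot_width_mm: float    # width of each invisible widget
    commands: List[str]     # one command per slot

    def hit_test(self, x_mm: float) -> Optional[str]:
        # x_mm is the fingertip distance from the screen edge
        x = x_mm - self.offset_mm
        if x < 0:
            return None
        slot = int(x // self.slot_width_mm)
        return self.commands[slot] if slot < len(self.commands) else None

toolbar = EtherToolbar(offset_mm=10, slot_width_mm=25,
                       commands=["copy", "paste", "undo", "redo"])
print(toolbar.hit_test(48.0))  # second slot -> "paste"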
International Conference on Human-Computer Interaction | 2017
Orlando Erazo; Yosra Rekik; Laurent Grisoni; José A. Pino
Interfaces based on mid-air gestures often use a one-to-one mapping between gestures and commands, but most remain very basic. In practice, people exhibit inherent variations in their gesture articulations, because gestures depend both on the person producing them and on the specific context, social or cultural, in which they are produced. We advocate that allowing applications to map many gestures to one command is a key step to provide more flexibility, avoid penalizing users, and lead to better interaction experiences. Accordingly, this paper presents our results on mid-air gesture variability. We are mainly concerned with understanding variability in mid-air gesture articulation from a purely user-centric perspective. We describe a comprehensive investigation of how users vary the production of gestures under unconstrained articulation conditions. The user study consisted of two tasks. The first provides a model of how users conceive and produce gestures; from this task we also derive an embodied taxonomy of gestures. This taxonomy is used as a basis for the second task, in which we perform a fine-grained quantitative analysis of gesture articulation variability. Based on these results, we discuss implications for gesture interface design.
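At its simplest, a many-to-one mapping between gestures and commands is a vocabulary in which several gesture variants resolve to the same command; the recognizer itself is left out of this hypothetical sketch, and the labels below are made up for illustration.

# Sketch: several articulation variants of a gesture map to one command.
GESTURE_TO_COMMAND = {
    "circle_one_hand": "select_all",
    "circle_two_hands": "select_all",
    "circle_fast": "select_all",
    "swipe_left_one_finger": "previous_page",
    "swipe_left_whole_hand": "previous_page",
}

def resolve_command(recognized_label):
    # return the command for a recognized gesture label, or None if unknown
    return GESTURE_TO_COMMAND.get(recognized_label)

print(resolve_command("circle_two_hands"))       # -> "select_all"
print(resolve_command("swipe_left_whole_hand"))  # -> "previous_page"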
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016
Hanaë Rateau; Yosra Rekik; Laurent Grisoni; Joaquim A. Jorge
We present an interaction technique combining tactile actions and midair pointing to access out-of-reach content on large displays without the need to walk across the display. Users start with a touch gesture on the display surface and finish midair by pointing to push content away or, inversely, to retrieve content. The technique takes advantage of the well-known semantics of pointing in human-to-human interaction. These, coupled with the semantics of proximal relations and deictic proxemics, make the proposed technique very powerful, as it leverages well-understood human-human interaction modalities. Experimental results show this technique outperforms direct tactile interaction on dragging tasks. From our experience we derive four guidelines for interaction with large-scale displays.
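The hand-off between the touch phase and the midair pointing phase can be pictured as a small state machine; the event names below are hypothetical and only sketch the idea, not the study software.

# Sketch: a drag that starts with a touch on the surface and finishes with
# midair pointing to push (or retrieve) content on a large display.
class PushRetrieveInteraction:
    def __init__(self):
        self.state = "idle"
        self.grabbed = None

    def on_touch_down(self, content_id):
        # touching content on the display starts the interaction
        self.state, self.grabbed = "touching", content_id

    def on_lift_off(self):
        # lifting the finger continues the gesture in midair
        if self.state == "touching":
            self.state = "midair"

    def on_midair_point(self, target_xy):
        # pointing at a distant region pushes the grabbed content there
        if self.state == "midair" and self.grabbed is not None:
            print(f"move {self.grabbed} to {target_xy}")
            self.state, self.grabbed = "idle", None

ix = PushRetrieveInteraction()
ix.on_touch_down("photo_42")
ix.on_lift_off()
ix.on_midair_point((3200.0, 540.0))  # far-away coordinates on the display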
International Conference on Multimodal Interfaces | 2014
Yosra Rekik; Radu-Daniel Vatavu; Laurent Grisoni
Advanced Visual Interfaces | 2014
Yosra Rekik; Radu-Daniel Vatavu; Laurent Grisoni
IEEE Haptics Symposium | 2018
Farzan Kalantari; Edward Lank; Yosra Rekik; Laurent Grisoni; Frédéric Giraud