
Publication


Featured research published by Ali Choumane.


IEEE Virtual Reality Conference | 2010

Buttonless clicking: Intuitive select and pick-release through gesture analysis

Ali Choumane; Géry Casiez; Laurent Grisoni

Clicking is a key feature any interaction input system needs to provide. In the case of 3D input devices, such a feature is often difficult to provide (e.g., vision-based or free-hand tracking systems do not natively provide any button). In this work, we show that it is possible to build an application that provides two classical interaction tasks, selection and pick-release, without any button-like feature. Our method is based on trajectory and kinematic gesture analysis. In a preliminary study we exhibit the principle of the method. We then detail an algorithm that discriminates selection, pick, and release tasks using kinematic criteria. We present a controlled experiment that validates our method with an average success rate of 90.1% across all conditions.
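
As a rough illustration of the kind of kinematic criterion the abstract describes, the sketch below labels the end of a 2D trajectory as a "select" when the hand dwells below a speed threshold. The names, the thresholds, and the dwell rule are assumptions made for the example, not the paper's actual algorithm.

```python
# Hypothetical sketch: button-less selection detected from kinematics alone.
# The speed threshold and dwell duration are illustrative values, not the
# criteria used in the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    t: float  # timestamp (s)
    x: float  # position (px)
    y: float

def speed(a: Sample, b: Sample) -> float:
    """Instantaneous speed between two consecutive samples (px/s)."""
    dt = b.t - a.t
    if dt <= 0:
        return 0.0
    return ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5 / dt

def classify_gesture(traj: List[Sample],
                     pause_speed: float = 20.0,
                     pause_duration: float = 0.3) -> str:
    """Label the end of a trajectory as 'select' (a dwell over a target)
    or 'move' (still travelling), using only kinematic information."""
    if len(traj) < 2:
        return "move"
    dwell = 0.0
    # Walk the trajectory backwards, accumulating the time spent below the
    # speed threshold; a long enough pause counts as a selection.
    for prev, cur in zip(reversed(traj[:-1]), reversed(traj[1:])):
        if speed(prev, cur) > pause_speed:
            break
        dwell += cur.t - prev.t
    return "select" if dwell >= pause_duration else "move"
```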


International Conference on Multimodal Interfaces | 2008

Knowledge and data flow architecture for reference processing in multimodal dialog systems

Ali Choumane; Jacques Siroux

This paper is concerned with the part of the system dedicated to processing the user's designation activities for multimodal information search. We highlight the necessity of using specific knowledge for multimodal input processing. We propose and describe the knowledge modeling as well as the associated processing architecture. The knowledge modeling covers the natural language and the visual context; it is adapted to the kind of application and allows several types of filtering of the inputs. Part of this knowledge is dynamically updated to take the interaction history into account. In the proposed architecture, each input modality is first processed using the modeled knowledge, producing intermediate structures. A fusion of these structures then determines the intended referent using the dynamic knowledge. The steps of this last process take into account the possible combinations of modalities as well as the clues carried by each modality (linguistic clues, gesture type). The development of this part of our system is largely complete and has been tested.
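
The following toy sketch mirrors the data flow described above: each modality is first interpreted against its own modeled knowledge, producing an intermediate structure, and a fusion step then picks the referent with help from dynamic knowledge (the interaction history). All class and function names here are invented for illustration and do not come from the paper.

```python
# Schematic, hypothetical data flow: per-modality processing followed by fusion.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Hypothesis:
    """Intermediate structure produced by one modality processor."""
    modality: str                 # e.g. "speech" or "gesture"
    candidates: Dict[str, float]  # object id -> confidence

@dataclass
class DialogueContext:
    """Dynamic knowledge, updated as the interaction proceeds."""
    recently_mentioned: List[str] = field(default_factory=list)

def process_speech(utterance: str, lexicon: Dict[str, str]) -> Hypothesis:
    """Filter the utterance with linguistic knowledge (here, a toy lexicon)."""
    cands = {obj: 1.0 for word, obj in lexicon.items() if word in utterance.lower()}
    return Hypothesis("speech", cands)

def process_gesture(pointed_at: Dict[str, float]) -> Hypothesis:
    """Gesture processor output: objects near the pointing location."""
    return Hypothesis("gesture", dict(pointed_at))

def fuse(hyps: List[Hypothesis], ctx: DialogueContext) -> Optional[str]:
    """Combine per-modality hypotheses; favour recently mentioned objects."""
    scores: Dict[str, float] = {}
    for h in hyps:
        for obj, conf in h.candidates.items():
            scores[obj] = scores.get(obj, 0.0) + conf
    for obj in ctx.recently_mentioned:  # bonus drawn from the interaction history
        if obj in scores:
            scores[obj] += 0.5
    return max(scores, key=scores.get) if scores else None

# Example: speech and an imprecise gesture jointly identify "lake_A".
ctx = DialogueContext(recently_mentioned=["lake_A"])
hyps = [process_speech("show hotels near the lake", {"lake": "lake_A"}),
        process_gesture({"lake_A": 0.7, "lake_B": 0.6})]
print(fuse(hyps, ctx))  # -> lake_A
```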


Proceedings of the 2007 Workshop on Multimodal Interfaces in Semantic Interaction | 2007

A model for multimodal representation and processing for reference resolution

Ali Choumane; Jacques Siroux

We present a model for dealing with the designation activities of a user in multimodal systems. This model associates a well-defined language with each modality (natural language, gesture, visual) as well as a mediator language. It takes into account several semantic features of the modalities. Functions link objects from each modality to the others, allowing reasoning and referent identification. Processing algorithms related to each language are developed.


Social Network Analysis and Mining | 2014

A semantic similarity-based social information retrieval model

Ali Choumane

Social networks include millions upon millions of users who share and access large volumes of information. Users of social networks usually specify skills, hobbies, and interests in their profiles, and these profiles are enriched as they interact. None of the existing social network sites allows impersonal search, i.e., searching for new contacts who have specified skills or interests. In this paper, we present a new approach to searching for people in social networks using pseudo-natural-language queries. We make two technical contributions: (1) we integrate a generic, semantic modeling of profiles into the search process; (2) we propose a new semantic query-profile matching function that extracts relevant profiles. We have implemented an experimental prototype to validate our approach; the results are encouraging.
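
A minimal sketch of what a semantic query-profile matching step could look like, assuming a toy concept-expansion table and a Jaccard-style overlap as the similarity measure; the paper's actual profile model and matching function are not reproduced here.

```python
# Hypothetical semantic matching of a pseudo-natural-language query against
# user profiles. The expansion table and similarity measure are placeholders.
from typing import Dict, List, Set

# Toy semantic relations: a term maps to related terms it should also match.
RELATED: Dict[str, Set[str]] = {
    "camera": {"photography", "photo"},
    "photography": {"camera", "photo"},
    "hiking": {"trekking", "mountains"},
}

def expand(terms: Set[str]) -> Set[str]:
    """Add semantically related terms so 'camera' can match 'photography'."""
    out = set(terms)
    for t in terms:
        out |= RELATED.get(t, set())
    return out

def match_score(query_terms: Set[str], profile_terms: Set[str]) -> float:
    """Jaccard overlap after semantic expansion (a stand-in similarity)."""
    q, p = expand(query_terms), expand(profile_terms)
    return len(q & p) / len(q | p) if q | p else 0.0

def search(query: str, profiles: Dict[str, Set[str]], k: int = 5) -> List[str]:
    """Rank profiles (user id -> declared skills/interests) against a query."""
    q = set(query.lower().split())
    return sorted(profiles, key=lambda u: match_score(q, profiles[u]), reverse=True)[:k]

# Impersonal search: find new contacts with a given interest.
users = {"alice": {"photography", "travel"}, "bob": {"hiking", "cooking"}}
print(search("looking for camera enthusiasts", users))  # -> ['alice', 'bob']
```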


International Conference on Human-Computer Interaction | 2009

Modeling and Using Salience in Multimodal Interaction Systems

Ali Choumane; Jacques Siroux

We are interested in input to human-machine multimodal interaction systems for geographical information search. In our context of study, the system offers the user the ability to use speech, gesture, and visual modes. The system displays a map on the screen, and the user asks the system about sites (hotels, campsites, ...) by specifying a place of search. Referenced places are objects in the visual context, such as cities, roads, and rivers. The system should determine the designated object to complete the understanding of the user's request. In this context, we aim to improve the reference resolution process while taking ambiguous designations into account. In this paper, we focus on the modeling of the visual context. In this modeling we take into account the notion of salience, its role in designation, and its use in the processing methods.
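
To make the role of salience concrete, here is a small hypothetical sketch in which the salience of a visual object combines its rendered size, whether it is highlighted, and how recently it was mentioned, and an ambiguous designation such as "this city" is resolved to the most salient matching object. The factors and weights are assumptions for the example, not the paper's model.

```python
# Illustrative salience-weighted resolution of an ambiguous designation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VisualObject:
    name: str          # e.g. "Rennes"
    category: str      # "city", "road", "river", ...
    size: float        # rendered size on screen, 0..1
    highlighted: bool  # currently emphasised on the map
    recency: float     # 0..1, how recently it was mentioned or displayed

def salience(o: VisualObject) -> float:
    """Combine visual prominence and dialogue recency into one score
    (the weights below are arbitrary illustrative choices)."""
    return 0.4 * o.size + 0.3 * (1.0 if o.highlighted else 0.0) + 0.3 * o.recency

def resolve(category: str, objects: List[VisualObject]) -> Optional[VisualObject]:
    """Resolve a designation like 'this city': keep objects of the requested
    category and return the most salient one."""
    matching = [o for o in objects if o.category == category]
    return max(matching, key=salience) if matching else None

scene = [
    VisualObject("Rennes", "city", size=0.6, highlighted=True, recency=0.8),
    VisualObject("Lannion", "city", size=0.3, highlighted=False, recency=0.2),
]
target = resolve("city", scene)
print(target.name if target else "no referent")  # -> Rennes
```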


Document Engineering | 2005

Integrating translation services within a structured editor

Ali Choumane; Hervé Blanchon; Cécile Roisin


TAL | 2006

Traduction automatisée fondée sur le dialogue et documents auto-explicatifs : bilan du projet LIDIA [Dialogue-based machine translation and self-explanatory documents: an assessment of the LIDIA project]

Hervé Blanchon; Christian Boitet; Ali Choumane


3rd IET International Conference on Intelligent Environments (IE 07) | 2007

Interpretation of multimodal designation with imprecise gesture

Ali Choumane; Jacques Siroux


International Journal of Computer Science: Theory and Application | 2016

Friend Recommendation based on Hashtags Analysis

Ali Choumane; Zein Al Abidin Ibrahim


Archive | 2008

Traitement générique des références dans le cadre multimodal parole-image-tactile [Generic processing of references in the speech-image-touch multimodal setting]

Ali Choumane

Collaboration


Dive into Ali Choumane's collaborations.

Top Co-Authors

Hervé Blanchon

Centre national de la recherche scientifique

Majd Ghareeb

Lebanese International University
