Publication


Featured research published by Mohamed A. Sehili.


Affective Computing and Intelligent Interaction | 2015

Multimodal data collection of human-robot humorous interactions in the Joker project

Laurence Devillers; Sophie Rosset; Guillaume Dubuisson Duplessis; Mohamed A. Sehili; Lucile Bechade; Agnes Delaborde; Clément Gossart; Vincent Letard; Fan Yang; Yücel Yemez; Bekir Berker Turker; T. Metin Sezgin; Kevin El Haddad; Stéphane Dupont; Daniel Luzzati; Yannick Estève; Emer Gilmartin; Nick Campbell

Thanks to its remarkable ability to convey amusement and engagement, laughter is one of the most important social markers in human interactions. Laughing together helps to set up a positive atmosphere and favors the creation of new relationships. This paper presents a data collection of social interaction dialogs involving humor between a human participant and a robot. In this work, interaction scenarios were designed to study social markers such as laughter. They were implemented within two automatic systems developed in the Joker project: a social dialog system using paralinguistic cues and a task-based dialog system using linguistic content. One of the major contributions of this work is to provide a context for studying human laughter produced during human-robot interaction. The collected data will be used to build a generic intelligent user interface which provides a multimodal dialog system with social communication skills, including humor and other informal socially oriented behaviors. This system will emphasize the fusion of verbal and non-verbal channels for emotional and social behavior perception, interaction, and generation.


International Conference on Social Robotics | 2015

Cross-Corpus Experiments on Laughter and Emotion Detection in HRI with Elderly People

Marie Tahon; Mohamed A. Sehili; Laurence Devillers

Social signal processing tasks such as laughter and emotion detection are very important, particularly in the field of human-robot interaction (HRI). At the moment, very few studies exist on elderly people's voices and social markers in real-life HRI situations. This paper presents a cross-corpus study with two realistic corpora featuring elderly people (ROMEO2 and ARMEN) and two corpora collected in laboratory conditions with young adults (JEMO and OFFICE). The goal of this experiment is to assess how well data from one given corpus can be used as a training set for another corpus, with a specific focus on elderly people's voices. First, clear differences between elderly people's real-life data and young adults' laboratory data are shown in the distributions of acoustic features (such as \(F_0\) standard deviation or local jitter). Second, cross-corpus emotion recognition experiments show that elderly people's real-life corpora are much more complex than laboratory corpora. Surprisingly, emotion models trained on one elderly people corpus do not generalize to another elderly people corpus collected in the same acoustic conditions but with different speakers. Our last result is that laboratory laughter is quite homogeneous across corpora, but this is not the case for elderly people's real-life laughter.
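
The cross-corpus protocol described above can be pictured with a short, hypothetical sketch: a classifier is trained on acoustic features from one corpus and evaluated on another. The synthetic data, feature dimensions, and scores below are placeholders chosen for illustration only, not the paper's actual corpora, features, or code.

```python
# Hypothetical cross-corpus evaluation sketch: train on one corpus's acoustic
# features (e.g. F0 standard deviation, local jitter) and test on another.
# All data here is synthetic; the distribution shift mimics the mismatch
# between laboratory recordings and real-life elderly-people recordings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def synthetic_corpus(n_samples, shift=0.0, n_features=2):
    """Stand-in for per-utterance acoustic feature extraction and labeling."""
    X = rng.normal(loc=shift, scale=1.0, size=(n_samples, n_features))
    y = rng.integers(0, 2, size=n_samples)  # binary emotion labels
    return X, y

X_train, y_train = synthetic_corpus(200)            # "laboratory" training corpus
X_test, y_test = synthetic_corpus(100, shift=1.5)   # mismatched "real-life" test corpus

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# Unweighted average recall is a common metric in emotion recognition.
uar = recall_score(y_test, clf.predict(X_test), average="macro")
print(f"Cross-corpus UAR: {uar:.2f}")
```
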


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

The Sweet-Home project: Audio processing and decision making in smart home to improve well-being and reliance

Michel Vacher; Pedro Chahuara; Benjamin Lecouteux; Dan Istrate; François Portet; Thierry Joubert; Mohamed A. Sehili; Brigitte Meillon; Nicolas Bonnefond; Sebastien Fabre; Camille Roux; Sybille Caffiau

The Sweet-Home project aims at providing audio-based interaction technology that lets users have full control over their home environment, at detecting distress situations, and at easing the social inclusion of the elderly and frail population. This paper presents an overview of the project, focusing on the implemented techniques for speech and sound recognition as well as context-aware decision making under uncertainty. A user experiment in a smart home demonstrates the potential of this audio-based technology.
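
As a loose illustration of what context-aware decision making under uncertainty can mean in this setting, the sketch below weights a recognizer's confidence by a toy context model before executing a home-automation command. The threshold, context rule, and command names are assumptions made for the example, not the Sweet-Home implementation.

```python
# Hypothetical sketch: combine a speech recognizer's confidence with contextual
# plausibility before acting on a voice command in a smart home. Values and
# rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    command: str            # e.g. "turn on the light"
    asr_confidence: float   # recognizer confidence in [0, 1]

def context_prior(command: str, room: str, hour: int) -> float:
    """Toy context model: lighting commands in the bedroom are more plausible at night."""
    if "light" in command and room == "bedroom" and (hour >= 20 or hour <= 6):
        return 0.9
    return 0.5

def decide(hyp: Hypothesis, room: str, hour: int, threshold: float = 0.6) -> str:
    # Weight recognition confidence by contextual plausibility.
    score = hyp.asr_confidence * context_prior(hyp.command, room, hour)
    return f"EXECUTE: {hyp.command}" if score >= threshold else "ASK FOR CONFIRMATION"

print(decide(Hypothesis("turn on the light", 0.8), room="bedroom", hour=22))
```
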


International Conference on Social Robotics | 2015

Smile and Laughter Detection for Elderly People-Robot Interaction

Fan Yang; Mohamed A. Sehili; Claude Barras; Laurence Devillers

Affect bursts play an important role in non-verbal social interaction. Laughter and smiles are among the most important social markers in human-robot social interaction. Not only do they contain affective information, they may also reveal the user's communication strategy. In the context of human-robot interaction, an automatic laughter and smile detection system may thus help the robot adapt its behavior to a given user's profile by adopting a more relevant communication scheme. While much interesting work has been done on laughter and smile detection, only a few studies have focused on elderly people. Elderly people's data are relatively rare and often pose a significant challenge to a laughter and smile detection system due to face wrinkles and an often lower voice quality. In this paper, we address laughter and smile detection in the ROMEO2 corpus, a multimodal (audio and video) corpus of elderly people-robot interaction. We show that combining the two modalities yields a fair improvement over the performance of either modality alone.
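
The audio-visual combination can be pictured with a minimal score-level (late) fusion sketch; the weights and per-modality scores below are invented for illustration and do not reproduce the fusion method or the figures reported in the paper.

```python
# Hypothetical late-fusion sketch: combine the posterior scores of an
# audio-based and a video-based laughter/smile detector with a weighted sum.
def fuse_scores(audio_score: float, video_score: float, w_audio: float = 0.6) -> float:
    """Weighted sum of per-modality scores for the 'laughter' class."""
    return w_audio * audio_score + (1.0 - w_audio) * video_score

# Example segment: the audio detector is uncertain (lower voice quality),
# the video detector is more confident despite face wrinkles.
audio_score, video_score = 0.45, 0.80
fused = fuse_scores(audio_score, video_score)
print(f"audio={audio_score:.2f} video={video_score:.2f} fused={fused:.2f}")
print("laughter" if fused >= 0.5 else "non-laughter")
```
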


International Conference on Multimodal Interfaces | 2015

Behavioral and Emotional Spoken Cues Related to Mental States in Human-Robot Social Interaction

Lucile Bechade; Guillaume Dubuisson Duplessis; Mohamed A. Sehili; Laurence Devillers

Understanding human behavioral and emotional cues occurring in interaction has become a major research interest due to the emergence of numerous applications such as social robotics. While there is agreement across different theories that some behavioral signals are involved in communicating information, there is a lack of consensus regarding their specificity, their universality, and whether they convey emotions, affective states, cognitive states, mental states, or all of these. Our goal in this study is to explore the relationship between behavioral and emotional cues extracted from speech (e.g., laughter, speech duration, negative emotions) and different communicative information about the human participant. This study is based on a corpus of audio/video data of humorous interactions between the Nao robot and 37 human participants. Participants filled in three questionnaires about their personality, sense of humor, and mental states regarding the interaction. This work reveals the existence of many links between behavioral and emotional cues and the mental states reported by human participants through self-report questionnaires. However, we have not found a clear connection between reported mental states and participants' profiles.
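
As a hedged illustration of how such links between spoken cues and self-reported mental states can be quantified, the sketch below runs a rank correlation between a synthetic per-participant cue (laughter count) and a synthetic self-report score; the variable names, scale, and generated data are assumptions for the example only, not the study's data.

```python
# Hypothetical sketch: rank correlation between a behavioral cue and a
# self-reported mental-state score across participants. Data is synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_participants = 37

# Hypothetical per-participant cue: number of laughs during the interaction.
laughter_count = rng.poisson(lam=5, size=n_participants)
# Hypothetical self-reported "amusement" score (Likert 1-7), loosely related.
amusement = np.clip(
    np.round(1 + 0.6 * laughter_count + rng.normal(0, 1, n_participants)), 1, 7
)

rho, p_value = spearmanr(laughter_count, amusement)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")
```
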


Proceedings of the Fourth Workshop on Speech and Language Processing for Assistive Technologies | 2013

Experimental Evaluation of Speech Recognition Technologies for Voice-based Home Automation Control in a Smart Home

Michel Vacher; Benjamin Lecouteux; Dan Istrate; Thierry Joubert; François Portet; Mohamed A. Sehili; Pedro Chahuara


International Journal of Social Robotics | 2015

Inference of Human Beings’ Emotional States from Speech in Human–Robot Interactions

Laurence Devillers; Marie Tahon; Mohamed A. Sehili; Agnes Delaborde


Conference of the International Speech Communication Association | 2013

Evaluation of a Real-Time Voice Order Recognition System from Multiple Audio Channels in a Home

Michel Vacher; Benjamin Lecouteux; Dan Istrate; Thierry Joubert; François Portet; Mohamed A. Sehili; Pedro Chahuara


Human-Robot Interaction | 2014

Attention Detection in Elderly People-Robot Spoken Interaction

Mohamed A. Sehili; Fan Yang; Laurence Devillers


Conference of the International Speech Communication Association | 2015

Nao is doing humour in the CHIST-ERA JOKER project

Guillaume Dubuisson Duplessis; Lucile Bechade; Mohamed A. Sehili; Agnes Delaborde; Vincent Letard; Anne-Laure Ligozat; Paul Deléglise; Yannick Estève; Sophie Rosset; Laurence Devillers

Collaboration


Dive into Mohamed A. Sehili's collaborations.

Top Co-Authors

Agnes Delaborde

Centre national de la recherche scientifique

François Portet

Centre national de la recherche scientifique

Guillaume Dubuisson Duplessis

Centre national de la recherche scientifique

Lucile Bechade

Centre national de la recherche scientifique

Marie Tahon

Centre national de la recherche scientifique

Michel Vacher

Centre national de la recherche scientifique

Pedro Chahuara

Centre national de la recherche scientifique

Dan Istrate

École Normale Supérieure
