
Publications


Featured research published by Chloé Clavel.


International Conference on Multimedia and Expo | 2005

Events Detection for an Audio-Based Surveillance System

Chloé Clavel; Thibaut Ehrette; Gaël Richard

The present research deals with audio event detection in noisy environments for a multimedia surveillance application. In surveillance and homeland security, most systems that aim to automatically detect abnormal situations rely only on visual cues, while in some situations it may be easier to detect a given event using audio information. This is in particular the case for the class of sounds considered in this paper: sounds produced by gun shots. The automatic shot detection system presented is based on a novelty detection approach, which offers a solution for detecting abnormal audio events in continuous audio recordings of public places. We specifically focus on the robustness of the detection against variable and adverse conditions and on the reduction of the false rejection rate, which is particularly important in surveillance applications. In particular, we take advantage of the potential similarity between the acoustic signatures of different types of weapons by building a hierarchical classification system.
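The novelty-detection idea in this abstract (model the "normal" background sound and flag frames the model cannot explain) can be sketched as follows. This is an illustrative sketch only: the diagonal-Gaussian background model, the synthetic features, and the percentile-based threshold are assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_background_model(normal_feats):
    """Fit a diagonal Gaussian to feature vectors of normal background audio."""
    mu = normal_feats.mean(axis=0)
    var = normal_feats.var(axis=0) + 1e-6  # variance floor to avoid division by zero
    return mu, var

def log_likelihood(frames, mu, var):
    """Per-frame log-likelihood under the background model."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mu) ** 2 / var, axis=-1)

def detect_abnormal(frames, mu, var, threshold):
    """Flag frames that the 'normal' model explains poorly (novelty detection)."""
    return log_likelihood(frames, mu, var) < threshold

# Illustrative use on synthetic features
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 4))                   # background frames
mu, var = fit_background_model(normal)
threshold = np.percentile(log_likelihood(normal, mu, var), 1)  # ~1% false-alarm target
loud_event = np.full((1, 4), 6.0)                              # outlier frame (e.g. a shot)
print(detect_abnormal(loud_event, mu, var, threshold))         # flagged as abnormal
```

The threshold is chosen from the likelihood distribution of the normal data itself, which is what lets the detector trade off false rejections against false alarms without ever seeing an example of the abnormal class.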


Speech Communication | 2008

Fear-type emotion recognition for future audio-based surveillance systems

Chloé Clavel; Ioana Vasilescu; Laurence Devillers; Gaël Richard; Thibaut Ehrette

This paper addresses the issue of automatic emotion recognition in speech. We focus on a type of emotional manifestation which has rarely been studied in speech processing: fear-type emotions occurring during abnormal situations (here, unplanned events where human life is threatened). This study is dedicated to a new application of emotion recognition: public safety. The starting point of this work is the definition and the collection of data illustrating extreme emotional manifestations in threatening situations. For this purpose we develop the SAFE corpus (Situation Analysis in a Fictional and Emotional corpus) based on fiction movies. It consists of 7 h of recordings organized into 400 audiovisual sequences. The corpus contains recordings of both normal and abnormal situations and provides a large scope of contexts and therefore a large scope of emotional manifestations. In this way, it not only addresses the lack of corpora illustrating strong emotions, but also forms an interesting support for studying a wide variety of emotional manifestations. We define a task-dependent annotation strategy whose particularity is to describe simultaneously the emotion and the evolution of the situation in context. The emotion recognition system is based on these data and must handle a large scope of unknown speakers and situations in noisy sound environments. It consists of a fear vs. neutral classification. The novelty of our approach relies on dissociated acoustic models of the voiced and unvoiced content of speech, which are then merged at the decision step of the classification system. The results are quite promising given the complexity and the diversity of the data: the error rate is about 30%.
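The decision-level merging of the voiced and unvoiced sub-systems might look like the following sketch. The weighted-sum fusion rule and the score convention (a positive fear-vs-neutral log-likelihood ratio means "fear") are assumptions for illustration, not the paper's exact fusion rule.

```python
def fuse_decisions(voiced_score, unvoiced_score, weight=0.5, threshold=0.0):
    """Merge the two acoustic sub-systems at the decision step.

    Each score is assumed to be a fear-vs-neutral log-likelihood ratio
    produced by the classifier for the voiced (resp. unvoiced) content.
    """
    fused = weight * voiced_score + (1.0 - weight) * unvoiced_score
    return "fear" if fused > threshold else "neutral"

# A strong fear cue in the voiced content outweighs a weak neutral cue
print(fuse_decisions(voiced_score=2.0, unvoiced_score=-0.5))  # fear
```

Fusing at the decision step (rather than concatenating features) lets each sub-system be trained on the signal segments it models best, which is the point of dissociating the voiced and unvoiced content.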


IEEE Transactions on Affective Computing | 2016

Sentiment Analysis: From Opinion Mining to Human-Agent Interaction

Chloé Clavel; Zoraida Callejas

The opinion mining and human-agent interaction communities are currently addressing sentiment analysis from different perspectives that comprise, on the one hand, disparate sentiment-related phenomena and computational representations, and on the other hand, different detection and dialog management methods. In this paper we identify and discuss the growing opportunities for cross-disciplinary work that may accelerate advances in each community. Sentiment/opinion detection methods used in human-agent interaction are indeed rare and, when they are employed, they do not differ from those used in opinion mining and are consequently not designed for socio-affective interactions (timing constraints of the interaction, sentiment analysis as both an input and an output of interaction strategies). To support our claims, we present a comparative state of the art which analyzes the sentiment-related phenomena and the sentiment detection methods used in both communities, and gives an overview of the goals of socio-affective human-agent strategies. We then propose different possibilities for mutual benefit, specifying several research tracks and discussing the open questions and prospects. To show the feasibility of the general guidelines proposed, we also approach them from a specific perspective by applying them to the Greta embodied conversational agent platform and discuss how they can be used to make sentiment analysis more meaningful for human-agent interactions in two different use cases: job interviews and dialogs with museum visitors.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Detection and Analysis of Abnormal Situations Through Fear-Type Acoustic Manifestations

Chloé Clavel; Laurence Devillers; Gaël Richard; Ioana Vasilescu; Thibaut Ehrette

Recent work on emotional speech processing has demonstrated the value of considering the information conveyed by the emotional component of speech to enhance the understanding of human behaviour. To date, however, emotion detection systems have rarely been integrated into real applications. The present research focuses on the development of a fear-type emotion recognition system to detect and analyse abnormal situations for surveillance applications. The fear vs. neutral classification achieves a mean accuracy of 70.3%, which is quite encouraging given the diversity of fear manifestations illustrated in the data. More specific acoustic models are built inside the fear class by considering the context in which the emotional manifestations emerge, i.e. the type of threat during which they occur, which has a strong influence on the acoustic manifestations of fear. The potential use of these models for a threat-type recognition system is also investigated, since such information about the situation can indeed be useful for surveillance systems.


Language Resources and Evaluation | 2013

Spontaneous speech and opinion detection: mining call-centre transcripts

Chloé Clavel; Gilles Adda; Frederik Cailliau; Martine Garnier-Rizet; Ariane Cavet; Géraldine Chapuis; Sandrine Courcinous; Charlotte Danesi; Anne-Laure Daquo; Myrtille Deldossi; Sylvie Guillemin-Lanne; Marjorie Seizou; Philippe Suignard

Opinion mining on conversational telephone speech faces two challenges: the robustness of speech transcriptions and the relevance of opinion models. Both are critical in an industrial context such as marketing. This paper addresses the two issues jointly by analyzing the influence of speech transcription errors on the detection of opinions and business concepts. We present both modules: the speech transcription system, a successful adaptation of a conversational speech transcription system to call-centre data, and the information extraction module, which is based on semantic modeling of business concepts, opinions and sentiments with complex linguistic rules. Three opinion models are implemented, based on discourse theory, appraisal theory and marketers' expertise, respectively. The influence of speech recognition errors on the information extraction module is evaluated by comparing its outputs on manual versus automatic transcripts. The F-scores obtained are 0.79 for business concept detection, 0.74 for opinion detection and 0.67 for the extraction of relations between opinions and their targets. This result and an in-depth analysis of the errors show the feasibility of opinion detection based on complex rules on call-centre transcripts.
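For reference, the F-scores reported in this abstract are the harmonic mean of precision and recall. A minimal computation from raw detection counts, with made-up counts for illustration:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 8 correct detections, 2 spurious, 2 missed
p, r, f = precision_recall_f1(tp=8, fp=2, fn=2)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.8 0.8 0.8
```

Because F1 is a harmonic mean, it penalizes an imbalance between precision and recall, which matters when transcription errors inflate either spurious detections or misses.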


Proceedings of the 2010 International Workshop on Searching Spontaneous Conversational Speech | 2010

Impact of spontaneous speech features on business concept detection: a study of call-centre data

Charlotte Danesi; Chloé Clavel

This paper focuses on the detection of business concepts in call-centre conversation transcripts. In the literature, the behaviour of information extraction on such spontaneous speech data has rarely been analyzed in depth. We highlight here the various problems encountered when attempting to extract information from such data. The recall and precision, obtained by comparing the concept detection method on automatic vs. manual transcriptions, are 74.8% and 77.7%, respectively. We find that, even though concept detection is broadly similar between manual and automatic transcriptions, spontaneous speech features tend to cause different behaviours of opinion-related concept detection on the two transcriptions. On the one hand, spontaneous speech features, which frequently occur in these data, provoke silence (missed detections) when detecting concepts on both transcriptions. On the other hand, ASR errors (e.g. due to homophony or disfluencies) tend to provoke noise (spurious detections) when detecting concepts on the automatic transcription.


Computer Speech & Language | 2011

Fiction support for realistic portrayals of fear-type emotional manifestations

Chloé Clavel; Ioana Vasilescu; Laurence Devillers

The present paper aims at filling the current lack of databases containing emotional manifestations. Strong emotions are indeed difficult to collect in real life: they occur in contexts that are generally unpredictable, and some of them, such as anger, are less frequent in public life than in private. Even though such emotions are rare in existing databases, the need of applications that target them (crisis management, surveillance, strategic intelligence, etc.) for emotional recordings is all the more acute. We propose here to use fictional media to compensate for the difficulty of collecting strong emotions. Emotions in realistic fiction are portrayed by skilled actors in interpersonal interactions, and the mise-en-scène tends to stir genuine emotions. In addition, fiction offers an overall view of emotional manifestations in various real-life contexts: face-to-face interactions, phone calls, interviews, and emotional event reporting vs. in situ emotional manifestations. A fear-type emotion recognition system has been developed, based on acoustic models learnt from the fiction corpus. This paper provides an in-depth analysis of the various factors that may influence the system's behaviour: the annotation issue and the behaviour of the acoustic features. These two aspects emphasize the main feature of fiction: the variety of the emotional manifestations and of their contexts.


Intelligent Virtual Agents | 2016

Using Temporal Association Rules for the Synthesis of Embodied Conversational Agents with a Specific Stance

Thomas Janssoone; Chloé Clavel; Kevin Bailly; Gaël Richard

In the field of Embodied Conversational Agents (ECAs), one of the main challenges is to generate socially believable agents. The long-run objective of the present study is to infer rules for the multimodal generation of agents' socio-emotional behaviour. In this paper, we introduce the Social Multimodal Association Rules with Timing (SMART) algorithm, which learns such rules from the analysis of a multimodal corpus composed of audio-video recordings of human-human interactions. The proposed methodology consists in applying a sequence mining algorithm to automatically extracted social signals such as prosody, head movements and facial muscle activations. This allows us to infer temporal association rules for behaviour generation. We show that this method can automatically compute temporal association rules coherent with prior results from the literature, especially in psychology and sociology. The results of a perceptive evaluation confirm the ability of an agent based on temporal association rules to express a specific stance.
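The sequence-mining step can be pictured with a toy sketch: count how often one social signal follows another within a time window, and keep the pairs whose confidence is high enough. This is not the SMART algorithm itself; the signal names, window, and confidence threshold below are made up for illustration.

```python
from collections import defaultdict

def temporal_rules(events, window=1.0, min_conf=0.6):
    """Mine simple 'A is followed by B within `window` seconds' rules
    from a time-sorted list of (timestamp, signal) events."""
    follows = defaultdict(int)   # (A, B) -> how often B followed A in time
    occurs = defaultdict(int)    # A -> total occurrences of A
    for i, (t_a, a) in enumerate(events):
        occurs[a] += 1
        seen = set()             # count each follower signal once per occurrence of A
        for t_b, b in events[i + 1:]:
            if t_b - t_a > window:
                break
            if b != a and b not in seen:
                follows[(a, b)] += 1
                seen.add(b)
    return {(a, b): count / occurs[a]
            for (a, b), count in follows.items()
            if count / occurs[a] >= min_conf}

# Hypothetical signal stream: smiles that tend to be followed by head nods
events = [(0.0, "smile"), (0.3, "nod"), (2.0, "smile"),
          (2.4, "nod"), (5.0, "smile"), (9.0, "nod")]
print(temporal_rules(events))  # only the (smile -> nod) rule survives
```

Real sequence-mining algorithms generalize this to longer patterns and handle support as well as confidence, but the core operation is the same counting of time-constrained co-occurrences.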


Toward Robotic Socially Believable Behaving Systems (II) | 2016

Fostering User Engagement in Face-to-Face Human-Agent Interactions: A Survey

Chloé Clavel; Angelo Cafaro; Sabrina Campano; Catherine Pelachaud

Embodied conversational agents are capable of carrying out face-to-face interactions with users, and their use is increasing substantially in numerous applications ranging from tutoring systems to ambient assisted living. In such applications, one of the main challenges is to keep the user engaged in the interaction with the agent. The present chapter provides an overview of the scientific issues underlying the engagement paradigm, including a review of methodologies for assessing user engagement in human-agent interaction. It presents three studies conducted within the Greta/VIB platforms, which aimed at designing engaging agents using different interaction strategies (alignment and dynamical coupling) and the expression of interpersonal attitudes in multi-party interactions.


International Joint Conference on Natural Language Processing | 2015

Improving social relationships in face-to-face human-agent interactions: when the agent wants to know user's likes and dislikes

Caroline Langlet; Chloé Clavel

This paper tackles the detection of users' verbal expressions of likes and dislikes in human-agent interaction. We present a system grounded in the theoretical framework provided by (Martin and White, 2005) that integrates the interaction context by jointly processing the agent's and the user's utterances. It is designed as a rule-based, bottom-up process over a symbolic representation of the sentence structure. This article also describes the annotation campaign, carried out through Amazon Mechanical Turk, for the creation of the evaluation dataset. Finally, we present measures of rating agreement between our system and the human reference and obtain agreement scores that are equal to or higher than substantial agreement.
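The "substantial agreement" wording refers to the usual Landis and Koch scale for chance-corrected agreement coefficients. Assuming Cohen's kappa as one of the measures (the paper reports several, not named here), a minimal computation looks like this; the like/dislike labels below are made up for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rater's marginal label distribution
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / n ** 2
    return (observed - expected) / (1.0 - expected)

# Two raters labelling ten utterances as 'like' / 'dislike'
a = ["like"] * 5 + ["dislike"] * 5
b = ["like"] * 4 + ["dislike"] * 6
print(round(cohens_kappa(a, b), 2))  # 0.8
```

Correcting for chance matters here because raw percentage agreement is inflated whenever one label (e.g. neutral utterances) dominates the data.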

Collaboration


Dive into Chloé Clavel's collaborations.

Top Co-Authors

Catherine Pelachaud
Centre national de la recherche scientifique

Gaël Richard
Université Paris-Saclay

Ioana Vasilescu
Centre national de la recherche scientifique

Magalie Ochs
Aix-Marseille University