Sylwia Julia Hyniewska
Télécom ParisTech
Publications
Featured research published by Sylwia Julia Hyniewska.
intelligent virtual agents | 2010
Elisabetta Bevacqua; Sathish Pammi; Sylwia Julia Hyniewska; Marc Schröder; Catherine Pelachaud
One of the most desirable characteristics of an Embodied Conversational Agent (ECA) is the capability of interacting with users in a human-like manner. While listening to a user, an ECA should be able to provide backchannel signals through visual and acoustic modalities. In this work we propose an improvement of our previous system for generating multimodal backchannel signals on visual and acoustic modalities. A perceptual study was performed to understand how context-free multimodal backchannels are interpreted by users.
Journal on Multimodal User Interfaces | 2012
Elisabetta Bevacqua; Etienne de Sevin; Sylwia Julia Hyniewska; Catherine Pelachaud
We present a computational model that generates listening behaviour for a virtual agent. It triggers backchannel signals according to the user's visual and acoustic behaviour. The appropriateness of the backchannel algorithm in a user-agent storytelling situation was evaluated by naïve participants, who judged the algorithm-driven timing of backchannels more positively than random timing. The system can generate different types of backchannels; the type and frequency of the backchannels to be displayed are chosen according to the agent's personality traits. The personality of the agent is defined along two dimensions, extroversion and neuroticism. We link higher extroversion to a greater tendency to perform backchannels, and neuroticism to less mimicry and more response and reactive signals. We ran a perception study to test these relations in agent-user interactions, as evaluated by third parties. We found that the backchannel frequency selected by our algorithm contributes to the correct interpretation of the agent's behaviour in terms of personality traits.
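The trait-to-behaviour tendencies described in this abstract (more extroversion, more backchannels; more neuroticism, less mimicry and more reactive signals) can be sketched as follows. This is a minimal illustrative sketch, not the authors' model: the 0-to-1 trait scales, the linear weighting, and the signal categories are all assumptions.

```python
# Hypothetical sketch of personality-modulated backchannel selection.
# Traits are assumed to lie on a 0..1 scale; the formulas below only
# reproduce the qualitative tendencies stated in the abstract.
def select_backchannel(extroversion: float, neuroticism: float) -> dict:
    # More extroverted agents produce backchannels more often.
    frequency = 0.2 + 0.6 * extroversion  # probability per opportunity

    # More neurotic agents mimic less and send more reactive signals.
    mimicry_weight = 1.0 - neuroticism
    reactive_weight = 1.0 + neuroticism
    total = mimicry_weight + reactive_weight

    return {
        "frequency": frequency,
        "p_mimicry": mimicry_weight / total,
        "p_reactive": reactive_weight / total,
    }
```

With these assumptions, an extroverted agent backchannels more often than an introverted one, and a neurotic agent shifts probability mass from mimicry to reactive signals.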
affective computing and intelligent interaction | 2009
Radoslaw Niewiadomski; Sylwia Julia Hyniewska; Catherine Pelachaud
We developed a model of multimodal sequential expressions of emotion for an Embodied Conversational Agent, based on video annotations and on descriptions found in the literature. A language was derived to describe expressions of emotion as sequences of facial and body movement signals. This paper presents an evaluation study of the model. Animations of 8 sequential expressions corresponding to the emotions anger, anxiety, cheerfulness, embarrassment, panic fear, pride, relief, and tension were realized with our model. The recognition rate of these expressions is above chance level, suggesting that our model can generate recognizable expressions of emotion, even for emotional expressions not considered to be universally recognized.
intelligent virtual agents | 2010
Etienne de Sevin; Sylwia Julia Hyniewska; Catherine Pelachaud
Our aim is to build a real-time Embodied Conversational Agent able to act as an interlocutor in interaction, automatically generating verbal and nonverbal signals. These signals, called backchannels, provide information about the listener's mental state towards the perceived speech. The ECA reacts differently to the user's behavior depending on its predefined personality, which influences the generation and selection of backchannels. In this paper, we propose a listener's action selection algorithm that works in real time to choose the type and frequency of backchannels to be displayed by the ECA in accordance with its personality. The algorithm is based on the extroversion and neuroticism dimensions of personality. We present an evaluation of how backchannels managed by this algorithm match participants' intuitive expectations of behavior specific to different personalities.
intelligent virtual agents | 2009
Radoslaw Niewiadomski; Sylwia Julia Hyniewska; Catherine Pelachaud
In this paper we present a system that allows a virtual character to display multimodal sequential expressions, i.e., expressions composed of different signals that are partially ordered in time and belong to different nonverbal communicative channels. The system comprises a language for describing such expressions from real data and an algorithm that uses this description to automatically generate emotional displays. We explain in detail the process of creating multimodal sequential expressions, from annotation to synthesis of the behavior.
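The idea of an expression as signals partially ordered in time across nonverbal channels can be illustrated with a minimal data-structure sketch. The channel names, signal labels, and timing representation here are illustrative assumptions, not the description language actually defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    channel: str    # e.g. "face", "gaze", "head" (assumed channel names)
    label: str      # e.g. "look_down" (assumed signal labels)
    start: float    # onset relative to expression start, in seconds
    duration: float

# A sequential expression as a list of signals: signals on different
# channels may overlap, so the ordering in time is only partial.
embarrassment = [
    Signal("gaze", "look_down", 0.0, 1.2),
    Signal("head", "bow", 0.2, 1.0),
    Signal("face", "smile_controlled", 0.5, 0.8),
]

def channels(expression):
    """Channels used by an expression, in order of first onset."""
    seen = []
    for sig in sorted(expression, key=lambda s: s.start):
        if sig.channel not in seen:
            seen.append(sig.channel)
    return seen
```

A synthesis algorithm over such a description would schedule each signal on its channel at its onset time, which is the kind of annotation-to-animation pipeline the abstract outlines.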
Archive | 2017
Ikechukwu Ofodile; Kaustubh Kulkarni; Ciprian A. Corneanu; Sergio Escalera; Xavier Baró; Sylwia Julia Hyniewska; Jüri Allik; Gholamreza Anbarjafari
Archive | 2013
Radoslaw Niewiadomski; Sylwia Julia Hyniewska; Catherine Pelachaud
Archive | 2010
Peter Bleackley; Sylwia Julia Hyniewska; Radoslaw Niewiadomski; Catherine Pelachaud
IEEE Transactions on Affective Computing | 2018
Kaustubh Kulkarni; Ciprian A. Corneanu; Ikechukwu Ofodile; Sergio Escalera; Xavier Baró; Sylwia Julia Hyniewska; Jüri Allik; Gholamreza Anbarjafari
Emotion-Oriented Systems | 2013
Sylwia Julia Hyniewska; Radoslaw Niewiadomski; Catherine Pelachaud