
Publication


Featured research published by Radoslaw Niewiadomski.


International Journal of Humanoid Robotics | 2006

Multimodal complex emotions: gesture expressivity and blended facial expressions

Jean-Claude Martin; Radoslaw Niewiadomski; Laurence Devillers; Stéphanie Buisine; Catherine Pelachaud

One of the challenges of designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors in several modalities. In this paper, we present the annotation, representation, and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these different levels of information. The second part of our paper focuses on a model of complex emotions (superposition and masking) in the facial expressions of the agent. We explain how the complementary aspects of our work on the corpus and the computational model are used to specify complex emotional behaviors.
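
To make the multi-level annotation concrete, the sketch below shows what one such corpus record might look like; the field names and example values are assumptions for illustration, not the actual corpus schema.

```python
# Illustrative sketch only: one corpus segment annotated at the three levels
# the abstract lists (communicative acts, emotion labels, multimodal signs).
# Field names and example values are assumptions, not the real corpus schema.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float                      # seconds into the interview
    end: float
    communicative_act: str            # e.g. "inform", "deny"
    emotion_labels: list[str]         # a list, since emotions may be blended
    multimodal_signs: dict[str, str]  # modality -> observed sign

segment = Segment(
    start=12.4, end=15.1,
    communicative_act="deny",
    emotion_labels=["anger", "despair"],  # a blended emotion
    multimodal_signs={"face": "frown", "gesture": "head shake", "voice": "tense"},
)
print(segment)  # a copy-synthesis pipeline could replay such records on an agent
```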


Affective Computing and Intelligent Interaction | 2005

Intelligent expressions of emotions

Magalie Ochs; Radoslaw Niewiadomski; Catherine Pelachaud; David Sadek

We propose an architecture for an embodied conversational agent that takes into account two aspects of emotions: the emotions triggered by an event (the felt emotions) and the expressed emotions (the displayed ones), which may differ in real life. In this paper, we present a formalization of emotion-eliciting events based on a model of the agent’s mental state composed of beliefs, choices, and uncertainties. This model makes it possible to identify the emotional state of an agent at any time. We also introduce a computational model, based on fuzzy logic, that computes facial expressions of blended emotions. Finally, we show examples of facial expressions resulting from the implementation of our model.
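
As a minimal illustration of fuzzy blending, assuming each expression is a set of fuzzy activation degrees over facial regions; the regions, values, and min-based composition below are illustrative stand-ins, not the model from the paper.

```python
# Illustrative sketch only (not the authors' implementation): blending two
# emotions' facial expressions with simple fuzzy min composition.
# An expression = fuzzy activation degree in [0, 1] per facial region.

EXPRESSIONS = {
    "sadness": {"brows": 0.9, "eyes": 0.6, "mouth": 0.3},
    "joy":     {"brows": 0.1, "eyes": 0.5, "mouth": 0.9},
}

def blend(emotion_a, emotion_b, intensity_a=1.0, intensity_b=1.0):
    """Combine two expressions region by region.

    Fuzzy AND (min) keeps only the traits both scaled expressions share;
    fuzzy OR (max) would instead keep the dominant trait per region.
    """
    a, b = EXPRESSIONS[emotion_a], EXPRESSIONS[emotion_b]
    return {
        region: min(a[region] * intensity_a, b[region] * intensity_b)
        for region in a
    }

print(blend("sadness", "joy", intensity_a=0.8))
# {'brows': 0.1, 'eyes': 0.48, 'mouth': 0.24}
```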


Intelligent Virtual Agents | 2006

Perception of blended emotions: from video corpus to expressive agent

Stéphanie Buisine; Sarkis Abrilian; Radoslaw Niewiadomski; Jean-Claude Martin; Laurence Devillers; Catherine Pelachaud

Real-life emotions are often blended and involve several simultaneous superposed or masked emotions. This paper reports on a study of the perception of multimodal emotional behaviors in Embodied Conversational Agents. This experimental study aims at evaluating whether people properly detect the signs of emotions in different modalities (speech, facial expressions, gestures) when they appear to be superposed or masked. We compared the perception of emotional behaviors annotated in a corpus of TV interviews and replayed by an expressive agent at different levels of abstraction. The results provide insights into the use of such protocols for studying the effect of various models and modalities on the perception of complex emotions.


Affective Computing and Intelligent Interaction | 2007

Model of Facial Expressions Management for an Embodied Conversational Agent

Radoslaw Niewiadomski; Catherine Pelachaud

In this paper we present a model of facial behaviour encompassing interpersonal relations for an Embodied Conversational Agent (ECA). Although previous solutions to this problem exist in the ECA domain, in our approach a variety of facial expressions (i.e. expressed, masked, inhibited, and fake expressions) is used for the first time. Moreover, our rules of facial behaviour management are consistent with the predictions of politeness theory as well as with the experimental data (i.e. the annotation of the video corpus). Knowing the affective state of the agent and the type of relation between the interlocutors, the system automatically adapts the facial behaviour of the agent to the social context. We also present the evaluation study of our model that we have conducted. In this experiment we analysed the perception of interpersonal relations from the facial behaviour of our agent.
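
A hypothetical sketch of such display-management rules follows; the variables (dominance, familiarity), thresholds, and strategies are illustrative stand-ins rather than the rules from the paper.

```python
# Hypothetical sketch of rule-based display management (names and thresholds
# are illustrative, not taken from the paper). Politeness-style rules pick
# how a felt emotion is shown given the interpersonal relation.

def manage_display(felt_emotion: str, dominance: float, familiarity: float) -> str:
    """Return a display strategy for the agent's felt emotion.

    dominance:   interlocutor's power over the agent, in [0, 1]
    familiarity: closeness of the relationship, in [0, 1]
    """
    negative = felt_emotion in {"anger", "disgust", "contempt"}
    if negative and dominance > 0.5:
        # Showing negative emotion to a dominant interlocutor is face-threatening
        return "mask with polite smile" if familiarity < 0.5 else "inhibit"
    if negative:
        return "attenuate"   # soften the expression among equals
    return "express"         # positive emotions are shown freely

print(manage_display("anger", dominance=0.8, familiarity=0.2))
# mask with polite smile
```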


Intelligent Virtual Agents | 2008

Expressions of Empathy in ECAs

Radoslaw Niewiadomski; Magalie Ochs; Catherine Pelachaud

Recent research has shown that empathic virtual agents improve human-machine interaction. Virtual agents’ expressions of empathy are generally fixed intuitively and are not evaluated. In this paper, we propose a novel approach to the expression of empathy using complex facial expressions such as superposition and masking. An evaluation study has been conducted in order to identify the most appropriate way to express empathy. According to the evaluation results, people find facial expressions that contain elements of the empathic emotion more suitable. In particular, complex facial expressions seem to be a good approach to expressing empathy.


Cognitive Processing | 2012

Smiling virtual agent in social context

Magalie Ochs; Radoslaw Niewiadomski; Paul M. Brunet; Catherine Pelachaud

A smile may communicate different communicative intentions depending on subtle characteristics of the facial expression. In this article, we propose an algorithm to determine the morphological and dynamic characteristics of a virtual agent’s smiles of amusement, politeness, and embarrassment. The algorithm has been defined based on a corpus of virtual agent smiles constructed by users and analyzed with a decision tree classification technique. An evaluation of the resulting smiles, in different contexts, has enabled us to validate the proposed algorithm.
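
The classification step could look roughly like the following scikit-learn sketch on toy data; the feature set (amplitude, duration, symmetry, lip press) is an assumption, not the corpus features used in the article.

```python
# Illustrative sketch (toy data, not the authors' corpus): learning smile
# type from morphological/dynamic features with a decision tree.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [amplitude, duration_s, symmetry, lip_press] -- an assumed set
X = [
    [0.9, 2.0, 0.9, 0.0],   # large, long, symmetric        -> amusement
    [0.8, 1.8, 0.8, 0.0],
    [0.3, 0.8, 0.9, 0.0],   # small, short                  -> politeness
    [0.2, 0.6, 0.8, 0.0],
    [0.4, 1.5, 0.4, 1.0],   # asymmetric, with lip press    -> embarrassment
    [0.5, 1.7, 0.3, 1.0],
]
y = ["amusement", "amusement", "politeness", "politeness",
     "embarrassment", "embarrassment"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["amplitude", "duration", "symmetry", "lip_press"]))
print(tree.predict([[0.85, 1.9, 0.85, 0.0]]))  # -> ['amusement']
```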


Presence: Teleoperators & Virtual Environments | 2011

How is believability of a virtual agent related to warmth, competence, personification, and embodiment?

Virginie Demeure; Radoslaw Niewiadomski; Catherine Pelachaud

The term “believability” is often used to describe expectations concerning virtual agents. In this paper, we analyze which factors influence the believability of an agent acting as a software assistant. We consider several factors such as embodiment, communicative behavior, and emotional capabilities. We conduct a perceptive study in which we analyze the role of plausible and/or appropriate emotional displays in relation to believability. We also investigate how people judge the believability of the agent, and whether it provokes social reactions from humans toward it. Finally, we evaluate the respective impact of embodiment and emotion on believability judgments. The results of our study show that (a) appropriate emotions lead to higher perceived believability, (b) the notion of believability is closely correlated with two major socio-cognitive variables, namely competence and warmth, and (c) considering an agent believable is different from having a human-like attitude toward it. Finally, a primacy of emotional behavior over embodiment when judging believability is also hypothesized from the free responses given by the participants of this experiment.


Proceedings of the 4th International Workshop on Human Behavior Understanding, Volume 8212 | 2013

MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

Radoslaw Niewiadomski; Maurizio Mancini; Tobias Baur; Giovanna Varni; Harry J. Griffin; Min S. H. Aung

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter with a focus on full-body movements and different laughter types. It contains both induced and interactive laughs from human triads. In total, we collected 500 laughter episodes from 16 participants. The data consist of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data. In this paper we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and synchronization between different independent sources of data. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of the analysis of nonverbal behavior patterns in laughter.
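
One common approach to synchronizing such independent sources (the paper names the problem; the specific technique below is an assumption) is to resample each stream onto a shared clock:

```python
# Minimal sketch, assuming per-sample timestamps for each stream: align
# independently recorded streams by interpolating onto a common time base.
import numpy as np

def resample(timestamps, values, common_clock):
    """Linear interpolation of one stream onto a shared clock."""
    return np.interp(common_clock, timestamps, values)

audio_t = np.array([0.00, 0.01, 0.02, 0.03])     # 100 Hz stream
audio_v = np.array([0.0, 0.4, 0.1, -0.3])
mocap_t = np.array([0.000, 0.033, 0.066])        # ~30 Hz stream
mocap_v = np.array([1.00, 1.02, 1.05])

clock = np.arange(0.0, 0.03, 0.005)              # shared 200 Hz clock
print(resample(audio_t, audio_v, clock))
print(resample(mocap_t, mocap_v, clock))
```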


International Conference on Multimodal Interfaces | 2014

Rhythmic Body Movements of Laughter

Radoslaw Niewiadomski; Maurizio Mancini; Yu Ding; Catherine Pelachaud; Gualtiero Volpe

In this paper we focus on three aspects of multimodal expressions of laughter. First, we propose a procedural method to synthesize rhythmic body movements of laughter based on spectral analysis of laughter episodes. For this purpose, we analyze laughter body motions from motion capture data and reconstruct them with appropriate harmonics. We then reduce the parameter space to two dimensions. These are the inputs of the actual model, which generates a continuum of rhythmic laughter body movements. In the paper, we also propose a method to integrate the rhythmic body movements generated by our model with other synthesized expressive cues of laughter, such as facial expressions and additional body movements. Finally, we present a real-time human-virtual character interaction scenario in which the virtual character applies our model to respond to the human’s laughter in real time.
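
A minimal sketch of the spectral-analysis step, using a synthetic one-dimensional torso-displacement signal in place of real motion capture data:

```python
# Minimal sketch, assuming a 1-D torso-displacement signal: estimate the
# dominant laughter "pulse" frequency by FFT and resynthesize the movement
# from its strongest harmonics, as the abstract describes in outline.
import numpy as np

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)
# Synthetic stand-in for motion-capture data: ~5 Hz pulses plus noise
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 10 * t) \
         + 0.1 * np.random.default_rng(0).standard_normal(t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

# Keep the K strongest harmonics and zero out the rest
K = 2
keep = np.argsort(np.abs(spectrum))[-K:]
filtered = np.zeros_like(spectrum)
filtered[keep] = spectrum[keep]
reconstruction = np.fft.irfft(filtered, n=signal.size)

print("dominant frequencies (Hz):", np.round(freqs[keep], 1))
```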


International Conference on 3D Web Technology | 2011

Cross-media agent platform

Radoslaw Niewiadomski; Mohammad Obaid; Elisabetta Bevacqua; Julian Looser; Le Quoc Anh; Catherine Pelachaud

We have developed a general-purpose, modular architecture for an embodied conversational agent (ECA). Our agent is able to communicate using verbal and nonverbal channels such as gaze, facial expressions, and gestures. Our architecture follows the SAIBA framework, which defines a three-step process and communication protocols. In our implementation of the SAIBA architecture we focus on flexibility, and we introduce different levels of customization. In particular, our system is able to display the same communicative intention with different embodiments, be it a virtual agent or a robot. Moreover, our framework is independent of the animation player technology. Agent animations can be displayed across different media, such as a web browser, virtual reality, or augmented reality. In this paper we present our agent architecture and its main features.
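
Schematically, the SAIBA intent-behavior-realization pipeline with swappable embodiments might look like the sketch below; the class and function names are illustrative, not the platform's actual API.

```python
# Schematic sketch of the SAIBA three-stage pipeline the abstract refers to
# (intent planning -> behavior planning -> realization); all names here are
# illustrative stand-ins, not the platform's actual API.
from typing import Protocol

class Realizer(Protocol):
    def realize(self, behaviors: list[str]) -> None: ...

class VirtualAgentRealizer:
    def realize(self, behaviors: list[str]) -> None:
        print("animating avatar:", behaviors)

class RobotRealizer:
    def realize(self, behaviors: list[str]) -> None:
        print("sending motor commands:", behaviors)

def plan_intent(goal: str) -> str:
    return f"<fml><performative type='{goal}'/></fml>"   # FML-like intent

def plan_behavior(fml: str) -> list[str]:
    # Map communicative intent to multimodal signals (BML-like step)
    return ["gaze(listener)", "smile(0.6)", "gesture(beat)"] if "greet" in fml else []

# The same intent drives different embodiments by swapping the realizer:
for realizer in (VirtualAgentRealizer(), RobotRealizer()):
    realizer.realize(plan_behavior(plan_intent("greet")))
```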

Collaboration


Dive into Radoslaw Niewiadomski's collaborations.

Top Co-Authors

Catherine Pelachaud

Centre national de la recherche scientifique

Magalie Ochs

Aix-Marseille University
