Angelo Cafaro
Université Paris-Saclay
Publications
Featured research published by Angelo Cafaro.
Intelligent Virtual Agents | 2009
Angelo Cafaro; Raffaele Gaito; Hannes Högni Vilhjálmsson
In realistic-looking game environments it is important that virtual characters behave naturally. Our goal is to produce natural-looking gaze behavior for animated agents and avatars that are simply idling. We studied people standing and waiting, as well as people walking down a shopping street. From these observations we built a demo with the CADIA Populus multi-agent social simulation platform.
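Purely as an illustration of the kind of controller such observations could inform (not code from the paper), the following Python sketch generates idle gaze shifts with hypothetical timing parameters and target names:

```python
# Minimal illustrative sketch (not from the paper): an idle-gaze controller that
# periodically shifts an agent's gaze between nearby points of interest, with
# hypothetical timing parameters loosely inspired by observing idling people.
import random

class IdleGazeController:
    def __init__(self, targets, mean_hold=2.0, hold_jitter=1.5):
        self.targets = targets          # hypothetical points of interest in the scene
        self.mean_hold = mean_hold      # average seconds to hold a gaze target
        self.hold_jitter = hold_jitter  # random variation added to the hold time
        self.current = None
        self.time_left = 0.0

    def update(self, dt):
        """Advance the controller by dt seconds and return the current gaze target."""
        self.time_left -= dt
        if self.time_left <= 0.0:
            self.current = random.choice(self.targets)
            self.time_left = self.mean_hold + random.uniform(0.0, self.hold_jitter)
        return self.current

# Usage: tick the controller from the animation loop.
gaze = IdleGazeController(targets=["passerby", "shop_window", "ground", "horizon"])
for _ in range(5):
    print(gaze.update(dt=1.0))
```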
Intelligent Virtual Agents | 2015
Brian Ravenet; Angelo Cafaro; Beatrice Biancardi; Magalie Ochs; Catherine Pelachaud
In this paper we propose a computational model for the real-time generation of nonverbal behaviors supporting the expression of interpersonal attitudes in turn-taking strategies and group formation in multi-party conversations among embodied conversational agents. Starting from the desired attitudes that an agent aims to express towards every other participant, our model produces the nonverbal behavior that should be exhibited in real time to convey those attitudes, while managing the group formation and attempting to accomplish the agent's own turn-taking strategy. We also propose an evaluation protocol for similar multi-agent configurations and conducted a study following this protocol to evaluate our model. Results showed that subjects correctly recognized the attitudes expressed by the agents through the nonverbal behavior and turn-taking strategies generated by our system.
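As a rough sketch of the kind of mapping such a model performs (attitude per participant to coarse nonverbal behavior plus a turn-taking choice), the Python below is purely illustrative; all values, thresholds and behavior names are invented, not the authors' implementation:

```python
# Hypothetical sketch: map a desired interpersonal attitude toward each participant
# to coarse nonverbal behaviour and a turn-taking choice. Not the paper's model.
def plan_behaviour(attitudes, wants_turn):
    """attitudes: dict mapping participant -> value in [-1, 1] (hostile..friendly)."""
    plan = {}
    for other, attitude in attitudes.items():
        plan[other] = {
            "smile": attitude > 0.3,             # smile at participants we like
            "gaze_share": 0.5 + 0.4 * attitude,  # friendlier -> more gaze
            "distance_m": 1.2 - 0.3 * attitude,  # friendlier -> stand closer
        }
    # crude, invented turn-taking strategy: claim the floor more aggressively
    # when the agent holds a negative attitude toward someone in the group
    if not wants_turn:
        strategy = "listen"
    elif min(attitudes.values()) < -0.3:
        strategy = "interrupt"
    else:
        strategy = "wait_for_pause"
    return plan, strategy

plan, strategy = plan_behaviour({"agent_b": 0.8, "agent_c": -0.5}, wants_turn=True)
print(strategy, plan)
```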
Intelligent Virtual Agents | 2014
Angelo Cafaro; Hannes Högni Vilhjálmsson; Timothy W. Bickmore; Dirk Heylen; Catherine Pelachaud
The SAIBA framework proposes two interface languages to represent separately an intelligent agent's communicative functions (or intents) and the multimodal behavior that determines how those functions are accomplished in a particular realization. For the functional level, the Function Markup Language (FML) has been proposed. In this paper we summarize the current status of FML as discussed by the SAIBA community, underline the major issues that need to be addressed to obtain a unified FML specification, point out further issues that we identified, and propose a new unified FML specification that addresses many of them.
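The SAIBA separation of intent from realization can be illustrated with a small, hypothetical sketch; the function names and the mapping below are illustrative only and do not reproduce actual FML or BML syntax:

```python
# Hypothetical sketch of the SAIBA idea: communicative functions (intent level) are
# planned first and then expanded into concrete multimodal behaviours (realization
# level). Function names and behaviours are invented for illustration.
FUNCTION_TO_BEHAVIOUR = {
    "greet":     ["gesture: wave", "gaze: at_interlocutor", "speech: 'Hello!'"],
    "emphasize": ["gesture: beat", "head: nod"],
    "turn_take": ["gaze: away_then_back", "posture: lean_forward"],
}

def realize(intents):
    """Behaviour-planning step: expand each communicative function into behaviours."""
    return [b for intent in intents for b in FUNCTION_TO_BEHAVIOUR.get(intent, [])]

print(realize(["greet", "emphasize"]))
```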
ACM Transactions on Computer-Human Interaction | 2016
Angelo Cafaro; Hannes Högni Vilhjálmsson; Timothy W. Bickmore
In greeting encounters, first impressions of personality and attitude are quickly formed and may determine important relational decisions, such as the likelihood and frequency of subsequent encounters. An anthropomorphic user interface is not immune to these judgments, specifically when exhibiting social interaction skills in public spaces. A favorable impression may help engage users in interaction and attain acceptance for long-term interactions. We present three studies implementing a model of first impressions for initiating user interactions with an anthropomorphic museum guide agent with socio-relational skills. We focus on nonverbal behavior exhibiting personality and interpersonal attitude. In two laboratory studies, we demonstrate that impressions of an agent's personality are quickly formed based on proximity, whereas interpersonal attitude is conveyed through smile and gaze. We also found that interpersonal attitude has greater impact than personality on the user's decision to spend time with the agent. These findings are then applied to a museum guide agent exhibited at the Boston Museum of Science. In this field study, we show that employing our model increases the number of visitors engaging in interaction.
Toward Robotic Socially Believable Behaving Systems (II) | 2016
Chloé Clavel; Angelo Cafaro; Sabrina Campano; Catherine Pelachaud
Embodied conversational agents are capable of carrying on a face-to-face interaction with users. Their use is increasing substantially in numerous applications ranging from tutoring systems to ambient assisted living. In such applications, one of the main challenges is to keep the user engaged in the interaction with the agent. The present chapter provides an overview of the scientific issues underlying the engagement paradigm, including a review of methodologies for assessing user engagement in human-agent interaction. It presents three studies conducted within the Greta/VIB platforms. These studies aimed at designing engaging agents using different interaction strategies (alignment and dynamical coupling) and the expression of interpersonal attitudes in multi-party interactions.
Intelligent Virtual Agents | 2014
Brian Ravenet; Angelo Cafaro; Magalie Ochs; Catherine Pelachaud
Embodied Conversational Agents have been widely used to simulate dyadic interactions with users. We explore the expression of interpersonal attitudes in simulated group conversations. We present a model that allows agents to exhibit a variety of nonverbal behaviors (e.g., gestures, facial expressions, proxemics) depending on the interpersonal attitudes that they want to express within a group while talking. The model combines corpus-based and theory-based approaches, and we present a preliminary implementation of this model.
Intelligent Virtual Agents | 2016
Florian Pecune; Angelo Cafaro; Magalie Ochs; Catherine Pelachaud
In this paper we evaluate a model of social decision-making for virtual agents. The model computes the social attitude of a virtual agent given its social role during the interaction and its social relation toward the interactant. The resulting attitude influences the agent’s social goals and therefore determines the decisions made by the agent in terms of actions and communicative intentions to accomplish. We conducted an empirical study in the context of virtual tutor-child interaction where participants evaluated the tutor’s perceived social attitude towards the child while the tutor’s social role and relation were manipulated by our model. Results showed that both role and social relation have an influence on the agent’s perceived social attitude.
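A minimal, hypothetical sketch of this decision chain (role and relation yield an attitude, which biases action selection) is given below; the numeric biases, thresholds and action names are invented for illustration and are not the paper's model:

```python
# Hypothetical sketch: derive an expressed social attitude from the agent's role and
# its relation toward the interactant, then let that attitude bias action selection.
def social_attitude(role, relation):
    role_bias = {"tutor": 0.2, "peer": 0.0, "examiner": -0.2}[role]
    return max(-1.0, min(1.0, role_bias + relation))   # relation assumed in [-1, 1]

def choose_action(role, relation):
    attitude = social_attitude(role, relation)
    if attitude > 0.3:
        return "encourage"          # warm attitude -> supportive intention
    if attitude < -0.3:
        return "correct_strictly"   # cold attitude -> stricter intention
    return "give_neutral_hint"

print(choose_action(role="tutor", relation=0.5))   # -> "encourage"
```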
Motion in Games | 2016
Brian Ravenet; Elisabetta Bevacqua; Angelo Cafaro; Magalie Ochs; Catherine Pelachaud
Virtual reality and immersive experiences, which allow players to share the same virtual environment as the characters of a virtual world, have recently gained increasing interest. One of the challenges in building these immersive virtual worlds is to give the characters that populate them the ability to express behaviors that support the immersion. In this work, we propose a model capable of controlling and simulating a conversational group of social agents in an immersive environment. We describe this model, which was previously validated using a regular screen setting, and we present a study measuring whether users recognize the attitudes expressed by virtual agents through real-time generated nonverbal behavior animations in an immersive setting. Results mirrored those of the regular screen setting, providing further insights for improving players' experiences by integrating them into immersive simulated group conversations with characters that express different interpersonal attitudes.
KSII Transactions on Internet and Information Systems | 2016
Angelo Cafaro; Brian Ravenet; Magalie Ochs; Hannes Högni Vilhjálmsson; Catherine Pelachaud
In the everyday world people form small conversing groups where social interaction takes place, and much of this social behavior consists of managing interpersonal space (i.e., proxemics) and group formation, signaling attention to others (i.e., through gaze behavior), and expressing attitudes such as friendliness by smiling, getting closer through increased engagement and intimacy, and welcoming newcomers. Many real-time interactive systems feature virtual anthropomorphic characters in order to simulate conversing groups and add plausibility and believability to the simulated environments. However, only a few have dealt with autonomous behavior generation, and in those cases the agents' exhibited behavior should be evaluated by users in terms of appropriateness, believability, and conveyed meaning (e.g., attitudes). In this article we present an integrated intelligent interactive system for generating believable nonverbal behavior exhibited by virtual agents in small simulated group conversations. The produced behavior supports group formation management and the expression of interpersonal attitudes (friendly vs. unfriendly), both among the agents in the group (i.e., in-group attitude) and towards an approaching user in an avatar-based interaction (out-group attitude). A user study investigating the effects of these attitudes on users' social presence evaluations and proxemics behavior (with their avatar) in a three-dimensional virtual city environment is presented. We divided the study into two trials according to the task assigned to users: joining a conversing group, and reaching a target destination behind the group. Results showed that the out-group attitude had a major impact on social presence evaluations in both trials, whereby friendly groups were perceived as more socially rich. The user's proxemics behavior depended on both the out-group and in-group attitudes expressed by the agents. Implications of these results for the design and implementation of similar intelligent interactive systems for the autonomous generation of agents' multimodal behavior are briefly discussed.
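As a hypothetical illustration of one small piece of such a system, the sketch below decides whether a conversing group opens its formation for an approaching user depending on the out-group attitude; the distances, attitude scale and geometry are assumptions, not the article's implementation:

```python
# Hypothetical sketch: agents in a conversing group decide whether to open their
# circle for an approaching user, reacting from further away when the group's
# out-group attitude is friendlier. All numbers are invented for illustration.
import math

def should_open(agent_positions, user_position, out_group_attitude, notice_radius=3.0):
    """Return True if the group should step aside and welcome the newcomer."""
    cx = sum(x for x, _ in agent_positions) / len(agent_positions)
    cy = sum(y for _, y in agent_positions) / len(agent_positions)
    distance = math.hypot(user_position[0] - cx, user_position[1] - cy)
    # friendlier groups (attitude closer to +1) notice and react from further away
    return distance <= notice_radius * (0.5 + 0.5 * out_group_attitude)

print(should_open([(0, 0), (1, 0), (0.5, 1)], user_position=(2.0, 2.0),
                  out_group_attitude=0.9))
```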
Intelligent Virtual Agents | 2012
Angelo Cafaro; Hannes Högni Vilhjálmsson; Timothy W. Bickmore; Dirk Heylen; Kamilla R. Johannsdottir; Gunnar Steinn Valgarðsson