Ginevra Castellano
Uppsala University
Publications
Featured research published by Ginevra Castellano.
affective computing and intelligent interaction | 2007
Ginevra Castellano; Santiago D. Villalba; Antonio Camurri
We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use non-propositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally, we show and interpret the recognition rates for both proposals using different classification algorithms.
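A minimal sketch of how such movement-quality cues might be computed, assuming a gesture is available as a sequence of 2-D hand positions sampled at a fixed frame rate; the cue definitions below are illustrative stand-ins, not the authors' exact expressivity measures.

```python
import numpy as np

def movement_qualities(trajectory: np.ndarray, fps: float = 25.0) -> dict:
    """Illustrative motion cues for one gesture; trajectory is a (T, 2) array."""
    dt = 1.0 / fps
    velocity = np.diff(trajectory, axis=0) / dt   # (T-1, 2) frame-wise velocity
    speed = np.linalg.norm(velocity, axis=1)      # tangential speed per frame
    accel = np.diff(speed) / dt                   # tangential acceleration
    jerk = np.diff(accel) / dt                    # change of acceleration
    return {
        "amplitude": float(np.ptp(trajectory, axis=0).max()),  # spatial extent
        "mean_speed": float(speed.mean()),
        # fluid (smooth) movement has low jerk, so invert its variance
        "fluidity": float(1.0 / (1.0 + np.var(jerk))),
    }
```

Per-gesture indicators of this kind could then feed either the direct time-series classification or the dynamics model the abstract mentions.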
human-robot interaction | 2011
Jyotirmay Sanghvi; Ginevra Castellano; Iolanda Leite; André Pereira; Peter W. McOwan; Ana Paiva
The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario as the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion to detect engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, thus making our approach suitable for integration into an affect recognition system for a game companion in a real world scenario.
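A hypothetical sketch of lateral-view postural features, assuming a binary silhouette mask has already been segmented from each video frame; the three features are stand-ins for the paper's vision-based postural descriptors, not its actual implementation.

```python
import numpy as np

def postural_features(mask: np.ndarray) -> dict:
    """mask: (H, W) boolean silhouette of the child seen from the side."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    mid = (ys.min() + ys.max()) // 2
    return {
        # how much of the bounding box the body fills: a "closed" posture scores high
        "contraction_index": float(mask.sum() / (height * width)),
        # horizontal offset of upper vs lower body: leaning toward or away from the board
        "lean": float(xs[ys < mid].mean() - xs[ys >= mid].mean()),
        # a low height/width ratio suggests a slumped posture
        "slouch": float(height / width),
    }
```

Tracking such values over a game turn would give the kind of postural-behaviour patterns the recognition models are trained on.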
Affect and Emotion in Human-Computer Interaction | 2008
Ginevra Castellano; Loic Kessous; George Caridakis
In this paper we present a multimodal approach for the recognition of eight emotions. Our approach integrates information from facial expressions, body movement and gestures and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. Firstly, individual classifiers were trained for each modality. Next, data were fused at the feature level and the decision level. Fusing the multimodal data resulted in a large increase in the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10% when compared to the most successful unimodal system. Further, the fusion performed at the feature level provided better results than the one performed at the decision level.
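A minimal sketch of the two fusion strategies with a Gaussian Bayesian classifier; the per-modality feature arrays are placeholders for the facial, gestural and speech descriptors, and GaussianNB stands in for whichever Bayesian classifier the authors used.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def feature_level_fusion(train_mods, y_train, test_mods):
    """Concatenate per-modality features (each (n, d_m)) and train one classifier."""
    clf = GaussianNB().fit(np.hstack(train_mods), y_train)
    return clf.predict(np.hstack(test_mods))

def decision_level_fusion(train_mods, y_train, test_mods):
    """Train one classifier per modality and sum the log-posteriors."""
    log_post = sum(
        GaussianNB().fit(X_tr, y_train).predict_log_proba(X_te)
        for X_tr, X_te in zip(train_mods, test_mods)
    )
    return np.unique(y_train)[np.argmax(log_post, axis=1)]  # classes_ are sorted
```

Summing log-posteriors is one common decision-level rule among several (voting and product rules are alternatives).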
artificial intelligence applications and innovations | 2007
George Caridakis; Ginevra Castellano; Loic Kessous; Amaryllis Raouzaiou; Lori Malatesta; Stylianos Asteriadis; Kostas Karpouzis
In this paper we present a multimodal approach for the recognition of eight emotions that integrates information from facial expressions, body movement and gestures and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First, individual classifiers were trained for each modality. Then, data were fused at the feature level and the decision level. Fusing multimodal data substantially increased the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10% with respect to the most successful unimodal system. Further, fusion performed at the feature level showed better results than fusion performed at the decision level.
Journal on Multimodal User Interfaces | 2010
Loic Kessous; Ginevra Castellano; George Caridakis
In this paper a study on multimodal automatic emotion recognition during a speech-based interaction is presented. A database was constructed consisting of people pronouncing a sentence in a scenario where they interacted with an agent using speech. Ten people pronounced a sentence corresponding to a command while making 8 different emotional expressions. Gender was equally represented, with speakers of several different native languages including French, German, Greek and Italian. Facial expression, gesture and acoustic analysis of speech were used to extract features relevant to emotion. For the automatic classification of unimodal, bimodal and multimodal data, a system based on a Bayesian classifier was used. After performing an automatic classification of each modality, the different modalities were combined using a multimodal approach. Fusion of the modalities at the feature level (before running the classifier) and at the results level (combining the results of the classifiers for each modality) were compared. Fusing the multimodal data resulted in a large increase in the recognition rates in comparison to the unimodal systems: the multimodal approach increased the recognition rate by more than 10% when compared to the most successful unimodal system. Bimodal emotion recognition based on all combinations of the modalities (i.e., ‘face-gesture’, ‘face-speech’ and ‘gesture-speech’) was also investigated. The results show that the best pairing is ‘gesture-speech’. Using all three modalities resulted in a 3.3% classification improvement over the best bimodal results.
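A toy, self-contained loop over all unimodal, bimodal and trimodal combinations in the same spirit; the synthetic random features are meaningless placeholders for the real face/gesture/speech descriptors, so the printed accuracies bear no relation to the paper's results.

```python
from itertools import combinations
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes = 80, 8
y = rng.integers(0, n_classes, n)
# weakly class-dependent synthetic features, one array per modality
mods = {name: rng.normal(size=(n, d)) + y[:, None]
        for name, d in [("face", 10), ("gesture", 6), ("speech", 12)]}

tr, te = train_test_split(np.arange(n), test_size=0.25, random_state=0)
for r in (1, 2, 3):                      # unimodal, bimodal, trimodal
    for combo in combinations(mods, r):
        X_tr = np.hstack([mods[m][tr] for m in combo])
        X_te = np.hstack([mods[m][te] for m in combo])
        acc = accuracy_score(y[te], GaussianNB().fit(X_tr, y[tr]).predict(X_te))
        print("+".join(combo), f"{acc:.3f}")
```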
international conference on multimodal interfaces | 2009
Ginevra Castellano; André Pereira; Iolanda Leite; Ana Paiva; Peter W. McOwan
Affect sensitivity is of the utmost importance for a robot companion to be able to display socially intelligent behaviour, a key requirement for sustaining long-term interactions with humans. This paper explores a naturalistic scenario in which children play chess with the iCat, a robot companion. A person-independent, Bayesian approach to detect the user's engagement with the iCat robot is presented. Our framework models both causes and effects of engagement: features related to the user's non-verbal behaviour, the task and the companion's affective reactions are identified to predict the children's level of engagement. An experiment was carried out to train and validate our model. Results show that our approach based on multimodal integration of task and social interaction-based features outperforms those based solely on non-verbal behaviour or contextual information (94.79% vs. 93.75% and 78.13%).
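A hedged sketch of what a person-independent comparison of feature groups could look like, using leave-one-subject-out cross-validation so that no child appears in both training and test data; the feature groups and labels below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n = 120
subjects = np.repeat(np.arange(5), n // 5)   # 5 children -> person-independent folds
engaged = rng.integers(0, 2, n)              # placeholder binary engagement labels

groups = {
    "non-verbal": rng.normal(size=(n, 8)),   # e.g. posture and gaze features
    "contextual": rng.normal(size=(n, 4)),   # e.g. task state, companion reactions
}
groups["combined"] = np.hstack([groups["non-verbal"], groups["contextual"]])

for name, X in groups.items():
    acc = cross_val_score(GaussianNB(), X, engaged,
                          groups=subjects, cv=LeaveOneGroupOut()).mean()
    print(f"{name}: {acc:.3f}")
```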
Journal on Multimodal User Interfaces | 2010
Ginevra Castellano; Iolanda Leite; André Pereira; Carlos Martinho; Ana Paiva; Peter W. McOwan
Affect sensitivity is an important requirement for artificial companions to be capable of engaging in social interaction with human users. This paper provides a general overview of some of the issues arising from the design of an affect recognition framework for artificial companions. Limitations and challenges are discussed with respect to other capabilities of companions, and a real-world scenario where an iCat robot plays chess with children is presented. In this scenario, the affective states that a robot companion should be able to recognise are identified, and the non-verbal behaviours that are affected by the occurrence of these states in the children are investigated. The experimental results aim to provide the foundation for the design of an affect recognition system for a game companion: in this interaction scenario, children tend to look at the iCat and smile more when they experience a positive feeling and are engaged with the iCat.
human-robot interaction | 2012
Iolanda Leite; Ginevra Castellano; André Pereira; Carlos Martinho; Ana Paiva
The idea of autonomous social robots capable of assisting us in our daily lives is becoming more real every day. However, there are still many open issues regarding the social capabilities that those robots should have in order to make daily interactions with humans more natural. For example, the role of affective interactions is still unclear. This paper presents an ethnographic study conducted in an elementary school where 40 children interacted with a social robot capable of recognising and responding empathically to some of the children's affective states. The findings suggest that the robot's empathic behaviour positively affected how children perceived the robot. However, the empathic behaviours should be selected carefully, at the risk of having the opposite effect. The target application scenario and the particular preferences of children seem to influence the “degree of empathy” that social robots should be endowed with.
Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots | 2009
Christopher E. Peters; Ginevra Castellano; Sara de Freitas
Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to a number of related, but different concepts. This paper is a first attempt at disentangling those concepts, with relevance to both human-human and human-machine interaction modelling.
International Journal of Social Robotics | 2014
Iolanda Leite; Ginevra Castellano; André Pereira; Carlos Martinho; Ana Paiva
As a great number of robotic products are entering people’s lives, the question of how they can behave in order to sustain long-term interactions with users becomes increasingly relevant. In this paper, we present an empathic model for social robots that aim to interact with children for extended periods of time. The application of this model to a scenario where a social robot plays chess with children is described. To evaluate the proposed model, we conducted a long-term study in an elementary school and measured children’s perception of social presence, engagement and social support.