Publication


Featured research published by Aryel Beck.


Robot and Human Interactive Communication | 2010

Towards an Affect Space for robots to display emotional body language

Aryel Beck; Lola Cañamero; Kim A. Bard

In order for robots to be socially accepted and generate empathy, it is necessary that they display rich emotions. For robots such as Nao, body language is the best medium available, given their inability to convey facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should significantly improve its sociability. This research investigates the creation of an Affect Space for the generation of emotional body language to be displayed by robots. To create an Affect Space for body language, one has to establish the contribution of the different positions of the joints to the emotional expression. The experiment reported in this paper investigated the effect of varying a robot's head position on the interpretation, Valence, Arousal, and Stance of emotional key poses. It was found that participants were better than chance level at interpreting the key poses. This finding confirms that body language is an appropriate medium for robots to express emotions. Moreover, the results of this study support the conclusion that Head Position is an important body posture variable. Head Position up increased correct identification for some emotion displays (pride, happiness, and excitement), whereas Head Position down increased correct identification for other displays (anger, sadness). Fear, however, was identified well regardless of Head Position. Head up was always evaluated as more highly Aroused than Head straight or down. Evaluations of Valence (degree of negativity to positivity) and Stance (degree to which the robot was aversive versus approaching), however, depended on both Head Position and the emotion displayed. The effects of varying this single body posture variable were complex.
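The reported link between head position and perceived Arousal lends itself to a simple parameterisation. The following is a minimal sketch, not the authors' implementation: a hypothetical function that offsets a key pose's head pitch from a target arousal value, using NAOqi-style joint names (where a negative HeadPitch tilts the head up) purely as an assumed convention.

```python
# Illustrative sketch only: a hypothetical mapping from an arousal value to a
# head pitch offset, reflecting the reported finding that a raised head reads
# as higher Arousal. The joint name and sign convention follow NAOqi
# ("HeadPitch", radians, negative = up) but are assumptions, not the authors'
# implementation.

def head_pitch_for_arousal(arousal: float, max_offset_rad: float = 0.3) -> float:
    """Map arousal in [-1, 1] to a HeadPitch offset in radians.

    arousal = +1 -> head tilted up by max_offset_rad (negative pitch)
    arousal = -1 -> head tilted down by max_offset_rad (positive pitch)
    """
    arousal = max(-1.0, min(1.0, arousal))    # clamp to the affect range
    return -arousal * max_offset_rad          # negative pitch tilts the head up


def apply_to_key_pose(key_pose: dict, arousal: float) -> dict:
    """Return a copy of a key pose (joint name -> angle) with the head
    re-oriented according to the requested arousal level."""
    pose = dict(key_pose)
    pose["HeadPitch"] = pose.get("HeadPitch", 0.0) + head_pitch_for_arousal(arousal)
    return pose


if __name__ == "__main__":
    neutral_pose = {"HeadPitch": 0.0, "LShoulderPitch": 1.4, "RShoulderPitch": 1.4}
    print(apply_to_key_pose(neutral_pose, arousal=0.8))   # head raised: e.g. excitement
    print(apply_to_key_pose(neutral_pose, arousal=-0.8))  # head lowered: e.g. sadness
```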


KSII Transactions on Internet and Information Systems | 2012

Emotional body language displayed by artificial agents

Aryel Beck; Brett Stevens; Kim A. Bard; Lola Cañamero

Complex and natural social interaction between artificial agents (computer-generated or robotic) and humans necessitates the display of rich emotions in order to be believable, socially relevant, and accepted, and to generate the natural emotional responses that humans show in the context of social interaction, such as engagement or empathy. Whereas some robots use faces to display (simplified) emotional expressions, for other robots such as Nao, body language is the best medium available given their inability to convey facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should significantly improve naturalness. This research investigates the creation of an affect space for the generation of emotional body language to be displayed by humanoid robots. To do so, three experiments investigating how emotional body language displayed by agents is interpreted were conducted. The first experiment compared the interpretation of emotional body language displayed by humans and agents. The results showed that emotional body language displayed by an agent or a human is interpreted in a similar way in terms of recognition. Following these results, emotional key poses were extracted from an actor's performances and implemented in a Nao robot. The interpretation of these key poses was validated in a second study where it was found that participants were better than chance at interpreting the key poses displayed. Finally, an affect space was generated by blending key poses and validated in a third study. Overall, these experiments confirmed that body language is an appropriate medium for robots to display emotions and suggest that an affect space for body expressions can be used to improve the expressiveness of humanoid robots.
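The affect space described above is built by blending key poses. As a purely illustrative sketch of that idea (not the authors' code), the snippet below places a few hypothetical key poses at assumed coordinates in a valence-arousal plane and produces an intermediate expression by inverse-distance-weighted interpolation of joint angles.

```python
# Minimal sketch (not the authors' code): generating an intermediate
# expression by blending emotional key poses. Each key pose maps joint names
# to angles and sits at an assumed coordinate in a 2-D affect space
# (valence, arousal); a new expression is the distance-weighted average of
# the key poses. Joint names, angles, and coordinates are illustrative.
import math

KEY_POSES = {
    # (valence, arousal): {joint: angle in radians}
    ( 0.8,  0.8): {"HeadPitch": -0.3, "LShoulderPitch": 0.8},   # excitement
    ( 0.8, -0.3): {"HeadPitch": -0.1, "LShoulderPitch": 1.2},   # pride
    (-0.8, -0.8): {"HeadPitch":  0.4, "LShoulderPitch": 1.6},   # sadness
    (-0.8,  0.6): {"HeadPitch":  0.3, "LShoulderPitch": 1.0},   # anger
}

def blend_pose(valence: float, arousal: float) -> dict:
    """Inverse-distance-weighted blend of the key poses at (valence, arousal)."""
    weights, total = {}, 0.0
    for (v, a), pose in KEY_POSES.items():
        d = math.hypot(valence - v, arousal - a)
        if d < 1e-6:                  # exactly on a key pose: return it directly
            return dict(pose)
        weights[(v, a)] = 1.0 / d
        total += 1.0 / d
    blended = {}
    for coord, pose in KEY_POSES.items():
        for joint, angle in pose.items():
            blended[joint] = blended.get(joint, 0.0) + angle * weights[coord] / total
    return blended

print(blend_pose(0.0, 0.0))  # a neutral-ish pose between the key expressions
```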


International Journal of Social Robotics | 2013

Interpretation of Emotional Body Language Displayed by a Humanoid Robot: A Case Study with Children

Aryel Beck; Lola Cañamero; Antoine Hiolle; Luisa Damiano; Piero Cosi; Fabio Tesser; Giacomo Sommavilla

The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.


Robot and Human Interactive Communication | 2012

Children's adaptation in multi-session interaction with a humanoid robot

Marco Nalin; Ilaria Baroni; Ivana Kruijff-Korbayová; Lola Cañamero; Matthew Lewis; Aryel Beck; Heriberto Cuayáhuitl; Alberto Sanna

This work presents preliminary observations from a study of children (N=19, aged 5-12) interacting in multiple sessions with a humanoid robot in a scenario involving game activities. The main purpose of the study was to see how their perception of the robot, their engagement, and their enjoyment of the robot as a companion evolve across multiple interactions, separated by one to two weeks. However, an interesting phenomenon was observed during the experiment: most of the children soon adapted to the behaviors of the robot, in terms of speech timing, speed and tone, verbal input formulation, nodding, gestures, etc. We describe the experimental setup and the system, and our observations and preliminary analysis results, which open interesting questions for further research.


Archive | 2011

An Event-Based Conversational System for the Nao Robot

Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Aryel Beck; Piero Cosi; Heriberto Cuayáhuitl; Tomas Dekens; Valentin Enescu; Antoine Hiolle; Bernd Kiefer; Hichem Sahli; Marc Schröder; Giacomo Sommavilla; Fabio Tesser; Werner Verhelst

Conversational systems play an important role in scenarios without a keyboard, e.g., talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel, to interpret and produce synchronized behaviours. The development of such systems involves integrating potentially many components and ensuring complex interaction and synchronization among them. Most work in spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly-coupled or tightly-coupled architectures). The latter are more promising for building adaptive systems, which is one of the goals of contemporary research systems.
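To make the contrast with a pipeline concrete, here is a minimal, purely illustrative sketch of an event-based organisation (not the ALIZ-E system itself): verbal and non-verbal input components run in parallel and reach a dialogue manager through a publish/subscribe event bus; all component and event names are assumptions.

```python
# Illustrative sketch only: a minimal event bus in which verbal and non-verbal
# processing components run in parallel threads and communicate by publishing
# and subscribing to named events, instead of being chained in a fixed
# pipeline. Component and event names are hypothetical.
import threading

class EventBus:
    def __init__(self):
        self._subscribers = {}          # event name -> list of callbacks
        self._lock = threading.Lock()

    def subscribe(self, event: str, callback):
        with self._lock:
            self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event: str, payload):
        with self._lock:
            callbacks = list(self._subscribers.get(event, []))
        for cb in callbacks:
            cb(payload)

bus = EventBus()

# A dialogue manager reacting to both verbal and non-verbal input events.
def dialogue_manager(payload):
    print(f"[dialogue] reacting to: {payload}")

bus.subscribe("speech_recognized", dialogue_manager)
bus.subscribe("gesture_detected", dialogue_manager)

# Two input components running in parallel, each publishing its own events.
def speech_component():
    bus.publish("speech_recognized", {"text": "hello robot"})

def vision_component():
    bus.publish("gesture_detected", {"gesture": "wave"})

threads = [threading.Thread(target=speech_component),
           threading.Thread(target=vision_component)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```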


International Conference on Social Robotics | 2011

Children interpretation of emotional body language displayed by a robot

Aryel Beck; Lola Cañamero; Luisa Damiano; Giacomo Sommavilla; Fabio Tesser; Piero Cosi

Previous results show that adults are able to interpret different key poses displayed by the robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy), valence (positive or negative) and stance (approaching or avoiding) whereas moving the head up produces an increase along these dimensions [1]. Hence, changing the head position during an interaction should send intuitive signals which could be used during an interaction. The ALIZ-E target group is children between the ages of 8 and 11. Existing results suggest that they would be able to interpret human emotional body language [2, 3]. Based on these results, an experiment was conducted to test whether the results of [1] can be applied to children. If so, body postures and head position could be used to convey emotions during an interaction.


Human-Robot Interaction | 2013

Multimodal child-robot interaction: building social bonds

Tony Belpaeme; Paul Baxter; Robin Read; Rachel Wood; Heriberto Cuayáhuitl; Bernd Kiefer; Stefania Racioppa; Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Valentin Enescu; Rosemarijn Looije; Mark A. Neerincx; Yiannis Demiris; Raquel Ros-Espinoza; Aryel Beck; Lola Cañamero; Antoine Hiolle; Matthew Lewis; Ilaria Baroni; Marco Nalin; Piero Cosi; Giulio Paci; Fabio Tesser; Giacomo Sommavilla; Rémi Humbert


Proceedings of the 3rd International Workshop on Affective Interaction in Natural Environments | 2010

Interpretation of emotional body language displayed by robots

Aryel Beck; Antoine Hiolle; Alexandre Mazel; Lola Cañamero


Cognitive Science | 2013

Using Perlin Noise to Generate Emotional Expressions in a Robot

Aryel Beck; Antoine Hiolle; Lola Cañamero


Archive | 2008

Extending the media equation to emotions: an approach for assessing realistic emotional characters

Aryel Beck; Brett Stevens; Kim A. Bard

Collaboration


Dive into Aryel Beck's collaborations.

Top Co-Authors

Lola Cañamero (University of Hertfordshire)
Antoine Hiolle (University of Hertfordshire)
Fabio Tesser (National Research Council)
Piero Cosi (National Research Council)
Kim A. Bard (University of Portsmouth)
Valentin Enescu (Vrije Universiteit Brussel)
Brett Stevens (University of Portsmouth)
Luisa Damiano (University of Hertfordshire)