Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lola Cañamero is active.

Publication


Featured research published by Lola Cañamero.


Robot and Human Interactive Communication | 2010

Towards an Affect Space for robots to display emotional body language

Aryel Beck; Lola Cañamero; Kim A. Bard

In order for robots to be socially accepted and generate empathy, it is necessary that they display rich emotions. For robots such as Nao, body language is the best medium available, given their inability to convey facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should significantly improve its sociability. This research investigates the creation of an Affect Space for the generation of emotional body language to be displayed by robots. To create an Affect Space for body language, one has to establish the contribution of the different positions of the joints to the emotional expression. The experiment reported in this paper investigated the effect of varying a robot's head position on the interpretation, Valence, Arousal and Stance of emotional key poses. It was found that participants were better than chance level in interpreting the key poses. This finding confirms that body language is an appropriate medium for robots to express emotions. Moreover, the results of this study support the conclusion that Head Position is an important body posture variable. Head Position up increased correct identification for some emotion displays (pride, happiness, and excitement), whereas Head Position down increased correct identification for other displays (anger, sadness). Fear, however, was identified well regardless of Head Position. Head up was always evaluated as more highly Aroused than Head straight or down. Evaluations of Valence (degree of negativity to positivity) and Stance (degree to which the robot was aversive to approaching), however, depended on both Head Position and the emotion displayed. The effects of varying this single body posture variable were complex.
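
To make the posture variable concrete, here is a minimal sketch, not the authors' code, of how an emotional key pose can be represented as joint angles and varied along the head-position dimension studied in the paper. The joint names follow Nao's conventions, but every angle value below is invented for illustration.

    import copy

    # An invented "pride" key pose: joint name -> angle in radians.
    pride_key_pose = {
        "HeadPitch": 0.0,       # negative values tilt Nao's head up
        "LShoulderPitch": 1.2,
        "RShoulderPitch": 1.2,
        "LElbowRoll": -0.4,
        "RElbowRoll": 0.4,
    }

    def with_head_offset(pose, offset):
        """Return a copy of the pose with the head pitch shifted by offset."""
        variant = copy.deepcopy(pose)
        variant["HeadPitch"] += offset
        return variant

    head_up = with_head_offset(pride_key_pose, -0.3)   # read as more aroused
    head_down = with_head_offset(pride_key_pose, 0.3)  # read as less aroused

The study's finding that head-up versus head-down shifts identification in emotion-specific ways suggests such offsets should be tuned per display rather than applied uniformly.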


Proceedings of the 3rd International Workshop on Affective Interaction in Natural Environments | 2010

Interpretation of emotional body language displayed by robots

Aryel Beck; Antoine Hiolle; Alexandre Mazel; Lola Cañamero

In order for robots to be socially accepted and generate empathy, they must display emotions. For robots such as Nao, body language is the best medium available, as they do not have the ability to display facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should greatly improve its acceptance.

This research investigates the creation of an Affect Space [1] for the generation of emotional body language that could be displayed by robots. An Affect Space is generated by blending (i.e. interpolating between) different emotional expressions to create new ones. An Affect Space for body language based on the Circumplex Model of emotions [2] has been created.

The experiment reported in this paper investigated the perception of specific key poses from the Affect Space. The results suggest that this Affect Space for body expressions can be used to improve the expressiveness of humanoid robots.

In addition, early results of a pilot study are described. It revealed that context helps human subjects improve their recognition rate during a human-robot imitation game, and that this recognition in turn leads to a better outcome of the interactions.
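
The blending operation at the heart of such an Affect Space can be illustrated with a short sketch. This is a minimal reading of "interpolating between expressions", not the authors' implementation; the poses and the blend factor are invented.

    def blend_poses(pose_a, pose_b, alpha):
        """Linearly interpolate two key poses; alpha=0 gives pose_a, alpha=1 gives pose_b."""
        return {joint: (1 - alpha) * pose_a[joint] + alpha * pose_b[joint]
                for joint in pose_a}

    # Invented joint angles for two key poses placed on the circumplex.
    happy = {"HeadPitch": -0.2, "LShoulderPitch": 1.0, "RShoulderPitch": 1.0}
    sad = {"HeadPitch": 0.4, "LShoulderPitch": 1.6, "RShoulderPitch": 1.6}

    # A new expression one third of the way from happiness towards sadness.
    intermediate = blend_poses(happy, sad, 1.0 / 3.0)

In the Circumplex Model, each key pose would sit at an angle on the valence-arousal circle, and blend weights would fall off with angular distance from the target point.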


KSII Transactions on Internet and Information Systems | 2012

Emotional body language displayed by artificial agents

Aryel Beck; Brett Stevens; Kim A. Bard; Lola Cañamero

Complex and natural social interaction between artificial agents (computer-generated or robotic) and humans necessitates the display of rich emotions in order to be believable, socially relevant, and accepted, and to generate the natural emotional responses that humans show in the context of social interaction, such as engagement or empathy. Whereas some robots use faces to display (simplified) emotional expressions, for other robots such as Nao, body language is the best medium available, given their inability to convey facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should significantly improve naturalness. This research investigates the creation of an affect space for the generation of emotional body language to be displayed by humanoid robots. To do so, three experiments investigating how emotional body language displayed by agents is interpreted were conducted. The first experiment compared the interpretation of emotional body language displayed by humans and agents. The results showed that emotional body language displayed by an agent or a human is interpreted in a similar way in terms of recognition. Following these results, emotional key poses were extracted from an actor's performances and implemented in a Nao robot. The interpretation of these key poses was validated in a second study, where it was found that participants were better than chance at interpreting the key poses displayed. Finally, an affect space was generated by blending key poses and validated in a third study. Overall, these experiments confirmed that body language is an appropriate medium for robots to display emotions and suggest that an affect space for body expressions can be used to improve the expressiveness of humanoid robots.


Adaptive Behavior | 2013

Hedonic value: enhancing adaptation for motivated agents

Ignasi Cos; Lola Cañamero; Gillian M. Hayes; Andrew Gillies

Reinforcement learning (RL) in the context of artificial agents is typically used to produce behavioral responses as a function of the reward obtained by interaction with the environment. When the problem consists of learning the shortest path to a goal, it is common to use reward functions yielding a fixed value after each decision, for example a positive value if the target location has been attained and a negative value at each intermediate step. However, this fixed strategy may be overly simplistic for agents that must adapt to dynamic environments, in which resources may vary from time to time. By contrast, there is significant evidence that most living beings internally modulate reward value as a function of their context to expand their range of adaptivity. Inspired by the potential of this operation, we present a review of its underlying processes and introduce a simplified formalization for artificial agents. The performance of this formalism is tested by monitoring the adaptation of an agent endowed with a motivated actor–critic model, embedding our formalization of value and constrained by physiological stability, to environments with different resource distributions. Our main result shows that the manner in which reward is internally processed as a function of the agent's motivational state strongly influences the adaptivity of the behavioral cycles generated and the agent's physiological stability.
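
A toy version of the contrast drawn here, a fixed reward versus one modulated by motivational state, can be written in a few lines. This is our simplified reading of the idea, not the paper's actual formalization; all constants are invented.

    # Fixed-reward baseline: reaching a resource always pays the same.
    def fixed_reward(reached_goal):
        return 1.0 if reached_goal else -0.01

    # Internally modulated ("hedonic") reward: the same consumed amount is
    # worth more when the corresponding physiological deficit is large.
    def hedonic_reward(deficit, resource_gain):
        return deficit * resource_gain

    print(hedonic_reward(0.1, 0.5))  # near satiation: 0.05
    print(hedonic_reward(0.9, 0.5))  # strongly depleted: 0.45

Under the fixed scheme the agent values a resource identically in all contexts; under the modulated one, behavioral priorities shift with internal state, which is what lets the agent re-order its activity cycles as resources vary.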


International Journal of Social Robotics | 2013

Interpretation of Emotional Body Language Displayed by a Humanoid Robot: A Case Study with Children

Aryel Beck; Lola Cañamero; Antoine Hiolle; Luisa Damiano; Piero Cosi; Fabio Tesser; Giacomo Sommavilla

The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.


Adaptive Behavior | 2010

Learning Affordances of Consummatory Behaviors: Motivation-Driven Adaptive Perception

Ignasi Cos; Lola Cañamero; Gillian M. Hayes

This article introduces a formalization of the dynamics between sensorimotor interaction and homeostasis, integrated in a single architecture to learn object affordances of consummatory behaviors. We also describe the principles necessary to learn grounded knowledge in the context of an agent and its surrounding environment, which we use to investigate the constraints imposed by the agent's internal dynamics and the environment. This is tested with an embodied, situated robot in a simulated environment, yielding results that support this formalization. Furthermore, we show that this methodology allows learned affordances to be dynamically redefined, depending on object similarity, resource availability, and the rhythms of the agent's internal physiology. For example, if a resource becomes increasingly scarce, the value assigned by the agent to its related effect increases accordingly, encouraging a more active behavioral strategy to maintain physiological stability. Experimental results also suggest that combining motivation-driven learning and affordance learning in a single architecture should simplify its overall complexity while increasing its adaptivity.
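
The scarcity effect described above can be sketched as a simple value update. The rule below is an invented illustration, not the article's model: the value the agent assigns to a resource's effect grows as recent encounters with that resource become rare.

    def affordance_value(base_effect, recent_encounters, window=100):
        """Scale an effect's value up as its resource becomes scarce."""
        scarcity = 1.0 - min(recent_encounters, window) / window  # 0 abundant, 1 scarce
        return base_effect * (1.0 + scarcity)

    print(affordance_value(0.5, recent_encounters=90))  # abundant: 0.55
    print(affordance_value(0.5, recent_encounters=5))   # scarce: 0.975

Coupling such a value to action selection yields the more active strategy the article reports: scarcer resources are pursued more vigorously to keep the internal physiology stable.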


Human-Robot Interaction | 2016

Towards long-term social child-robot interaction: using multi-activity switching to engage young users

Alexandre Coninx; Paul Baxter; Elettra Oleari; Sara Bellini; Bert P.B. Bierman; Olivier A. Blanson Henkemans; Lola Cañamero; Piero Cosi; Valentin Enescu; Raquel Ros Espinoza; Antoine Hiolle; Rémi Humbert; Bernd Kiefer; Ivana Kruijff-Korbayová; Rosemarijn Looije; Marco Mosconi; Mark A. Neerincx; Giulio Paci; Georgios Patsis; Clara Pozzi; Francesca Sacchitelli; Hichem Sahli; Alberto Sanna; Giacomo Sommavilla; Fabio Tesser; Yiannis Demiris; Tony Belpaeme

Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.
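
The core idea, switching among activities within a single interaction, can be caricatured in a few lines. This hypothetical sketch is far simpler than the deployed system: the activity names, the engagement signal, and the threshold are all invented.

    ACTIVITIES = ["quiz", "imitation_game", "sorting_game"]

    def next_activity(current, engagement, threshold=0.4):
        """Keep the current activity while engagement holds up, else rotate."""
        if engagement >= threshold:
            return current
        idx = ACTIVITIES.index(current)
        return ACTIVITIES[(idx + 1) % len(ACTIVITIES)]

    print(next_activity("quiz", engagement=0.7))  # stays on "quiz"
    print(next_activity("quiz", engagement=0.2))  # switches to "imitation_game"

The point of the design is variety under adaptation: rather than exhausting one game, the robot redistributes time across activities as the child's interest shifts.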


KSII Transactions on Internet and Information Systems | 2012

Eliciting caregiving behavior in dyadic human-robot attachment-like interactions

Antoine Hiolle; Lola Cañamero; Marina Davila-Ross; Kim A. Bard

We present here the design and applications of an arousal-based model controlling the behavior of a Sony AIBO robot during the exploration of a novel environment: a children's play mat. When the robot experiences too many new perceptions, the increase in arousal triggers calls for attention towards its human caregiver. The caregiver can choose either to calm the robot down by providing it with comfort, or to leave the robot to cope with the situation on its own. When the arousal of the robot has decreased, the robot moves on to further explore the play mat. We gathered results from two experiments using this arousal-driven control architecture. In the first setting, we show that such a robotic architecture allows the human caregiver to greatly influence the learning outcomes of the exploration episode, with some similarities to a primary caregiver during early childhood. In a second experiment, we tested how human adults behaved in a similar setup with two different robots: one "needy", often demanding attention, and one more independent, requesting far less care or assistance. Our results show that human adults recognised each robot profile for what it was designed to be and behaved as would be expected, caring more for the needy robot than for the other. Additionally, the subjects exhibited a preference for the robot we designed as needy, and showed more positive affect whilst interacting with and rating it. This experiment leads us to conclude that our architecture and setup succeeded in eliciting positive and caregiving behavior from adults of different age groups and technological backgrounds. Finally, the consistency and reactivity of the robot during this dyadic interaction appeared crucial for the enjoyment and engagement of the human partner.
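
The arousal dynamics described here lend themselves to a compact sketch. The update rule and all constants below are invented for illustration; they are not the parameters of the AIBO architecture.

    def update_arousal(arousal, novelty, comfort, rise=0.5, decay=0.1):
        """Novelty raises arousal; comfort and time lower it; clamp to [0, 1]."""
        arousal += rise * novelty - decay - comfort
        return min(max(arousal, 0.0), 1.0)

    arousal = 0.0
    # (novelty, comfort) pairs for three successive perception steps.
    for novelty, comfort in [(0.8, 0.0), (0.9, 0.0), (0.2, 0.5)]:
        arousal = update_arousal(arousal, novelty, comfort)
        if arousal > 0.6:
            print("calling the caregiver for attention")
        else:
            print("resuming exploration")

A "needy" profile corresponds to a low call threshold or slow decay, an "independent" one to the opposite, which is the design difference between the two robots compared in the second experiment.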


International Conference on Social Robotics | 2011

Children interpretation of emotional body language displayed by a robot

Aryel Beck; Lola Cañamero; Luisa Damiano; Giacomo Sommavilla; Fabio Tesser; Piero Cosi

Previous results show that adults are able to interpret different key poses displayed by the robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy), valence (positive or negative) and stance (approaching or avoiding), whereas moving the head up produces an increase along these dimensions [1]. Hence, changing the head position should send intuitive signals which could be used during an interaction. The ALIZ-E target group are children between the ages of 8 and 11. Existing results suggest that they would be able to interpret human emotional body language [2, 3].

Based on these results, an experiment was conducted to test whether the results of [1] can be applied to children. If so, body postures and head position could be used to convey emotions during an interaction.


Frontiers in Neurorobotics | 2014

Arousal regulation and affective adaptation to human responsiveness by a robot that explores and learns a novel environment

Antoine Hiolle; Matthew Lewis; Lola Cañamero

In the context of our work in developmental robotics regarding robot–human caregiver interactions, in this paper we investigate how a “baby” robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a “caregiver” to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that assessed independently (a) the differences between two “idealized” robot profiles—a “needy” and an “independent” robot—in terms of their use of a caregiver as a means to regulate the “stress” (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot behaviors of two caregiving profiles varying in their responsiveness—“responsive” and “non-responsive”—to the regulatory requests of the robot. Going beyond previous work, in this paper we (a) assess the effects that the varying regulatory behavior of the two robot profiles has on the exploratory and learning patterns of the robots; (b) bring together the two strands previously investigated in isolation and take a step further by endowing the robot with the capability to adapt its regulatory behavior along the “needy” and “independent” axis as a function of the varying responsiveness of the caregiver; and (c) analyze the effects that the varying regulatory behavior has on the exploratory and learning patterns of the adaptive robot.
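
The adaptation along the "needy"/"independent" axis can be sketched as a single parameter tracking caregiver responsiveness. This is a hypothetical illustration of the mechanism's shape, not the paper's model; the step size and bounds are invented.

    def adapt_call_threshold(threshold, caregiver_responded,
                             step=0.05, lo=0.3, hi=0.9):
        """A responsive caregiver lets the robot stay needy (lower threshold);
        ignored requests push it towards independence (higher threshold)."""
        threshold += -step if caregiver_responded else step
        return min(max(threshold, lo), hi)

    threshold = 0.6
    for responded in [True, True, False, False, False]:
        threshold = adapt_call_threshold(threshold, responded)
    print(round(threshold, 2))  # 0.65: drifting towards independence

Because the threshold feeds back into how often the robot interrupts its exploration, this mirrors the coupling the paper analyzes: the regulatory style reshapes the exploratory and learning patterns.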

Collaboration


Dive into Lola Cañamero's collaborations.

Top Co-Authors

Matthew Lewis, University of Hertfordshire
Antoine Hiolle, University of Hertfordshire
Aryel Beck, University of Hertfordshire
Kim A. Bard, University of Portsmouth
Fabio Tesser, National Research Council
Piero Cosi, National Research Council
Ignasi Cos, University of Edinburgh
John Lones, University of Hertfordshire