Antoine Hiolle
University of Hertfordshire
Publications
Featured research published by Antoine Hiolle.
International Journal of Social Robotics | 2013
Aryel Beck; Lola Cañamero; Antoine Hiolle; Luisa Damiano; Piero Cosi; Fabio Tesser; Giacomo Sommavilla
The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.
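The reported head-position effect lends itself to a simple computational reading: head pitch acts as an additive modifier on the arousal and valence of a key pose. The following is a minimal sketch under that assumption; the function name, gain, and normalised pitch range are illustrative, not the authors' implementation.

```python
# Hedged sketch: treat head pitch as an additive modifier on the arousal and
# valence of a key pose, as the study's findings suggest. The gain and the
# [-1, 1] normalisation are assumptions made for illustration.

def adjust_affect(base_arousal: float, base_valence: float,
                  head_pitch: float, gain: float = 0.5):
    """Shift an affect estimate according to head pitch.

    head_pitch > 0 means the head is raised, < 0 lowered; values are
    assumed normalised to [-1, 1]. Raising the head increases both
    arousal and valence; lowering it decreases them.
    """
    clamp = lambda x: max(-1.0, min(1.0, x))
    return (clamp(base_arousal + gain * head_pitch),
            clamp(base_valence + gain * head_pitch))

# Example: a mildly negative key pose with the head moved down
print(adjust_affect(-0.2, -0.4, head_pitch=-0.8))  # -> (-0.6, -0.8)
```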
Human-Robot Interaction | 2016
Alexandre Coninx; Paul Baxter; Elettra Oleari; Sara Bellini; Bert P.B. Bierman; Olivier A. Blanson Henkemans; Lola Cañamero; Piero Cosi; Valentin Enescu; Raquel Ros Espinoza; Antoine Hiolle; Rémi Humbert; Bernd Kiefer; Ivana Kruijff-Korbayová; Rosemarijn Looije; Marco Mosconi; Mark A. Neerincx; Giulio Paci; Georgios Patsis; Clara Pozzi; Francesca Sacchitelli; Hichem Sahli; Alberto Sanna; Giacomo Sommavilla; Fabio Tesser; Yiannis Demiris; Tony Belpaeme
Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.
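One way to picture the activity-switching idea is a controller that keeps the current activity while the child stays engaged and switches otherwise. The sketch below is a toy illustration under that assumption; the activity names, engagement signal, and threshold are invented for the example, not taken from the paper's system.

```python
# Toy sketch of multi-activity switching: stay with an activity while
# engagement is high, switch to a different one when it drops. Activity
# names, the engagement measure, and the threshold are assumptions.
import random

ACTIVITIES = ["quiz", "sorting_game", "free_chat"]

def next_activity(current: str, engagement: float,
                  threshold: float = 0.4) -> str:
    """Keep the current activity while engagement is above threshold,
    otherwise pick a different activity to keep the interaction varied."""
    if engagement >= threshold:
        return current
    return random.choice([a for a in ACTIVITIES if a != current])

print(next_activity("quiz", engagement=0.8))  # stays on "quiz"
print(next_activity("quiz", engagement=0.2))  # switches to another activity
```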
Archive | 2011
Ivana Kruijff-Korbayová; Georgios Athanasopoulos; Aryel Beck; Piero Cosi; Heriberto Cuayáhuitl; Tomas Dekens; Valentin Enescu; Antoine Hiolle; Bernd Kiefer; Hichem Sahli; Marc Schröder; Giacomo Sommavilla; Fabio Tesser; Werner Verhelst
Conversational systems play an important role in scenarios without a keyboard, e.g., talking to a robot. Communication in human-robot interaction (HRI) ultimately involves a combination of verbal and non-verbal inputs and outputs. HRI systems must process verbal and non-verbal observations and execute verbal and non-verbal actions in parallel, to interpret and produce synchronized behaviours. The development of such systems involves the integration of potentially many components and ensuring a complex interaction and synchronization between them. Most work in spoken dialogue system development uses pipeline architectures. Some exceptions are [1, 17], which execute system components in parallel (weakly-coupled or tightly-coupled architectures). The latter are more promising for building adaptive systems, which is one of the goals of contemporary research systems.
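The contrast between pipeline and parallel architectures can be made concrete with a toy example: independent input channels posting observations to a shared queue that a fusion step consumes, instead of each stage waiting on the previous one. This is only an illustrative sketch; the component names are not the system's actual modules.

```python
# Hedged sketch of parallel (non-pipeline) processing: verbal and non-verbal
# channels run concurrently and post observations to a shared queue.
# Component names and events are illustrative assumptions.
import queue
import threading
import time

observations = queue.Queue()

def channel(name, events):
    for e in events:
        time.sleep(0.01)              # simulate asynchronous arrival
        observations.put((name, e))

threads = [
    threading.Thread(target=channel, args=("speech", ["hello", "yes"])),
    threading.Thread(target=channel, args=("gesture", ["wave", "nod"])),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not observations.empty():       # a fusion component would consume here
    print(observations.get())
```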
KSII Transactions on Internet and Information Systems | 2012
Antoine Hiolle; Lola Cañamero; Marina Davila-Ross; Kim A. Bard
We present here the design and applications of an arousal-based model controlling the behavior of a Sony AIBO robot during the exploration of a novel environment: a children's play mat. When the robot experiences too many new perceptions, the increase in arousal triggers calls for attention towards its human caregiver. The caregiver can choose either to calm the robot down by providing it with comfort, or to leave the robot to cope with the situation on its own. When the arousal of the robot has decreased, the robot moves on to further explore the play mat. We gathered results from two experiments using this arousal-driven control architecture. In the first setting, we show that such a robotic architecture allows the human caregiver to greatly influence the learning outcomes of the exploration episode, with some similarities to a primary caregiver during early childhood. In a second experiment, we tested how human adults behaved in a similar setup with two different robots: one “needy”, often demanding attention, and one more independent, requesting far less care or assistance. Our results show that human adults recognise each robot profile for what it was designed to be and behave accordingly, caring more for the needy robot than for the independent one. Additionally, the subjects exhibited a preference for, and more positive affect towards, the robot designed as needy, both while interacting with it and when rating it. This experiment leads us to the conclusion that our architecture and setup succeeded in eliciting positive and caregiving behavior from adults of different age groups and technological backgrounds. Finally, the consistency and reactivity of the robot during this dyadic interaction appeared crucial for the enjoyment and engagement of the human partner.
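The control loop described above can be summarised in a few lines: novelty raises arousal, comfort and a slow decay lower it, and crossing a threshold switches the robot from exploring to calling for attention. The sketch below is a minimal illustration under those assumptions; all constants and signal names are invented, not the published architecture. Lowering the call threshold would yield the “needy” profile, raising it the more independent one.

```python
# Hedged sketch of the arousal-driven loop: novelty raises arousal, comfort
# and decay lower it; above a threshold the robot calls for attention.
# Constants and signals are illustrative assumptions.

def step(arousal: float, novelty: float, comfort: float,
         call_threshold: float = 0.8, decay: float = 0.05):
    arousal = min(1.0, max(0.0, arousal + novelty - comfort - decay))
    action = "call_for_attention" if arousal > call_threshold else "explore"
    return arousal, action

arousal = 0.0
for novelty, comfort in [(0.4, 0.0), (0.5, 0.0), (0.3, 0.0), (0.0, 0.6)]:
    arousal, action = step(arousal, novelty, comfort)
    print(f"arousal={arousal:.2f} -> {action}")
```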
Frontiers in Neurorobotics | 2014
Antoine Hiolle; Matthew Lewis; Lola Cañamero
In the context of our work in developmental robotics regarding robot–human caregiver interactions, in this paper we investigate how a “baby” robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a “caregiver” to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that assessed independently (a) the differences between two “idealized” robot profiles—a “needy” and an “independent” robot—in terms of their use of a caregiver as a means to regulate the “stress” (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot behaviors of two caregiving profiles varying in their responsiveness—“responsive” and “non-responsive”—to the regulatory requests of the robot. Going beyond previous work, in this paper we (a) assess the effects that the varying regulatory behavior of the two robot profiles has on the exploratory and learning patterns of the robots; (b) bring together the two strands previously investigated in isolation and take a step further by endowing the robot with the capability to adapt its regulatory behavior along the “needy” and “independent” axis as a function of the varying responsiveness of the caregiver; and (c) analyze the effects that the varying regulatory behavior has on the exploratory and learning patterns of the adaptive robot.
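The adaptation along the needy/independent axis can be pictured as a single help-seeking threshold nudged towards a set point derived from the caregiver's observed response rate. The following sketch makes that assumption explicit; the learning rate and the mapping from responsiveness to set point are illustrative, not the paper's model.

```python
# Hedged sketch: adapt a help-seeking threshold to caregiver responsiveness.
# A responsive caregiver (high response_rate) lowers the threshold (needier);
# an unresponsive one raises it (more independent). The learning rate and
# mapping are assumptions for illustration.

def adapt_threshold(threshold: float, response_rate: float,
                    lr: float = 0.1) -> float:
    target = 1.0 - response_rate      # responsiveness -> desired set point
    return threshold + lr * (target - threshold)

t = 0.5
for rate in [0.9] * 5:                # consistently responsive caregiver
    t = adapt_threshold(t, rate)
print(f"threshold after adaptation: {t:.2f}")  # drifts towards 0.10
```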
International Conference on Social Robotics | 2010
Antoine Hiolle; Lola Cañamero; Pierre Andry; Arnaud J. Blanchard; Philippe Gaussier
In this paper, we present the results of a pilot study of a human-robot interaction experiment in which the rhythm of the interaction is used as a reinforcement signal to learn sensorimotor associations. The algorithm uses breaks and variations in the rhythm at which the human produces actions. The concept is based on the hypothesis that a constant rhythm is an intrinsic property of a positive interaction, whereas a break reflects a negative event. Subjects from various backgrounds interacted with a NAO robot, teaching it to mirror their actions by learning the correct sensorimotor associations. The results show that for the rhythm to be a useful reinforcement signal, the subjects have to be convinced that the robot is an agent with which they can act naturally, using their voice and facial expressions as cues to help it understand the correct behaviour to learn. When the subjects do behave naturally, the rhythm and its variations truly reflect how well the interaction is going and help the robot learn efficiently. These results mean that non-expert users can interact naturally and fruitfully with an autonomous robot if the interaction is believed to be natural, without any technical knowledge of the robot's cognitive capacities.
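The rhythm-as-reinforcement hypothesis suggests a simple operationalisation: compare the latest inter-action interval to the running average and treat a markedly longer interval as a break. The sketch below is a toy version under that assumption; the break factor and windowing are not the paper's parameters.

```python
# Hedged sketch: steady inter-action intervals read as positive reinforcement,
# a break (interval much longer than the history's mean) as negative.
# The break factor is an illustrative assumption.
from statistics import mean

def rhythm_reward(intervals: list, break_factor: float = 1.5) -> float:
    """Return +1 while the human keeps a constant rhythm, -1 on a break."""
    if len(intervals) < 2:
        return 0.0                     # not enough history yet
    baseline = mean(intervals[:-1])
    return -1.0 if intervals[-1] > break_factor * baseline else 1.0

print(rhythm_reward([1.0, 1.1, 0.9, 1.0]))  # steady rhythm -> 1.0
print(rhythm_reward([1.0, 1.1, 0.9, 3.0]))  # break -> -1.0
```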
Affective Computing and Intelligent Interaction | 2007
Antoine Hiolle; Lola Cañamero; Arnaud J. Blanchard
To build autonomous robots able to live and interact with humans in a real-world, dynamic, and uncertain environment, designing architectures that allow robots to develop attachment bonds to humans and use them to build their own model of the world is a promising avenue: it can improve human-robot interaction and adaptation to the environment, and also serve as a basis for developing further cognitive and emotional capabilities. In this paper we present a neural architecture that enables a robot to develop an attachment bond with a person or an object, and to discover the correct sensorimotor associations to maintain a desired affective state of well-being, using a minimal amount of prior knowledge about the possible interactions with this object.
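The homeostatic idea, learning which sensorimotor associations restore well-being, can be illustrated with a small value-learning loop in which actions whose execution improves well-being are reinforced. Everything below (action names, the well-being signal, the update rule) is an assumption made for illustration, not the neural architecture of the paper.

```python
# Hedged sketch: reinforce actions in proportion to how much they improve
# well-being, so the action that restores comfort comes to dominate.
# Action names and reward values are illustrative assumptions.
import random

values = {"approach": 0.0, "withdraw": 0.0, "orient": 0.0}

def choose(eps: float = 0.2) -> str:
    if random.random() < eps:          # occasional exploration
        return random.choice(list(values))
    return max(values, key=values.get)

def update(action: str, delta_wellbeing: float, lr: float = 0.3) -> None:
    values[action] += lr * (delta_wellbeing - values[action])

# Suppose approaching the attachment object tends to restore well-being
for _ in range(20):
    a = choose()
    update(a, delta_wellbeing=0.8 if a == "approach" else -0.2)

print(max(values, key=values.get))     # -> "approach"
```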
Robot and Human Interactive Communication | 2014
Sandra Costa; Filomena Soares; Ana Paula da Silva Pereira; Cristina P. Santos; Antoine Hiolle
This paper presents an exploratory study in which children with autism interact with ZECA (Zeno Engaging Children with Autism), a humanoid robot whose face is covered with a material that allows it to display varied facial expressions. The study investigates a novel scenario for robot-assisted play intended to help promote the labelling of emotions by children with autism spectrum disorders (ASD). The study was performed during three sessions with two boys diagnosed with ASD. The results obtained from the analysis of the children's behaviours while interacting with ZECA helped us improve several aspects of our game scenario, such as the technical specifics of the game, its dynamics, and the experimental setup. The software produced for this study allows the robot to autonomously identify the child's answers during the session. This automatic identification improved the fluidity of the game and freed the experimenter to participate in triadic interactions with the child. The main goal of this pilot study was to evaluate the game scenario to be used in a future study, rather than to quantify and evaluate the children's performance. Overall, this exploratory study of teaching children to label emotions using a humanoid robot embedded in a game scenario demonstrated the possible positive outcomes such child-robot interaction can produce, and highlighted issues regarding data collection and analysis that will inform future studies.
Robot and Human Interactive Communication | 2009
John Murray; Lola Cañamero; Antoine Hiolle
In this paper we present a robotic head designed for interaction with humans, endowed with mechanisms that make it respond to social interaction with emotional expressions, so that the robot's emotional expression is directly influenced by the social interaction process. We look into how emotionally expressive visual feedback from the robot can enrich the interaction process and provide the participant with additional information about the interaction, allowing the user to better understand the intentions of the robot. We discuss some of the interactions that are possible with ERWIN and how these can affect the response of the system. We show experimental scenarios in which the interaction process influences the emotional expressions, and how the participants interpret this. We draw our conclusions from experimental feedback, which shows that emotional expression can indeed influence the social interaction between a robot and a human.
Archive | 2009
Lori Malatesta; John Murray; Amaryllis Raouzaiou; Antoine Hiolle; Lola Cañamero; Kostas Karpouzis
As research has revealed the deep role that emotion and emotion expression play in human social interaction, researchers in human-computer interaction have proposed that more effective human-computer interfaces can be realized if the interface models the user's emotion as well as expresses emotions. Affective computing was defined by Rosalind Picard (1997) as computing that relates to, arises from, or deliberately influences emotion or other affective phenomena. According to Picard's pioneering book, if we want computers to be genuinely intelligent and to interact naturally with us, we must give computers the ability to recognize, understand, and even to have and express emotions. These positions have become the foundations of research in the area and have been investigated in great depth since they were first postulated. Emotion is fundamental to human experience, influencing cognition, perception, and everyday tasks such as learning, communication, and even rational decision making. Affective computing aspires to bridge a gap that typical human-computer interaction has largely ignored, one that creates an often frustrating experience for people, in part because affect has been overlooked or is hard to measure. To take these ideas a step further, towards the objectives of practical applications, we need to adapt methods of modelling affect to the requirements of particular showcases. To do so, it is fundamental to review prevalent psychology theories on emotion, to disambiguate their terminology, and to identify fitting computational models that can allow for affective interactions in the desired environments.