Publication


Featured research published by Emmanuel Senft.


Human-Robot Interaction | 2016

From Characterising Three Years of HRI to Methodology and Reporting Recommendations

Paul Baxter; James Kennedy; Emmanuel Senft; Séverin Lemaignan; Tony Belpaeme

Human-Robot Interaction (HRI) research requires the integration and cooperation of multiple disciplines, technical and social, in order to make progress. Each of these disciplines, often driven by different motivations, brings with it different assumptions and methodologies. We assess recent trends in the field of HRI by examining publications in the HRI conference over the past three years (over 100 full papers), and characterise them according to 14 categories, focusing primarily on aspects of methodology. From this, we propose a series of practical recommendations based on rigorous guidelines from other research fields that have not yet become common practice in HRI. Furthermore, we explore the primary implications of the observed recent trends for the field more generally, in terms of both methodology and research directions. We propose that the interdisciplinary nature of HRI must be maintained, but that a common methodological approach provides a much-needed frame of reference to facilitate rigorous future progress.


Human-Robot Interaction | 2017

Child Speech Recognition in Human-Robot Interaction: Evaluations and Recommendations

James Kennedy; Séverin Lemaignan; Caroline Montassier; Pauline Lavalade; Bahar Irfan; Fotios Papadopoulos; Emmanuel Senft; Tony Belpaeme

An increasing number of human-robot interaction (HRI) studies are now taking place in applied settings with children. These interactions often hinge on verbal interaction to effectively achieve their goals. Great advances have been made in adult speech recognition and it is often assumed that these advances will carry over to the HRI domain and to interactions with children. In this paper, we evaluate a number of automatic speech recognition (ASR) engines under a variety of conditions, inspired by real-world social HRI conditions. Using the data collected we demonstrate that there is still much work to be done in ASR for child speech, with interactions relying solely on this modality still out of reach. However, we also make recommendations for child-robot interaction design in order to maximise the capability that does currently exist.
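
The evaluation described above rests on comparing each engine's transcripts against reference transcriptions, typically via word error rate (WER). The paper does not publish its scoring code; the snippet below is a minimal, self-contained sketch of such a comparison, and the engine names and transcripts are placeholder values rather than data from the study.

```python
# Minimal word-error-rate (WER) sketch for comparing ASR engines against
# reference transcripts. Engine names and transcripts are placeholders,
# not data from the paper.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical transcripts of one child utterance from three ASR engines.
reference = "the robot helped me with the prime numbers"
hypotheses = {
    "engine_a": "the robot helped me with the prime numbers",
    "engine_b": "the robot help me with prime number",
    "engine_c": "a rabbit helped we with the crime numbers",
}

for engine, hyp in hypotheses.items():
    print(f"{engine}: WER = {wer(reference, hyp):.2f}")
```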


Paladyn: Journal of Behavioral Robotics | 2017

How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder

Pablo Gómez Esteban; Paul Baxter; Tony Belpaeme; Erik Billing; Haibin Cai; Hoang-Long Cao; Mark Coeckelbergh; Cristina Costescu; Daniel David; Albert De Beir; Yinfeng Fang; Zhaojie Ju; James Kennedy; Honghai Liu; Alexandre Mazel; Amit Kumar Pandey; Kathleen Richardson; Emmanuel Senft; Serge Thill; Greet Van de Perre; Bram Vanderborght; David Vernon; Hui Yu; Tom Ziemke

Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.


International Conference on Social Robotics | 2015

Higher Nonverbal Immediacy Leads to Greater Learning Gains in Child-Robot Tutoring Interactions

James Kennedy; Paul Baxter; Emmanuel Senft; Tony Belpaeme

Nonverbal immediacy has been positively correlated with cognitive learning gains in human-human interaction, but remains relatively under-explored in human-robot interaction contexts. This paper presents a study in which robot behaviour is derived from the principles of nonverbal immediacy. Both high and low immediacy behaviours are evaluated in a tutoring interaction with children where a robot teaches how to work out whether numbers are prime. It is found that children who interact with the robot exhibiting more immediate nonverbal behaviour make significant learning gains, whereas those interacting with the less immediate robot do not. A strong trend is found suggesting that the children can perceive the differences between conditions, supporting results from existing work with adults.


International Conference on Social Robotics | 2015

SPARC: Supervised Progressively Autonomous Robot Competencies

Emmanuel Senft; Paul Baxter; James Kennedy; Tony Belpaeme

The Wizard-of-Oz robot control methodology is widely used and typically places a high burden of effort and attention on the human supervisor to ensure appropriate robot behaviour, which may distract from other aspects of the task at hand. We propose that this load can be reduced by enabling the robot to learn online from the guidance of the supervisor so that it becomes progressively more autonomous: Supervised Progressively Autonomous Robot Competencies (SPARC). Applying this concept to the domain of Robot Assisted Therapy (RAT) for children with Autism Spectrum Disorder, we employ a novel methodology to assess the effect of a learning robot on the workload of the human supervisor. A user study shows that controlling a learning robot enables supervisors to achieve similar task performance as with a non-learning robot, but with both fewer interventions and a reduced perception of workload. These results demonstrate the utility of the SPARC concept and its potential to reduce the load on human WoZ supervisors.
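
The paper itself does not include implementation code; the sketch below only illustrates the general SPARC idea described above, namely that the robot proposes actions, the supervisor can accept or override them, and the robot reinforces whatever was executed so that fewer corrections are needed over time. The states, actions, and supervisor policy are invented for illustration.

```python
import random
from collections import defaultdict

# Illustrative SPARC-style loop (not the authors' implementation): the robot
# proposes an action for the current state, the supervisor either accepts or
# overrides it, and the executed action is reinforced so that later proposals
# need fewer corrections.

STATES = ["child_engaged", "child_distracted", "child_idle"]
ACTIONS = ["encourage", "prompt_task", "wait"]

# counts[state][action]: how often this action has been executed in this state
counts = defaultdict(lambda: defaultdict(int))

def propose(state: str) -> str:
    """Propose the most frequently approved action for this state."""
    history = counts[state]
    if not history:
        return random.choice(ACTIONS)  # no experience yet: explore
    return max(history, key=history.get)

def supervisor_decision(state: str, proposal: str) -> str:
    """Placeholder for the human supervisor: accept or override."""
    preferred = {"child_engaged": "prompt_task",
                 "child_distracted": "encourage",
                 "child_idle": "wait"}[state]
    return proposal if proposal == preferred else preferred

interventions = 0
for step in range(200):
    state = random.choice(STATES)
    proposal = propose(state)
    executed = supervisor_decision(state, proposal)
    if executed != proposal:
        interventions += 1           # supervisor had to correct the robot
    counts[state][executed] += 1     # learn from whatever was executed

print("supervisor interventions over 200 steps:", interventions)
```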


Human-Robot Interaction | 2016

Heart vs Hard Drive: Children Learn More From a Human Tutor Than a Social Robot

James Kennedy; Paul Baxter; Emmanuel Senft; Tony Belpaeme

The field of Human-Robot Interaction (HRI) is increasingly exploring the use of social robots for educating children. Commonly, non-academic audiences will ask how robots compare to humans in terms of learning outcomes. This question is also interesting for social roboticists as humans are often assumed to be an upper benchmark for social behaviour, which influences learning. This paper presents a study in which learning gains of children are compared when taught the same mathematics material by a robot tutor and a non-expert human tutor. Significant learning occurs in both conditions, but the children improve more with the human tutor. This difference is not statistically significant, but the effect sizes fall in line with findings from other literature showing that humans outperform technology for tutoring. We discuss these findings in the context of applying social robots in child education.


Pattern Recognition Letters | 2017

Supervised autonomy for online learning in human-robot interaction

Emmanuel Senft; Paul Baxter; James Kennedy; Séverin Lemaignan; Tony Belpaeme

When a robot is learning, it needs to explore its environment and observe how the environment responds to its actions. When the environment is large and the robot can take many possible actions, this exploration phase can take prohibitively long. However, exploration can often be optimised by letting a human expert guide the robot during its learning. Interactive machine learning, in which a human user interactively guides the robot as it learns, has been shown to be an effective way to teach a robot, but it requires an intuitive control mechanism to allow the human expert to provide feedback on the robot's progress. This paper presents a novel method which combines Reinforcement Learning and Supervised Progressively Autonomous Robot Competencies (SPARC). By allowing the user to fully control the robot and by treating rewards as implicit, SPARC aims to learn an action policy while maintaining human supervisory oversight of the robot's behaviour. This method is evaluated and compared to Interactive Reinforcement Learning in a robot teaching task. Qualitative and quantitative results indicate that SPARC allows for safer and faster learning by the robot, whilst not placing a high workload on the human teacher.
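
As the abstract notes, SPARC treats rewards as implicit: letting a proposed action go ahead acts as approval, while overriding it acts as a correction. The snippet below is a hedged sketch of that idea using a simple tabular value update; it is not the authors' algorithm, and the states, actions, and learning rate are assumptions made for illustration.

```python
import random
from collections import defaultdict

# Sketch of learning from implicit rewards, in the spirit of the abstract:
# passive acceptance by the supervisor counts as a positive signal, an
# override counts as a negative signal for the proposed action and a
# positive one for the corrective action. Not the authors' implementation.

ALPHA = 0.1                      # learning rate (assumed)
ACTIONS = ["greet", "give_hint", "stay_quiet"]
q = defaultdict(float)           # q[(state, action)] value estimate

def propose(state: str) -> str:
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state: str, action: str, reward: float) -> None:
    q[(state, action)] += ALPHA * (reward - q[(state, action)])

def run_step(state: str, supervisor_override) -> None:
    proposal = propose(state)
    override = supervisor_override(state, proposal)
    if override is None:
        update(state, proposal, +1.0)   # implicit approval
    else:
        update(state, proposal, -1.0)   # implicit rejection
        update(state, override, +1.0)   # learn the corrected action

# Toy supervisor: wants "give_hint" when the learner is "stuck", else silence.
def toy_supervisor(state, proposal):
    wanted = "give_hint" if state == "stuck" else "stay_quiet"
    return None if proposal == wanted else wanted

for _ in range(100):
    run_step(random.choice(["stuck", "progressing"]), toy_supervisor)

print(max(ACTIONS, key=lambda a: q[("stuck", a)]))  # expected: give_hint
```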


Human-Robot Interaction | 2017

Leveraging Human Inputs in Interactive Machine Learning for Human Robot Interaction

Emmanuel Senft; Séverin Lemaignan; Paul Baxter; Tony Belpaeme

A key challenge of HRI is allowing robots to be adaptable, especially as robots are expected to penetrate society at large and to interact in unexpected environments with non-technical users. One way of providing this adaptability is to use Interactive Machine Learning, i.e. including a human supervisor in the learning process who can steer the action selection and the learning in the desired direction. We ran a study exploring how people use numeric rewards to evaluate a robot's behaviour and guide its learning. From the results we derive a number of challenges when designing learning robots: what kind of input should the human provide? How should the robot communicate its state or its intention? And how can the teaching process be made easier for human supervisors?


Human-Robot Interaction | 2016

Providing a Robot with Learning Abilities Improves its Perception by Users

Emmanuel Senft; Paul Baxter; James Kennedy; Séverin Lemaignan; Tony Belpaeme

Subjective appreciation and performance evaluation of a robot by users are two important dimensions for Human-Robot Interaction, especially as increasing numbers of people become involved with robots. As roboticists we have to carefully design robots to make the interaction as smooth and enjoyable as possible for the users, while maintaining good performance in the task assigned to the robot. In this paper, we examine the impact of providing a robot with learning capabilities on how users report the quality of the interaction in relation to objective performance. We show that humans tend to prefer interacting with a learning robot and will rate its capabilities higher even if its actual performance in the task was lower. We suggest that adding learning to a robot could reduce the apparent load felt by a user for a new task and improve the user's evaluation of the system, thus facilitating the integration of such robots into existing workflows.


Human-Robot Interaction | 2015

When is it Better to Give Up?: Towards Autonomous Action Selection for Robot Assisted ASD Therapy

Emmanuel Senft; Paul Baxter; James Kennedy; Tony Belpaeme

Robot Assisted Therapy (RAT) for children with ASD has found promising applications. In this paper, we outline an autonomous action selection mechanism to extend current RAT approaches. This will include the ability to revert control of the therapeutic intervention to the supervising therapist. We suggest that in order to maintain the goals of therapy, sometimes it is better if the robot gives up.
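
The abstract outlines, but does not specify, how the robot decides when to hand control back to the therapist. As a purely illustrative sketch of that "give up" behaviour, the snippet below selects an action only when its confidence estimate clears a threshold and otherwise defers; the threshold, action names, and scores are assumptions, not the paper's design.

```python
# Illustrative confidence-gated action selection with deferral to the
# therapist, sketching the "give up" idea in the abstract. The threshold,
# actions, and scores are assumptions, not the paper's design.

from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.7   # assumed cut-off for acting autonomously

@dataclass
class Decision:
    action: Optional[str]    # None means: hand control to the therapist
    confidence: float

def select_action(action_scores: dict) -> Decision:
    """Pick the best-scoring action, or defer if the robot is unsure."""
    best_action = max(action_scores, key=action_scores.get)
    confidence = action_scores[best_action]
    if confidence < CONFIDENCE_THRESHOLD:
        return Decision(action=None, confidence=confidence)  # give up
    return Decision(action=best_action, confidence=confidence)

# Hypothetical scores produced by some upstream model for the current state.
print(select_action({"encourage": 0.82, "wait": 0.10, "redirect": 0.08}))
print(select_action({"encourage": 0.40, "wait": 0.35, "redirect": 0.25}))
```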

Collaboration


Dive into Emmanuel Senft's collaborations.

Top Co-Authors

Tony Belpaeme | University of Plymouth
Paul Baxter | University of Plymouth
James Kennedy | University of Plymouth
Bahar Irfan | University of Plymouth
Daniel David | Icahn School of Medicine at Mount Sinai
Albert De Beir | Vrije Universiteit Brussel
Bram Vanderborght | Vrije Universiteit Brussel