Johannes Twiefel
University of Hamburg
Publications
Featured research published by Johannes Twiefel.
international symposium on neural networks | 2015
Francisco Cruz; Johannes Twiefel; Sven Magg; Cornelius Weber; Stefan Wermter
Recently, robots have been used more frequently as assistants in domestic scenarios. In this context, we train an apprentice robot to perform a cleaning task using interactive reinforcement learning, since this approach has been shown to learn efficiently from human expertise in domestic tasks. The robotic agent obtains interactive feedback via a speech recognition system, which we test with five different microphones, differing in polar pattern and distance to the teacher, on sentences from different instruction classes. Moreover, the reinforcement learning approach uses situated affordances, which allow the robot to complete the cleaning task in every episode by anticipating whether chosen actions can be performed. Situated affordances and interaction improve the convergence speed of reinforcement learning, and the results also show that the system is robust against wrong instructions resulting from errors of the speech recognition system.
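To make the interactive learning loop concrete, here is a minimal sketch of Q-learning in which a parent-like trainer occasionally gives (possibly misrecognised) advice and situated affordances mask infeasible actions. The environment, affordance rule, rates and all names are illustrative assumptions, not the paper's implementation.

# Hedged sketch, not the authors' code: interactive Q-learning with
# affordance masking and noisy trainer advice. Everything here is a toy.
import random
import numpy as np

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ADVICE_PROB = 0.3    # how often the parent-like trainer gives advice
ADVICE_ERROR = 0.05  # the speech recogniser may misinterpret the advice

Q = np.zeros((N_STATES, N_ACTIONS))

def afforded_actions(state):
    """Situated affordances: only actions executable in this state (toy rule)."""
    return [a for a in range(N_ACTIONS) if (state + a) % 2 == 0]

def select_action(state, advice=None):
    actions = afforded_actions(state)
    if advice is not None and advice in actions:
        return advice                                   # follow the advice
    if random.random() < EPSILON:
        return random.choice(actions)                   # explore
    return max(actions, key=lambda a: Q[state, a])      # exploit

def step(state, action):
    """Toy environment: returns (next_state, reward)."""
    next_state = (state + action + 1) % N_STATES
    return next_state, (1.0 if next_state == 0 else 0.0)

for episode in range(500):
    state = random.randrange(N_STATES)
    for _ in range(20):
        advice = None
        if random.random() < ADVICE_PROB:
            advice = random.choice(range(N_ACTIONS))    # trainer's suggestion
            if random.random() < ADVICE_ERROR:          # ASR error: corrupt it
                advice = random.choice(range(N_ACTIONS))
        action = select_action(state, advice)
        next_state, reward = step(state, action)
        Q[state, action] += ALPHA * (
            reward + GAMMA * Q[next_state].max() - Q[state, action])
        state = next_state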
robot and human interactive communication | 2016
Johannes Twiefel; Xavier Hinaut; Marcelo Borghetti; Erik Strahl; Stefan Wermter
In this paper we present a multi-modal human-robot interaction architecture which combines information coming from different sensory inputs and generates feedback that implicitly teaches the user how to interact with the robot. The system combines vision, speech and language with inference and feedback. The system environment consists of a Nao robot which has to learn objects situated on a table solely by understanding absolute and relative object locations uttered by the user, and which afterwards points at a desired object to show what it has learned. The results of a user study and performance test show the usefulness of the feedback produced by the system and also justify its use in real-world applications, as its classification accuracy on multi-modal input is around 80.8%. In the experiments, the system detected inconsistent input coming from different sensory modules in all cases and could generate useful feedback for the user from this information.
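As a hedged illustration of the inconsistency detection described above (not the authors' code), the following sketch fuses per-label confidences from a vision module and a speech module and falls back to a clarification request when the modalities disagree; the labels, scores and threshold are hypothetical.

# Illustrative cross-modal consistency check; data and threshold are assumed.
def fuse(vision_hyp, speech_hyp, threshold=0.5):
    """Each hypothesis maps object labels to confidences in [0, 1]."""
    fused = {label: vision_hyp.get(label, 0.0) * speech_hyp.get(label, 0.0)
             for label in set(vision_hyp) | set(speech_hyp)}
    best_label, best_score = max(fused.items(), key=lambda kv: kv[1])
    if best_score < threshold:
        # Modalities disagree: generate feedback instead of guessing.
        return None, "I am not sure which object you mean; could you repeat?"
    return best_label, f"Okay, I will point at the {best_label}."

label, feedback = fuse({"cup": 0.9, "ball": 0.1}, {"cup": 0.8, "book": 0.2})
print(label, "->", feedback)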
Archive | 2014
Stefan Heinrich; Pascal Folleher; Peer Springstübe; Erik Strahl; Johannes Twiefel; Cornelius Weber; Stefan Wermter
The development of humanoid robots, both for helping humans and for understanding the human cognitive system, is of significant interest in science and technology. How to bridge the large gap between the needs of natural human-robot interaction and the capabilities of recent humanoid platforms is an important but open question. In this paper we describe a system that teaches a robot through a dialogue in natural language about its real environment in real time. For this, we integrate a fast object recognition method for the NAO humanoid robot with a hybrid ensemble learning mechanism. With a qualitative analysis we show the effectiveness of our system.
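The abstract does not detail the hybrid ensemble, so the following is only an assumed sketch of weighted majority voting over heterogeneous recognisers, given to illustrate the general mechanism; the member classifiers and weights are invented for the example.

# Assumed sketch of a weighted-voting ensemble; members are hypothetical.
from collections import Counter

def ensemble_predict(classifiers, weights, features):
    """Weighted majority vote over the member predictions."""
    votes = Counter()
    for clf, weight in zip(classifiers, weights):
        votes[clf(features)] += weight
    return votes.most_common(1)[0][0]

# Hypothetical members: a colour-based and a shape-based recogniser.
members = [lambda f: "cup" if f["red"] > 0.5 else "ball",
           lambda f: "cup" if f["round"] < 0.3 else "ball"]
print(ensemble_predict(members, [0.6, 0.4], {"red": 0.7, "round": 0.8}))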
intelligent robots and systems | 2016
Francisco Cruz; German Ignacio Parisi; Johannes Twiefel; Stefan Wermter
Robots in domestic environments are receiving more attention, especially in scenarios where they should interact with parent-like trainers to dynamically acquire and refine knowledge. A prominent paradigm for dynamically learning new tasks has been reinforcement learning. However, due to the excessive time needed for the learning process, a promising extension incorporates an external parent-like trainer into the learning cycle in order to scaffold and speed up the apprenticeship, using advice about which actions should be performed to achieve a goal. In interactive reinforcement learning, different uni-modal control interfaces have been proposed that are often quite limited and do not take multiple sensor modalities into account. In this paper, we propose the integration of audiovisual patterns to provide advice to the agent using multi-modal information. In our approach, advice can be given using either speech, gestures, or a combination of both. We introduce a neural network-based approach to integrate multi-modal information from uni-modal modules based on their confidence. Results show that multi-modal integration leads to better performance of interactive reinforcement learning, with the robot learning faster and receiving greater rewards compared to uni-modal scenarios.
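A minimal sketch of confidence-based integration, assuming each uni-modal module outputs a class distribution plus a scalar confidence; this simple convex combination is an illustration, not the paper's neural integration network.

# Hedged sketch: fuse uni-modal advice distributions by module confidence.
import numpy as np

def integrate(speech_probs, speech_conf, gesture_probs, gesture_conf):
    """Convex combination of uni-modal distributions, weighted by confidence."""
    total = speech_conf + gesture_conf
    fused = (speech_conf * speech_probs + gesture_conf * gesture_probs) / total
    return int(np.argmax(fused)), fused

# Speech strongly suggests action 2; the gesture module is less certain.
action, fused = integrate(np.array([0.1, 0.1, 0.8]), 0.9,
                          np.array([0.4, 0.4, 0.2]), 0.3)
print(action, fused)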
international conference on artificial neural networks | 2017
Marian Tietz; Tayfun Alpay; Johannes Twiefel; Stefan Wermter
Ladder networks are a notable new concept in semi-supervised learning, showing state-of-the-art results in image recognition tasks while being compatible with many existing neural architectures. We present the recurrent ladder network, a novel modification of the ladder network for semi-supervised learning of recurrent neural networks, which we evaluate on a phoneme recognition task on the TIMIT corpus. Our results show that the model consistently outperforms the baseline and achieves fully-supervised baseline performance with only 75% of all labels, demonstrating that the model can use unsupervised data as an effective regulariser.
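For orientation, here is a sketch of a ladder-style semi-supervised objective in PyTorch: a supervised cost on labelled batches plus weighted per-layer denoising reconstruction costs on all batches. The function signature and weighting scheme are assumptions, not the paper's exact formulation.

# Assumed sketch of a ladder-style loss, not the paper's exact objective.
import torch
import torch.nn.functional as F

def ladder_loss(logits, labels, clean_activations, reconstructions,
                layer_weights):
    """labels may be None for unlabelled batches (semi-supervised setting)."""
    loss = logits.new_zeros(())
    if labels is not None:
        loss = loss + F.cross_entropy(logits, labels)       # supervised path
    for clean, recon, weight in zip(clean_activations, reconstructions,
                                    layer_weights):
        # Penalise the decoder for failing to denoise each encoder layer.
        loss = loss + weight * F.mse_loss(recon, clean.detach())
    return loss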
international conference on artificial neural networks | 2018
Leyuan Qu; Cornelius Weber; Egor Lakomkin; Johannes Twiefel; Stefan Wermter
End-to-end neural networks have shown promising results on large vocabulary continuous speech recognition (LVCSR) systems. However, it is challenging to integrate domain knowledge into such systems. Specifically, articulatory features (AFs), which are inspired by the human speech production mechanism, can help in speech recognition. This paper presents two approaches to incorporate domain knowledge into end-to-end training: (a) fine-tuning networks, which reuse the hidden layer representations of AF extractors as input for ASR tasks; (b) progressive networks, which incorporate articulatory knowledge through lateral connections from AF extractors. We evaluate the proposed approaches on the Wall Street Journal speech corpus and test on the eval92 standard evaluation dataset. Results show that both fine-tuning and progressive networks can integrate articulatory information into end-to-end learning and outperform previous systems.
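A hedged PyTorch sketch of approach (b): a new ASR column receives a lateral connection from a frozen, pre-trained articulatory-feature column. The layer types, sizes and class name are illustrative assumptions.

# Illustrative progressive-network column with a lateral AF connection.
import torch
import torch.nn as nn

class ProgressiveASR(nn.Module):
    def __init__(self, af_column, feat_dim=40, hidden=128, n_tokens=32):
        super().__init__()
        self.af_column = af_column                 # pre-trained AF extractor
        for p in self.af_column.parameters():
            p.requires_grad = False                # keep AF knowledge frozen
        self.layer1 = nn.Linear(feat_dim, hidden)
        self.lateral = nn.Linear(hidden, hidden)   # lateral from the AF column
        self.out = nn.Linear(hidden, n_tokens)

    def forward(self, x):
        af_hidden = self.af_column(x)              # articulatory representation
        h = torch.relu(self.layer1(x) + self.lateral(af_hidden))
        return self.out(h)

# Hypothetical usage with a stand-in AF column of matching width:
af_column = nn.Sequential(nn.Linear(40, 128), nn.ReLU())
model = ProgressiveASR(af_column)
print(model(torch.randn(8, 40)).shape)             # torch.Size([8, 32])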
human-agent interaction | 2017
Nikhil Churamani; Paul Anton; Marc Brügger; Erik Fließwasser; Thomas Hummel; Julius Mayer; Waleed Mustafa; Hwei Geok Ng; Thi Linh Chi Nguyen; Quan Nguyen; Marcus Soll; Sebastian Springenberg; Sascha S. Griffiths; Stefan Heinrich; Nicolás Navarro-Guerrero; Erik Strahl; Johannes Twiefel; Cornelius Weber; Stefan Wermter
Advancements in Human-Robot Interaction involve robots becoming more responsive and adaptive to the human user they are interacting with. For example, robots model a personalised dialogue with humans, adapting the conversation to accommodate the user's preferences in order to allow natural interactions. This study investigates the impact of such personalised interaction capabilities of a human companion robot on its social acceptance, perceived intelligence and likeability in a human-robot interaction scenario. To measure this impact, the study uses an object learning scenario in which the user teaches different objects to the robot using natural language. An interaction module is built on top of the learning scenario which engages the user in a personalised conversation before teaching the robot to recognise different objects. The two systems, i.e. with and without the interaction module, are compared with respect to how different users rate the robot's intelligence and sociability. Although the system equipped with personalised interaction capabilities is rated lower on social acceptance, it is perceived as more intelligent and likeable by the users.
joint ieee international conference on development and learning and epigenetic robotics | 2016
Xavier Hinaut; Johannes Twiefel; Stefan Wermter
We present a Recurrent Neural Network (RNN), namely an Echo State Network (ESN), that performs sentence comprehension and can be used for Human-Robot Interaction (HRI). The RNN is trained to map sentence structures to meanings (i.e. predicates). We have previously shown that this ESN is able to generalize to unknown sentence structures. Moreover, it is able to learn English, French or both at the same time. There are two novelties presented here: (1) the encapsulation of this RNN in a ROS module enables its use in a robotic architecture such as the Nao humanoid robot, and (2) the flexibility of the predicates it can learn to produce (e.g. extracting adjectives) enables the model to be used to explore language acquisition in a developmental approach.
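A minimal numpy sketch of the reservoir computation behind an ESN (leaky-integrator units plus a ridge-regression readout); the sizes, spectral radius and leak rate below are illustrative, not the paper's settings.

# Illustrative ESN reservoir and readout; all hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, N_OUT = 50, 300, 20   # word input, reservoir, predicate output

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # scale spectral radius to 0.9

def run_reservoir(inputs, leak=0.3):
    """inputs: (T, N_IN) one-hot word sequence -> (T, N_RES) states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge regression from reservoir states to predicate targets."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(N_RES),
                           states.T @ targets)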
international conference on artificial neural networks | 2014
Jorge Dávila-Chacón; Johannes Twiefel; Jindong Liu; Stefan Wermter
In this paper we propose an embodied approach to automatic speech recognition, where a humanoid robot adjusts its orientation to the angle that increases the signal-to-noise ratio of speech. In other words, the robot turns its face to 'hear' the speaker better, similar to what people with auditory deficiencies do. The robot tracks a speaker with a binaural sound source localisation (SSL) system that uses spiking neural networks to model the areas of the mammalian auditory pathway relevant for SSL. The accuracy of speech recognition is doubled when the robot orients towards the speaker at an optimal angle and listens only through one ear instead of averaging the input from both ears.
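As a toy illustration of the orientation behaviour (purely an assumed geometric model, not the paper's spiking SSL system), the robot could select the head angle whose estimated SNR is highest, e.g. one that turns an ear towards the speaker:

# Toy orientation selection; the SNR model and angles are assumptions.
import numpy as np

def best_orientation(speaker_angle, candidate_angles):
    """Return the head angle with the highest estimated SNR (toy model)."""
    def snr(head_angle):
        # Assume SNR peaks when one ear points at the speaker (+/- 90 deg).
        offset = np.deg2rad(speaker_angle - head_angle)
        return abs(np.sin(offset))          # 1.0 when the speaker is lateral
    return max(candidate_angles, key=snr)

print(best_orientation(speaker_angle=30, candidate_angles=range(-90, 91, 15)))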
national conference on artificial intelligence | 2014
Johannes Twiefel; Timo Baumann; Stefan Heinrich; Stefan Wermter