Verena V. Hafner
Humboldt University of Berlin
Publications
Featured research published by Verena V. Hafner.
IEEE Transactions on Evolutionary Computation | 2007
Pierre-Yves Oudeyer; Frédéric Kaplan; Verena V. Hafner
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases, and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and to data from developmental psychology.
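The learning-progress drive described in the abstract can be illustrated with a toy sketch (not the authors' implementation): each sensorimotor "region" keeps a history of prediction errors, and the robot repeatedly practices the region whose error is currently shrinking fastest. The `Region` class and its `learnability` parameter are purely illustrative.

```python
import random

class Region:
    """One sensorimotor context with its own prediction-error history.
    'learnability' (how fast errors shrink) is an illustrative parameter."""
    def __init__(self, learnability):
        self.learnability = learnability
        self.error = 1.0
        self.history = []

    def practice(self):
        # Practicing a learnable context shrinks its prediction error;
        # the recorded error carries a little sensor noise.
        self.error = max(0.0, self.error - self.learnability)
        self.history.append(self.error + random.uniform(0.0, 0.02))

    def learning_progress(self, window=5):
        h = self.history
        if len(h) < 2 * window:
            return float("inf")  # explore still-unseen contexts first
        older = sum(h[-2 * window:-window]) / window
        recent = sum(h[-window:]) / window
        return older - recent  # positive while the error is shrinking

def iac_step(regions):
    # The intrinsic reward: pick the context with the highest learning progress.
    return max(regions, key=lambda r: r.learning_progress())

random.seed(0)
# An unlearnable, an easy, and a hard-but-learnable context.
regions = [Region(0.0), Region(0.05), Region(0.005)]
for _ in range(200):
    iac_step(regions).practice()
```

After the run, the hard-but-learnable context has received most of the practice while the unlearnable one is largely avoided, mirroring the easy-then-hard progression the abstract describes.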
Intelligent Robots and Systems | 2002
Max Lungarella; Verena V. Hafner; Rolf Pfeifer; Hiroshi Yokoi
In this paper, we present a first series of experiments with prototype artificial whiskers that have been developed in our laboratory. These experiments have been inspired by neuroscience research on real rats. In spite of the enormous potential of whiskers, they have to date not been systematically investigated and exploited by roboticists. Although the transduction mechanism is simple and straightforward, and the whiskers are currently used in a passive way only, the dynamics of the sensory signals resulting from the interaction with various textured surfaces is complex and has a rich information content. The experiments provide the foundation for future work including active sensing, whisker arrays, and cross-modal integration.
Lecture Notes in Computer Science | 2005
Verena V. Hafner; Frédéric Kaplan
In order to bootstrap shared communication systems, robots must have a non-verbal way to influence one another's attention. This chapter presents an experiment in which a robot learns to interpret pointing gestures of another robot. We show that simple feature-based neural learning techniques permit reliable discrimination between left and right pointing gestures. This is a first step towards more complex attention coordination behaviour. We discuss the results of this experiment in relation to possible developmental scenarios of how children learn to interpret pointing gestures.
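A minimal illustration of feature-based left/right discrimination, using a classic perceptron on a single made-up feature (the horizontal offset of the pointing hand); the feature, labels, and learner are synthetic stand-ins, not the chapter's actual visual features or network.

```python
import random

def train_perceptron(data, epochs=20, lr=0.1):
    """Classic mistake-driven perceptron on one scalar feature,
    e.g. the hand's horizontal offset relative to the body
    (a hypothetical feature). Labels: +1 = right, -1 = left."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w * x + b > 0 else -1
            if pred != y:          # update only on mistakes
                w += lr * y * x
                b += lr * y
    return lambda x: 1 if w * x + b > 0 else -1

# Synthetic training set: normalized hand offsets and pointing directions.
rng = random.Random(0)
data = ([(rng.uniform(0.1, 1.0), 1) for _ in range(50)] +
        [(rng.uniform(-1.0, -0.1), -1) for _ in range(50)])
classify = train_perceptron(data)
```

On this separable toy data the perceptron converges quickly; the real difficulty in the chapter lies in extracting robust features from camera images, which this sketch sidesteps.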
Adaptive Behavior | 2005
Verena V. Hafner
More is known of the navigation skills of mice and rats than of any other vertebrate. The discovery of place cells (cells whose firing rate correlates with the spatial position of the animal) in the rat's hippocampus has inspired various attempts to model these cells. This work presents one such model which has been optimized on simulated autonomous agents and implemented on a mobile robot which learns to navigate within its environment through exploration using vision as its main sensory modality. The artificial mouse robot aMouse, a mobile robot with active whiskers and omnidirectional vision, is presented as an ideal robotic platform to study rodent navigation. The visual field of the robot is similar to the large visual field of rats and mice, and its whisker system uses real rat whiskers for texture recognition. The paper suggests how tactile information from the active whisker array on the robot can be used as an additional sensory modality for the place cell model described earlier.
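An idealized place-cell tuning curve, assuming Gaussian firing fields; this is a common textbook simplification for illustration, not the optimized model from the paper.

```python
import math

def place_cell_rate(pos, center, sigma=0.3, max_rate=20.0):
    """Idealized place cell: firing rate (Hz) peaks when the animal/robot
    sits at the cell's preferred location and falls off as a Gaussian
    of distance. Parameters are illustrative."""
    d2 = (pos[0] - center[0]) ** 2 + (pos[1] - center[1]) ** 2
    return max_rate * math.exp(-d2 / (2 * sigma ** 2))

def population_estimate(pos, centers):
    """Decode position as the rate-weighted average of cell centers."""
    rates = [place_cell_rate(pos, c) for c in centers]
    total = sum(rates)
    x = sum(r * c[0] for r, c in zip(rates, centers)) / total
    y = sum(r * c[1] for r, c in zip(rates, centers)) / total
    return x, y

# A 5 x 5 grid of place fields covering a 2 m x 2 m arena.
centers = [(i * 0.5, j * 0.5) for i in range(5) for j in range(5)]
```

With such a population, the robot's position can be read back out of the firing rates; the paper's contribution is learning such fields from vision and augmenting them with whisker input.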
Human-Robot Interaction | 2013
Saša Bodiroža; Guillaume Doisy; Verena V. Hafner
To achieve improved human-robot interaction, it is necessary to allow the human participant to interact with the robot in a natural way. In this work, a gesture recognition algorithm based on dynamic time warping was implemented for a use-case scenario of natural interaction with a mobile robot. Inputs are gesture trajectories obtained using a Microsoft Kinect sensor. Trajectories are stored in the person's frame of reference. Furthermore, the recognition is position-invariant: only one learned sample is needed to recognize the same gesture performed at another position in the gestural space. In experiments, a set of gestures for a robot waiter was used to train the gesture recognition algorithm. The experimental results show that the proposed modifications of the standard gesture recognition algorithm improve the robustness of the recognition.
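The core of such an approach can be sketched as follows, assuming 2-D trajectories: the standard DTW recursion plus a translation to the trajectory's own origin gives the position invariance mentioned above. The gesture names and template points are made up, and the paper's actual modifications may differ.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between
    two trajectories given as lists of (x, y) points."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean cost of matching point i of a with point j of b.
            cost = ((a[i-1][0] - b[j-1][0]) ** 2
                    + (a[i-1][1] - b[j-1][1]) ** 2) ** 0.5
            D[i][j] = cost + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    return D[n][m]

def normalize(traj):
    """Translate a trajectory to start at the origin -- a simple way to
    get the position invariance described in the abstract."""
    x0, y0 = traj[0]
    return [(x - x0, y - y0) for (x, y) in traj]

def recognize(sample, templates):
    """Return the label of the nearest stored gesture template."""
    s = normalize(sample)
    return min(templates,
               key=lambda lbl: dtw_distance(s, normalize(templates[lbl])))

# Hypothetical one-sample-per-gesture templates for a robot waiter.
templates = {"wave": [(0, 0), (1, 1), (2, 0), (3, 1)],
             "come": [(0, 0), (0, 1), (0, 2)]}
shifted_wave = [(5, 5), (6, 6), (7, 5), (8, 6)]  # same wave, elsewhere
```

Because both sample and template are re-expressed relative to their own starting points, `shifted_wave` matches the `wave` template even though it was performed at a different position.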
International Journal of Social Robotics | 2013
Guido Schillaci; Saša Bodiroža; Verena V. Hafner
The ability to share attention with another individual is essential for intuitive interaction. Two relatively simple but important prerequisites for this, saliency detection and attention manipulation by the robot, are identified in the first part of the paper. By creating a saliency-based attentional model combined with a robot ego-sphere and by adopting attention manipulation skills, the robot can engage in an interaction with a human and start an interaction game including objects as a first step towards joint attention.

We set up an interaction experiment in which participants could physically interact with a humanoid robot equipped with mechanisms for saliency detection and attention manipulation. We tested our implementation in four combinations of activated parts of the attention system, which resulted in four different behaviours. Our aim was to identify those physical and behavioural characteristics that need to be emphasised when implementing attentive mechanisms in robots, and to measure the user experience when interacting with a robot equipped with attentive mechanisms. We adopted two techniques for evaluating saliency detection and attention manipulation mechanisms in human-robot interaction: user experience, as measured by qualitative and quantitative questions in questionnaires, and proxemics, estimated from recorded videos of the interactions.

The robot's level of interactiveness was found to be positively correlated with user experience factors such as excitement and robot factors such as lifelikeness and intelligence, suggesting that robots must give as much feedback as possible in order to increase the intuitiveness of the interaction, even when performing only attentive behaviours. This was confirmed by the proxemics analysis: participants reacted more frenetically when the interaction was perceived as less satisfying. Improving the robot's feedback capability could increase user satisfaction and decrease the probability of unexpected or incomprehensible user movements. Finally, multi-modal interaction (through arm and head movements) increased the level of interactiveness perceived by participants. A positive correlation was found between the elegance of robot movements and user satisfaction.
Advanced Robotics | 2006
Frédéric Kaplan; Verena V. Hafner
This article presents a mathematical framework based on information theory to compare multivariate sensory streams. Central to this approach is the notion of configuration: a set of distances between information sources, statistically evaluated for a given time span. As information distances capture simultaneously effects of physical closeness, intermodality, functional relationship and external couplings, a configuration can be interpreted as a signature for specific patterns of activity. This provides ways for comparing activity sequences by viewing them as points in an activity space. Results of experiments with an autonomous robot illustrate how this framework can be used to perform unsupervised activity classification.
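One way to make the notion of configuration concrete is to use Crutchfield's information distance H(X|Y) + H(Y|X) = 2H(X,Y) − H(X) − H(Y), estimated from empirical histograms of discretized sensor streams; whether this matches the paper's exact metric and estimator is an assumption of this sketch.

```python
from collections import Counter
from itertools import combinations
from math import log2

def entropy(counts, total):
    """Shannon entropy (bits) from a histogram of symbol counts."""
    return -sum(c / total * log2(c / total) for c in counts if c)

def information_distance(x, y):
    """Crutchfield's information distance H(X|Y) + H(Y|X), computed as
    2*H(X,Y) - H(X) - H(Y) from empirical histograms of two
    discretized sensor streams of equal length."""
    n = len(x)
    hx = entropy(Counter(x).values(), n)
    hy = entropy(Counter(y).values(), n)
    hxy = entropy(Counter(zip(x, y)).values(), n)
    return 2 * hxy - hx - hy

def configuration(streams):
    """The 'configuration': all pairwise distances between information
    sources over a time window, usable as a point in activity space."""
    return {(i, j): information_distance(streams[i], streams[j])
            for i, j in combinations(range(len(streams)), 2)}

# Toy discretized readings: two perfectly coupled streams and one
# statistically independent one.
s0 = [0, 1, 0, 1, 0, 1, 0, 1]
s1 = [0, 1, 0, 1, 0, 1, 0, 1]   # copy of s0: distance ~ 0
s2 = [0, 0, 1, 1, 0, 0, 1, 1]   # independent of s0: maximal distance
cfg = configuration([s0, s1, s2])
```

Coupled sources sit close together in the configuration while unrelated ones sit far apart, which is what lets different activities leave different signatures.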
Human-Robot Interaction | 2011
Guido Schillaci; Verena V. Hafner
Motor babbling has been identified as a self-exploration behaviour in infants and is fundamental for the development of more complex behaviours, self-awareness and social interaction skills. Here, we adopt this paradigm for the learning strategies of a humanoid robot that maps its random arm movements to its head movements, guided by the perception of its own body. Finally, we analyse three random movement strategies and experimentally test on a humanoid robot how they affect the learning speed.
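A toy version of such a babbling loop, assuming a planar two-link arm whose forward kinematics stand in for the robot's visual self-perception; the nearest-neighbour "inverse model" is an illustrative simplification, not the mapping learned in the paper.

```python
import math
import random

def forward_kinematics(q1, q2, l1=1.0, l2=1.0):
    """Planar 2-link arm: joint angles -> hand position. This plays the
    role of the robot observing its own body (a stand-in for vision)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def babble(n, rng):
    """Motor babbling: issue random joint commands and store the
    resulting (command, observation) pairs as a sensorimotor map."""
    samples = []
    for _ in range(n):
        q = (rng.uniform(-math.pi, math.pi), rng.uniform(0.0, math.pi))
        samples.append((q, forward_kinematics(*q)))
    return samples

def reach(target, samples):
    """Crude inverse model: reuse the babbled command whose observed
    hand position was closest to the target."""
    return min(samples,
               key=lambda s: (s[1][0] - target[0]) ** 2
                           + (s[1][1] - target[1]) ** 2)[0]

rng = random.Random(1)
sensorimotor_map = babble(2000, rng)
q = reach((1.0, 1.0), sensorimotor_map)  # joint command aimed at (1, 1)
```

With a few thousand babbled samples, the recalled command already lands the hand near the target; the paper's question is how the choice of random movement strategy changes how fast such a map becomes useful.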
Psychophysiology | 2012
Romy Frömer; Verena V. Hafner; Werner Sommer
We explored the feasibility of investigating complex goal-directed actions with event-related brain potentials by studying the aiming phase of throwing. A virtual reality environment was set up, allowing aimed throws at distant targets, with participants standing upright and moving relatively unrestrained. After a separate practice session, the contingent negative variation (CNV) was measured during preparation for a simple button release, unaimed throws, and aimed throws at targets of two levels of difficulty. Consistent with expectations, CNV amplitude was larger for all throwing conditions compared to button release. It further increased with task difficulty in the aimed throwing conditions, reflecting the increasing motor programming demands for more difficult goal-directed actions. Therefore, investigating throwing as an instance of complex goal-directed action with ERPs is feasible, opening interesting perspectives for future research.
HBU'12 Proceedings of the Third International Conference on Human Behavior Understanding | 2012
Guido Schillaci; Bruno Lara; Verena V. Hafner
In this paper, we present internal simulations as a methodology for human behaviour recognition and understanding. The internal simulations consist of pairs of inverse and forward models representing sensorimotor actions. The main advantage of this method is that it serves for action selection and prediction as well as for recognition. We present several human-robot interaction experiments in which the robot recognizes the behaviour of a human reaching for objects.
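The recognition-by-internal-simulation idea can be sketched as follows: run each behaviour's forward model alongside the observation and pick the one whose predictions fit best. The one-dimensional hand-coded models here are hypothetical stand-ins for the learned sensorimotor models in the paper.

```python
def make_forward_model(velocity):
    """Toy forward model: simulates a 1-D hand position advancing at a
    behaviour-specific speed (a stand-in for a learned model)."""
    def predict(pos):
        return pos + velocity
    return predict

def recognize_behaviour(observed, models):
    """Run each internal simulation alongside the observed trajectory and
    return the behaviour whose forward model accumulates the least
    prediction error."""
    errors = {}
    for name, predict in models.items():
        err, sim = 0.0, observed[0]
        for actual in observed[1:]:
            sim = predict(sim)          # simulate one step ahead
            err += abs(sim - actual)    # compare with what was observed
        errors[name] = err
    return min(errors, key=errors.get)

# Two hypothetical reaching behaviours and an observed hand trajectory.
models = {"reach_fast": make_forward_model(0.5),
          "reach_slow": make_forward_model(0.1)}
trajectory = [0.0, 0.52, 0.98, 1.49]
```

The same machinery that predicts the consequences of the robot's own actions thus doubles as a recognizer when driven by someone else's movement, which is the point the abstract makes.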