Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Katharina J. Rohlfing is active.

Publication


Featured research published by Katharina J. Rohlfing.


IEEE Transactions on Autonomous Mental Development | 2010

Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

Angelo Cangelosi; Giorgio Metta; Gerhard Sagerer; Stefano Nolfi; Chrystopher L. Nehaniv; Kerstin Fischer; Jun Tani; Tony Belpaeme; Giulio Sandini; Francesco Nori; Luciano Fadiga; Britta Wrede; Katharina J. Rohlfing; Elio Tuci; Kerstin Dautenhahn; Joe Saunders; Arne Zeschel

This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically for the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.


Advanced Robotics | 2006

How can multimodal cues from child-directed interaction reduce learning complexity in robots?

Katharina J. Rohlfing; Jannik Fritsch; Britta Wrede; Tanja Jungmann

Robots have to deal with an enormous amount of sensory stimuli. One solution for making sense of them is to enable a robot system to actively search for cues that help structure the information. Studies with infants reveal that parents support the learning process by modifying their interaction style depending on their child's developmental age. In our study, in which parents demonstrated everyday actions to their preverbal children (8–11 months old), our aim was to identify objective parameters of multimodal action modification. Our results reveal two action parameters that are modified in adult–child interaction: roundness and pace. Furthermore, we found that language has the power to help children structure action sequences through synchrony and emphasis. These insights are discussed with respect to the built-in attention architecture of a socially interactive robot, which enables it to understand demonstrated actions. Our algorithmic approach to automatically detecting the task structure in child-directed input demonstrates the potential impact of insights from developmental learning on robotics. The presented findings pave the way toward automatically detecting when to imitate in a demonstration task.
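
The two parameters named above, roundness and pace, lend themselves to a simple operationalization. The following Python sketch is a hypothetical illustration, not the paper's implementation: pace is taken as mean speed along the demonstrated trajectory, and roundness as the ratio of path length to the straight-line distance between its endpoints (1.0 for a straight demonstration, larger for curved ones).

import numpy as np

def pace_and_roundness(points, dt):
    """Estimate pace and roundness of a 2-D motion trajectory.

    points: (N, 2) array of hand positions sampled every dt seconds.
    Both measures are illustrative stand-ins for the paper's parameters.
    """
    points = np.asarray(points, dtype=float)
    steps = np.diff(points, axis=0)                 # displacement per sample
    path_length = np.linalg.norm(steps, axis=1).sum()
    pace = path_length / (dt * len(steps))          # mean speed
    chord = np.linalg.norm(points[-1] - points[0])  # straight-line distance
    roundness = path_length / max(chord, 1e-9)      # 1.0 = straight
    return pace, roundness

# Example: a slow, exaggerated semicircular demonstration
t = np.linspace(0.0, np.pi, 50)
print(pace_and_roundness(np.stack([np.cos(t), np.sin(t)], axis=1), dt=0.1))

Child-directed and adult-directed demonstrations of the same action could then be compared along these two axes.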


International Journal of Social Robotics | 2012

Generation and Evaluation of Communicative Robot Gesture

Maha Salem; Stefan Kopp; Ipke Wachsmuth; Katharina J. Rohlfing; Frank Joublin

How is communicative gesture behavior in robots perceived by humans? Although gesture is crucial in social interaction, this research question is still largely unexplored in the field of social robotics. Thus, the main objective of the present work is to investigate how gestural machine behaviors can be used to design more natural communication in social robots. The chosen approach is twofold. Firstly, the technical challenges encountered when implementing a speech-gesture generation model on a robotic platform are tackled. We present a framework that enables the humanoid robot to flexibly produce synthetic speech and co-verbal hand and arm gestures at run-time, while not being limited to a predefined repertoire of motor actions. Secondly, the achieved flexibility in robot gesture is exploited in controlled experiments. To gain a deeper understanding of how communicative robot gesture might impact and shape human perception and evaluation of human-robot interaction, we conducted a between-subjects experimental study using the humanoid robot in a joint task scenario. We manipulated the non-verbal behaviors of the robot in three experimental conditions, so that it would refer to objects by utilizing either (1) unimodal (i.e., speech only) utterances, (2) congruent multimodal (i.e., semantically matching speech and gesture) or (3) incongruent multimodal (i.e., semantically non-matching speech and gesture) utterances. Our findings reveal that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech, even if they do not semantically match the spoken utterance.


Language Learning and Development | 2008

Socio-Pragmatics and Attention: Contributions to Gesturally Guided Word Learning in Toddlers

Amy E. Booth; Karla K. McGregor; Katharina J. Rohlfing

It is clear that gestural cues facilitate early word learning. In hopes of illuminating the relative contributions of attentional and socio-pragmatic factors to the mechanisms by which these cues exert their influence, we taught toddlers novel words with the support of a hierarchy of gestural cues. Twenty-eight- to 31-month-olds heard one of two possible referents labeled with a novel word while the experimenter gazed at the target, or additionally pointed to, touched, or manipulated it. Learning improved with greater redundancy among cues, with the largest improvement evident when pointing was added to gazing. Looking times revealed that attentional factors accounted for only a small fraction of the variance in performance. Indeed, a significant increase in attention driven by manipulation of the target failed to improve learning. The results therefore suggest a strong role for socio-pragmatic factors in supporting the facilitative effect of gestural cues on word learning.


Journal of Child Language | 2009

Gesture as a support for word learning: the case of under.

Karla K. McGregor; Katharina J. Rohlfing; Allison Bean; Ellen Marschner

Forty children, aged 1;8 to 2;0, participated in one of three training conditions meant to enhance their comprehension of the spatial term under: the +Gesture group viewed a symbolic gesture for under during training; the +Photo group viewed a still photograph of objects in the under relationship; the Model Only group did not receive supplemental symbolic support. Children's knowledge of under was measured before, immediately after, and two to three days after training. A gesture advantage was revealed when the gains exhibited by the groups on untrained materials (but not trained materials) were compared at delayed post-test (but not immediate post-test). Gestured input promoted more robust knowledge of the meaning of under, knowledge that was less tied to contextual familiarity and more prone to consolidation. Gestured input likely reduced cognitive load while emphasizing both the location and the movement relevant to the meaning of under.


IEEE Transactions on Autonomous Mental Development | 2013

Young Children’s Dialogical Actions: The Beginnings of Purposeful Intersubjectivity

Joanna Raczaszek-Leonardi; Iris Nomikou; Katharina J. Rohlfing

Are higher-level cognitive processes the only way that purposefulness can be introduced into human interaction? In this paper, we provide a microanalysis of early mother-child interactions and argue that the beginnings of joint intentionality can be traced to the practice of embedding the child's actions into culturally shaped episodes. As action becomes coaction, an infant's perception becomes tuned to interaction affordances.


IEEE Transactions on Autonomous Mental Development | 2009

Attention via Synchrony: Making Use of Multimodal Cues in Social Learning

Matthias Rolf; Marc Hanheide; Katharina J. Rohlfing

Infants learning about their environment are confronted with many stimuli of different modalities. A crucial problem, therefore, is how to discover which stimuli are related, for instance, in learning words. In making these multimodal "bindings," infants depend on social interaction with a caregiver to guide their attention towards relevant stimuli. The caregiver might, for example, visually highlight an object by shaking it while vocalizing the object's name. These cues are known to help structure the continuous stream of stimuli. To detect and exploit them, we propose a model of bottom-up attention based on multimodal signal-level synchrony. We focus on the guidance of visual attention from audio-visual synchrony, informed by recent adult-infant interaction studies. We demonstrate that our model is receptive to parental cues during child-directed tutoring. The findings discussed in this paper are consistent with recent results from developmental psychology but are, for the first time, obtained with an objective, computational model. The presence of "multimodal motherese" is verified directly on the audio-visual signal. Lastly, we hypothesize how our computational model facilitates tutoring interaction and discuss its application in interactive learning scenarios, enabling social robots to benefit from adult-like tutoring.
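
A minimal sketch of the signal-level idea, under assumptions not taken from the paper: treat the audio energy envelope and the visual motion energy of a scene region as two time series sampled at the video frame rate, and score their synchrony by windowed correlation. High-scoring windows would mark events such as shaking an object while naming it, which bottom-up attention could then select.

import numpy as np

def synchrony_scores(audio_energy, motion_energy, win=25):
    """Windowed Pearson correlation between audio and visual energy.

    Both inputs are 1-D arrays aligned to video frames (audio energy
    pre-averaged per frame). Returns one score per non-overlapping window.
    """
    a = np.asarray(audio_energy, dtype=float)
    v = np.asarray(motion_energy, dtype=float)
    scores = []
    for start in range(0, len(a) - win + 1, win):
        wa = a[start:start + win] - a[start:start + win].mean()
        wv = v[start:start + win] - v[start:start + win].mean()
        denom = np.sqrt((wa ** 2).sum() * (wv ** 2).sum())
        scores.append((wa * wv).sum() / denom if denom > 0 else 0.0)
    return np.array(scores)

# Toy input: a voice whose energy tracks the shaking of an object.
rng = np.random.default_rng(0)
shake = np.abs(np.sin(np.linspace(0, 12 * np.pi, 200)))
audio = shake + 0.1 * rng.standard_normal(200)
motion = shake + 0.1 * rng.standard_normal(200)
print(np.where(synchrony_scores(audio, motion) > 0.5)[0])  # synchronous windows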


IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) | 2011

A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction

Maha Salem; Katharina J. Rohlfing; Stefan Kopp; Frank Joublin

Gesture is an important feature of social interaction, frequently used by human speakers to illustrate what speech alone cannot provide, e.g., to convey referential, spatial, or iconic information. Accordingly, humanoid robots that are intended to engage in natural human-robot interaction should produce speech-accompanying gestures for comprehensible and believable behavior. But how does a robot's non-verbal behavior influence human evaluation of communication quality and of the robot itself? To address this research question we conducted two experimental studies. Using the Honda humanoid robot, we investigated how humans perceive various gestural patterns performed by the robot as they interact in a situational context. Our findings suggest that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech. This effect was stronger when participants were explicitly asked to direct their attention towards the robot during the interaction.


International Journal of Social Robotics | 2012

Tutor Spotter: Proposing a Feature Set and Evaluating It in a Robotic System

Katrin Solveig Lohan; Katharina J. Rohlfing; Karola Pitsch; Joe Saunders; Hagen Lehmann; Chrystopher L. Nehaniv; Kerstin Fischer; Britta Wrede

Robotics research has moved from learning by observation towards investigations of learning by interaction. This research is inspired by findings from developmental studies of human children and primates pointing to the fact that learning takes place in a social environment. Recently, driven by the idea that learning through observation or imitation is limited because the observed action does not always reveal its meaning, scaffolding or bootstrapping processes that support learning have received increased attention. However, in order to take advantage of teaching strategies, a system needs to be sensitive to a tutor, as children are. We therefore developed a module for spotting the tutor by monitoring her or his gaze and by detecting modifications in object presentation in the form of a looming action. In this article, we present the current state of the development of our contingency detection system as a set of features.
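
The looming cue in particular admits a simple operationalization. As a hypothetical sketch (the module's actual feature set is not reproduced here): track the apparent size of the presented object across consecutive frames; a sustained, near-monotonic growth in image area suggests the tutor is moving the object towards the learner's camera.

import numpy as np

def is_looming(areas, min_growth=1.3, min_frames=5):
    """Flag a looming presentation from per-frame object sizes.

    areas: apparent object size (e.g., bounding-box area in pixels) for
    consecutive frames. Thresholds here are illustrative, not the paper's.
    """
    a = np.asarray(areas, dtype=float)
    if len(a) < min_frames:
        return False
    growing = np.diff(a) > 0                   # frame-to-frame growth
    return growing.mean() > 0.8 and a[-1] / a[0] >= min_growth

print(is_looming([900, 980, 1100, 1250, 1400, 1600]))  # True: object moved closer
print(is_looming([900, 910, 905, 900, 915, 903]))      # False: object held still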


Journal of Cognition and Culture | 2003

Situatedness: The interplay between context(s) and situation

Katharina J. Rohlfing; Matthias Rehm; Karl Ulrich Goecke

In order to interpret the behaviour of cognitive systems, their integration into a specific cultural environment must be considered. The phenomenon of situatedness is a crucial determinant of this behaviour. We derive the notion of situatedness from the interplay between agent, situation, and context (divided into inter- and intracontext). The main objective of this paper is to connect a theoretical analysis of situatedness with its implications for empirical research. In particular, we consider processes of situated learning in natural and artificial systems.

Collaboration


Dive into Katharina J. Rohlfing's collaboration.

Top Co-Authors

Karola Pitsch, University of Duisburg-Essen
Kerstin Fischer, University of Southern Denmark
Joe Saunders, University of Hertfordshire