Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Katrin Solveig Lohan is active.

Publication


Featured research published by Katrin Solveig Lohan.


International Journal of Social Robotics | 2012

Tutor Spotter: Proposing a Feature Set and Evaluating It in a Robotic System

Katrin Solveig Lohan; Katharina J. Rohlfing; Karola Pitsch; Joe Saunders; Hagen Lehmann; Chrystopher L. Nehaniv; Kerstin Fischer; Britta Wrede

From learning by observation, robotic research has moved towards investigations of learning by interaction. This research is inspired by findings from developmental studies on human children and primates pointing to the fact that learning takes place in a social environment. Recently, driven by the idea that learning through observation or imitation is limited because the observed action does not always reveal its meaning, scaffolding or bootstrapping processes that support learning have received increased attention. However, in order to take advantage of teaching strategies, a system needs to be sensitive to a tutor, as children are. We therefore developed a module for spotting the tutor by monitoring her or his gaze and detecting modifications in object presentation in the form of a looming action. In this article, we present the current state of the development of our contingency detection system as a set of features.
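To make the feature-set idea concrete, the following minimal Python sketch combines two of the cues named above, gaze directed at the robot and a looming object presentation, into a single tutor-spotting check. All names, data structures, and thresholds here are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: combine a per-frame gaze estimate with a looming check
# over a short window of frames to decide whether a tutor is addressing the robot.
from dataclasses import dataclass

@dataclass
class Frame:
    gaze_on_robot: bool      # assumed output of a gaze/head-pose estimator
    object_area_px: float    # apparent size of the presented object in the image

def is_looming(frames: list[Frame], growth_threshold: float = 1.3) -> bool:
    """Treat the object as looming if its apparent size grows by the given factor."""
    if len(frames) < 2 or frames[0].object_area_px <= 0:
        return False
    return frames[-1].object_area_px / frames[0].object_area_px >= growth_threshold

def tutor_spotted(frames: list[Frame], min_gaze_ratio: float = 0.5) -> bool:
    """Spot a tutor when gaze is mostly on the robot and the object presentation looms."""
    if not frames:
        return False
    gaze_ratio = sum(f.gaze_on_robot for f in frames) / len(frames)
    return gaze_ratio >= min_gaze_ratio and is_looming(frames)

if __name__ == "__main__":
    window = [Frame(True, 100.0), Frame(True, 120.0), Frame(False, 150.0)]
    print(tutor_spotted(window))  # True: gaze mostly on robot, object grew by 1.5x
```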


human-robot interaction | 2012

Levels of embodiment: linguistic analyses of factors influencing HRI

Kerstin Fischer; Katrin Solveig Lohan; Kilian A. Foth

In this paper, we investigate the role of a robot's physical embodiment and its degrees of freedom in HRI. Both factors have been suggested to be relevant in definitions of embodiment, yet so far their effects on the way people interact with robots are not well understood. Linguistic analyses of verbal interactions with robots that differ in physical embodiment and degrees of freedom provide a useful methodology for investigating factors that condition human-robot interaction. Results show that both physical embodiment and degrees of freedom influence interaction, and that the effect of physical embodiment lies in the interpersonal domain, concerning the extent to which the robot is perceived as an interaction partner, whereas degrees of freedom influence the way users project the robot's suitability for the current task.


robot and human interactive communication | 2012

Better be reactive at the beginning. Implications of the first seconds of an encounter for the tutoring style in human-robot-interaction

Karola Pitsch; Katrin Solveig Lohan; Katharina J. Rohlfing; Joe Saunders; Chrystopher L. Nehaniv; Britta Wrede

The paper investigates the effects of a robot's “on-line” feedback during a tutoring situation with a human tutor. The analysis is based on a study conducted with an iCub robot that autonomously generates its feedback (gaze, pointing gesture) from the system's perception of the tutor's actions, using the idea of reciprocity of actions. Sequential micro-analysis of two opposite cases reveals how the robot's behavior (responsive vs. non-responsive) pro-actively shapes the tutor's conduct and thus co-produces the way in which it is being tutored. A dialogic and a monologic tutoring style are distinguished. The first 20 seconds of an encounter are found to shape the user's perception and expectations of the system's competences and lead to a relatively stable tutoring style even if the robot's reactivity and the appropriateness of its feedback change.


international conference on development and learning | 2010

Developing feedback: How children of different age contribute to a tutoring interaction with adults

Anna-Lisa Vollmer; Karola Pitsch; Katrin Solveig Lohan; Jannik Fritsch; Katharina J. Rohlfing; Britta Wrede

Learning is a social and interactional endeavor, in which the learner generally receives support from his/her social environment [1]. In this process, the learner's feedback is important as it provides information about the learner's current understanding which, in turn, enables the tutor to adjust his/her presentation accordingly [2], [3]. Thus, through their feedback, learners can actively shape the tutor's presentation, a resource that is highly valuable if we aim at enabling robot systems to learn from a tutor in social interaction. But what kind of feedback should a robot produce, and at which time? In this paper, we analyze the interaction between parents and their infants (8 to 30 months) in a tutoring scenario with regard to the feedback provided by the learner in three different age groups. Our combined qualitative and quantitative analysis reveals which features of the feedback change with the infant's progressing age and cognitive capabilities.


international conference on development and learning | 2011

Contingency allows the robot to spot the tutor and to learn from interaction

Katrin Solveig Lohan; Karola Pitsch; Katharina J. Rohlfing; Kerstin Fischer; Joe Saunders; Hagen Lehmann; Chrystopher L. Nehaniv; Britta Wrede

Aiming at an artificial system that learns from a human tutor, we developed a contingency module designed to elicit tutoring behavior, which we evaluated by implementing it on the robotic platform iCub and studying it within an interaction with users. For the evaluation of our system, we consider not only the participants' behavior but also the system's log files as dependent variables (as suggested in [15] for the improvement of HRI design). We further applied Sequential Analysis as a qualitative method that provides micro-analytical insights into the sequential structure of the interaction. This way, we are able to investigate more closely the interrelationship between the robot's and the tutor's actions and how they respond to each other. We focus on two cases: in the first, the system module reacted appropriately to the interaction partner; in the second, the contingency module failed to spot the tutor. We found that the contingency module enables the robot to engage in an interaction with the human tutor, who orients to the robot's conduct as appropriate and responsive. In contrast, when the robot did not engage in an appropriately responsive interaction, the tutor oriented more towards the object while gazing less at the robot.


Topics in Cognitive Science | 2014

Co‐development of Manner and Path Concepts in Language, Action, and Eye‐Gaze Behavior

Katrin Solveig Lohan; Sascha S. Griffiths; Alessandra Sciutti; Tim C. Partmann; Katharina J. Rohlfing

In order for artificially intelligent systems to interact naturally with human users, they need to be able to learn from human instructions when actions should be imitated. Human tutoring typically consists of action demonstrations accompanied by speech. In the following, the characteristics of human tutoring during action demonstration are examined. A special focus is put on the distinction between two kinds of motion events: path-oriented actions and manner-oriented actions. This distinction is inspired by the cognitive linguistics literature, which indicates that the human conceptual system can distinguish these two types of motion. The two kinds of actions are described in language by more path-oriented or more manner-oriented utterances. In path-oriented utterances, the source, trajectory, or goal is emphasized, whereas in manner-oriented utterances the medium, velocity, or means of motion are highlighted. We examined a video corpus of adult-child interactions comprising three age groups of children (pre-lexical, early lexical, and lexical) and two different tasks, one emphasizing manner more strongly and one emphasizing path more strongly. We analyzed the language and motion of the caregiver and the gazing behavior of the child to highlight the differences between the tutoring and the acquisition of the manner and path concepts. The results suggest that age is an important factor in the development of these action categories. The analysis of this corpus has also been exploited to develop an intelligent robotic behavior, the tutoring spotter system, able to emulate children's behavior in a tutoring situation, with the aim of evoking in human subjects a natural and effective behavior when teaching a robot. The findings related to the development of manner and path concepts have been used to implement new, effective feedback strategies in the tutoring spotter system, which should yield improvements in human-robot interaction.
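As a toy illustration of the path/manner distinction described above, the sketch below labels an utterance by counting simple lexical cues. The cue lists and the function are invented for this example and are far cruder than the linguistic coding used in the study.

```python
# Illustrative, hypothetical keyword-based labelling of utterances as
# path-oriented (source/trajectory/goal) or manner-oriented (medium/velocity/means).
PATH_CUES = {"into", "onto", "to", "from", "up", "down", "across", "out"}
MANNER_CUES = {"roll", "slide", "shake", "bounce", "spin", "quickly", "slowly"}

def classify_utterance(utterance: str) -> str:
    """Return 'path', 'manner', or 'unclear' based on which cue set dominates."""
    tokens = set(utterance.lower().split())
    path_hits = len(tokens & PATH_CUES)
    manner_hits = len(tokens & MANNER_CUES)
    if path_hits > manner_hits:
        return "path"
    if manner_hits > path_hits:
        return "manner"
    return "unclear"

if __name__ == "__main__":
    print(classify_utterance("put the ball into the box"))  # path
    print(classify_utterance("roll the ball slowly"))        # manner
```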


human-robot interaction | 2014

HRI: a bridge between robotics and neuroscience

Alessandra Sciutti; Katrin Solveig Lohan; Yukie Nagai

A fundamental challenge for robotics is to transfer natural human social skills to interaction with a robot. At the same time, neuroscience and psychology are still investigating the mechanisms behind the development of human-human interaction. HRI therefore becomes an ideal contact point for these different disciplines, as the robot can join these two research streams by serving different roles. From a robotics perspective, the study of interaction is used to implement cognitive architectures and develop cognitive models, which can then be tested in real-world environments. From a neuroscientific perspective, robots could represent an ideal stimulus for establishing an interaction with human partners in a controlled manner, making it possible to study quantitatively the behavioral and neural underpinnings of both cognitive and physical interaction. Ideally, the integration of these two approaches could lead to a positive loop: the implementation of new cognitive architectures may raise interesting new questions for neuroscientists, and the behavioral and neuroscientific results of human-robot interaction studies could validate or provide new input for robotics engineers. However, integrating two different disciplines is always difficult, as even similar goals are often masked by differences in language or methodology across fields. The aim of this workshop is to provide a venue for researchers of different disciplines to discuss and present possible points of contact, to address the issues, and to highlight the advantages of bridging the two disciplines in the context of the study of interaction.


Topics in Cognitive Science | 2014

The ITALK project: A developmental robotics approach to the study of individual, social, and linguistic learning

Frank Broz; Chrystopher L. Nehaniv; Tony Belpaeme; Ambra Bisio; Kerstin Dautenhahn; Luciano Fadiga; Tomassino Ferrauto; Kerstin Fischer; Frank Förster; Onofrio Gigliotta; Sascha S. Griffiths; Hagen Lehmann; Katrin Solveig Lohan; Caroline Lyon; Davide Marocco; Gianluca Massera; Giorgio Metta; Vishwanathan Mohan; Anthony F. Morse; Stefano Nolfi; Francesco Nori; Martin Peniak; Karola Pitsch; Katharina J. Rohlfing; Gerhard Sagerer; Yo Sato; Joe Saunders; Lars Schillingmann; Alessandra Sciutti; Vadim Tikhanoff

This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.


international conference on development and learning | 2012

Contingency scaffolds language learning

Katrin Solveig Lohan; Katharina J. Rohlfing; Joe Saunders; Chrystopher L. Nehaniv; Britta Wrede

In human-robot interaction, the question of how to communicate is an important one. The answer to this question can be approached from several perspectives. One approach to studying how a robot should best behave in an interaction with a human is to provide consistent robotic behavior. From this we can gain insights into which parameters trigger which responsive behaviors in a user. This method allows us as roboticists to investigate how we can elicit a specific behavior in users in order to facilitate the robot's learning. In previous studies, we have shown how responsive eye gaze and feedback based on looming detection modify human tutoring behavior [1]. In this paper, we present a study carried out within the ITALK project. The study targets how we can tune the feedback strategies of the iCub robot to evoke tutoring behavior in a human tutor that supports a language acquisition system. We used a longitudinal approach for the study in order to also verify the verbal feedback given by the robot.


Frontiers in Robotics and AI | 2016

Language Meddles with Infants’ Processing of Observed Actions

Alessandra Sciutti; Katrin Solveig Lohan; Gustaf Gredebäck; Benjamin Koch; Katharina J. Rohlfing

When learning from actions, language can be a crucial source for specifying the learning content. Understanding its interactions with action processing is therefore fundamental when attempting to model the development of human learning in order to replicate it in artificial agents. From early childhood, two different processes participate in shaping infants' understanding of the events occurring around them: infants' motor system influences their action perception, driving their attention to the action goal; additionally, parental language influences the way children parse what they observe into relevant units. To date, however, it has barely been investigated whether these two cognitive processes – action understanding and language – are separate and independent, or whether language might interfere with the former. To address this question, we evaluated whether a verbal narrative concurrent with action observation could avert 14-month-old infants' attention from an agent's action goal, which is otherwise naturally selected when the action is performed by an agent. The infants observed movies of an actor reaching for and transporting balls into a box. In three between-subject conditions, the reaching movement was accompanied either by no audio (Base condition), a sine-wave sound (Sound condition), or a speech sample (Speech condition). The results show that the presence of a speech sample underlining the movement phase significantly reduced the number of predictive gaze shifts to the goal compared to the other conditions. Our findings thus indicate that any modelling of the interaction between language and action processing will have to consider a potential top-down effect of the former, as language can meddle with the predictive behavior typical of the observation of goal-oriented actions.
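The dependent measure described above, the rate of predictive gaze shifts to the action goal, can be sketched as follows. The field layout, timings, and example numbers are hypothetical and serve only to show how such a proportion might be computed per condition.

```python
# Hypothetical sketch: a gaze shift counts as predictive if it arrives at the goal
# before the transported object does; the rate is the proportion of such trials.
def predictive_rate(trials: list[tuple[float, float]]) -> float:
    """trials: (gaze_arrival_s, object_arrival_s) pairs for one condition."""
    hits = sum(1 for gaze_t, object_t in trials if gaze_t < object_t)
    return hits / len(trials)

if __name__ == "__main__":
    base   = [(1.1, 1.5), (1.3, 1.5), (1.6, 1.5)]   # no audio
    speech = [(1.6, 1.5), (1.7, 1.5), (1.2, 1.5)]   # concurrent speech sample
    print(f"Base:   {predictive_rate(base):.2f}")    # 0.67
    print(f"Speech: {predictive_rate(speech):.2f}")  # 0.33
```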

Collaboration


Dive into Katrin Solveig Lohan's collaborations.

Top Co-Authors

Kerstin Fischer

University of Southern Denmark

Joe Saunders

University of Hertfordshire

Alessandra Sciutti

Istituto Italiano di Tecnologia

Karola Pitsch

University of Duisburg-Essen

Hagen Lehmann

Istituto Italiano di Tecnologia

Ruth Aylett

Heriot-Watt University
