Publications


Featured research published by Yukie Nagai.


IEEE Transactions on Autonomous Mental Development | 2009

Computational Analysis of Motionese Toward Scaffolding Robot Action Learning

Yukie Nagai; Katharina J. Rohlfing

A difficulty in robot action learning is that robots do not know where to attend when observing an action demonstration. Inspired by human parent-infant interaction, we suggest that parental action demonstration to infants, called motionese, can scaffold robot learning just as it scaffolds infant learning. Since infants' knowledge about the context is limited, as is robots', parents are assumed to guide their attention properly by emphasizing the important aspects of the action. Our analysis employing a bottom-up attention model revealed that motionese has the effects of highlighting the initial and final states of the action, indicating significant state changes in it, and underlining the properties of objects used in the action. Suppression and addition of parents' body movements and their frequent social signals to infants produced these effects. Our findings are discussed with a view toward designing robots that can take advantage of parental teaching.
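
The bottom-up attention model used in this analysis belongs to the saliency-map family (in the spirit of Itti and Koch); the abstract gives no implementation details, so the following is only a hypothetical, minimal sketch of such a model, with illustrative channel choices and parameters.

```python
# Minimal bottom-up saliency sketch (Itti & Koch style).
# Hypothetical illustration; the paper's actual model and parameters
# are not specified in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, sigma_c=2.0, sigma_s=8.0):
    """Center-surround contrast as a rectified difference of Gaussians."""
    return np.abs(gaussian_filter(channel, sigma_c) -
                  gaussian_filter(channel, sigma_s))

def saliency_map(rgb, prev_gray=None):
    """Combine intensity, color-opponency, and motion conspicuity maps."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    maps = [
        center_surround(intensity),          # intensity contrast
        center_surround(r - g),              # red/green opponency
        center_surround(b - (r + g) / 2.0),  # blue/yellow opponency
    ]
    if prev_gray is not None:                # crude motion channel
        maps.append(np.abs(intensity - prev_gray))
    # Normalize each conspicuity map to [0, 1] before combining.
    maps = [m / m.max() if m.max() > 0 else m for m in maps]
    return sum(maps) / len(maps)

# The model attends to the peak of the map, e.g.:
# y, x = np.unravel_index(saliency_map(frame, prev).argmax(), frame.shape[:2])
```

Applied frame by frame to a demonstration video, the peaks of such a map indicate where a knowledge-free observer would look, which is how an analysis like the one above can quantify what parental action modifications highlight.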


International Conference on Development and Learning | 2009

People modify their tutoring behavior in robot-directed interaction for action learning

Anna-Lisa Vollmer; Katrin Solveig Lohan; Kerstin Fischer; Yukie Nagai; Karola Pitsch; Jannik Fritsch; Katharina J. Rohlfing; Britta Wrede

In developmental research, tutoring behavior has been identified as scaffolding infants' learning processes. It has been defined in terms of child-directed speech (Motherese), child-directed motion (Motionese), and contingency. In the field of developmental robotics, research often assumes that in human-robot interaction (HRI), robots are treated similarly to infants, because their immature cognitive capabilities benefit from this behavior. However, to our knowledge, it has barely been studied whether this is true and how exactly humans alter their behavior towards a robotic interaction partner. In this paper, we present results concerning the acceptance of a robotic agent in a social learning scenario, obtained by comparison to adults and 8-11-month-old infants under equal conditions. These results constitute an important empirical basis for making use of tutoring behavior in social robotics. In our study, we performed a detailed multimodal analysis of HRI in a tutoring situation using the example of a robot simulation equipped with a bottom-up saliency-based attention model [1]. Our results reveal significant differences in hand movement velocity, motion pauses, range of motion, and eye gaze, suggesting, for example, that adults decrease their hand movement velocity in Adult-Child Interaction (ACI) as opposed to Adult-Adult Interaction (AAI), and that this decrease is even larger in Adult-Robot Interaction (ARI). We also found important differences between ACI and ARI in how the behavior is modified over time as the interaction unfolds. These findings indicate the necessity of integrating top-down feedback structures into a bottom-up system for robots to be fully accepted as interaction partners.


International Conference on Development and Learning | 2009

From bottom-up visual attention to robot action learning

Yukie Nagai

This research addresses the challenge of developing an action learning model employing bottom-up visual attention. Although bottom-up attention enables robots to autonomously explore the environment, learn to recognize objects, and interact with humans, the instability of their attention, as well as the poor quality of the information detected at the attended location, has hindered robots from processing dynamic movements. In order to learn actions, robots have to attend stably to the relevant movement by ignoring noise while maintaining sensitivity to a new important movement. To meet these contradictory requirements, I introduce mechanisms for retinal filtering and stochastic attention selection inspired by human vision. The former reduces the complexity of the peripheral vision and thus enables robots to focus more on the currently attended location. The latter allows robots to flexibly shift their attention to a new prominent location, which is likely relevant to the demonstrated action. The signals detected at the attended location are then enriched based on spatial and temporal continuity so that robots can learn to recognize objects, movements, and their associations. Experimental results show that the proposed system can extract key actions from human action demonstrations.
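
The abstract does not detail the retinal filtering mechanism; one common way to realize it is eccentricity-dependent blur, sharp in the fovea around the current fixation and increasingly smoothed toward the periphery. The sketch below is a hypothetical minimal version for a grayscale image; the fovea radius and blur strength are illustrative, not parameters from the paper.

```python
# Hypothetical retinal filter: blur grows with eccentricity from the
# current fixation, suppressing peripheral detail (grayscale input).
# Parameters are illustrative, not taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinal_filter(image, fixation, fovea_radius=30.0, max_sigma=6.0):
    """Blend a sharp and a blurred copy by distance from the fixation."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - fixation[0], xs - fixation[1])
    # Weight is 0 inside the fovea and rises smoothly toward the periphery.
    weight = np.clip((dist - fovea_radius) / (dist.max() - fovea_radius),
                     0.0, 1.0)
    blurred = gaussian_filter(image.astype(float), max_sigma)
    return (1.0 - weight) * image + weight * blurred
```

Because only strong peripheral signals survive the blur, saliency computed on the filtered image is naturally dominated by the attended region, which is the stabilizing effect the abstract describes.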


International Conference on Robotics and Automation | 2008

Toward designing a robot that learns actions from parental demonstrations

Yukie Nagai; Claudia Muhl; Katharina J. Rohlfing

How to teach actions to a robot, as well as how a robot learns actions, is an important issue to be discussed in designing robot learning systems. Inspired by human parent-infant interaction, we hypothesize that a robot equipped with infant-like abilities can take advantage of proper parental teaching. Parents are known to significantly alter their infant-directed actions compared with adult-directed ones, e.g., by making more pauses between movements, which is assumed to aid the infants' understanding of the actions. As a first step, we analyzed parental actions using a primal attention model. The model, based on visual saliency, can detect likely important locations in a scene without employing any knowledge about the actions or the environment. Our statistical analysis revealed that the model was able to extract meaningful structures of the actions, e.g., the initial and final states of the actions and the significant state changes within them, which were highlighted by parental action modifications. We further discuss the issue of designing an infant-like robot that can induce parent-like teaching, and present a human-robot interaction experiment evaluating our robot simulation equipped with the saliency model.


Robot and Human Interactive Communication | 2007

Does Disturbance Discourage People from Communicating with a Robot?

Claudia Muhl; Yukie Nagai

We suggest that people's responses to a robot whose attention starts to be distracted show whether or not they accept the robot as an intentional communication partner. Human-robot interaction (HRI), like human-human interaction (HHI), is sometimes interrupted by disturbing factors. However, in HHI people continue to communicate with a partner because they presuppose that the partner may shift his/her interactive orientation based on his/her internal state. We designed a communication robot equipped with a mechanism of saliency-based visual attention and evaluated it in an observational experiment of HRI. Our sociological analysis of people's responses to our robot showed that it was accepted as a proactive communication agent. When the robot shifted its attention to an irrelevant target, the human partners, for example, followed the line of the robot's gaze and tried to regain its attention by exaggerating their actions and increasing their communication channels, as they would do toward a human partner. Based on these results, we conclude that disturbance can be an encouraging factor for human activity in HRI. The results are discussed from both a sociological and an engineering point of view.


KI '07: Proceedings of the 30th Annual German Conference on Advances in Artificial Intelligence | 2007

On Constructing a Communicative Space in HRI

Claudia Muhl; Yukie Nagai; Gerhard Sagerer

Interaction means sharing a communicative space with others. Social interactions are reciprocally oriented activities among currently present partners. An artificial system can be such a partner for humans. In this study, we investigate the effect of disturbance in human-robot interaction. Disturbance in communication is an attention shift of a partner caused by an external factor. In human-human interaction, people cope with this problem and continue to communicate because they presuppose that the partner might get irritated and thereby shift his/her interactive orientation. Our hypothesis is that people reproduce the social attitude of re-attracting the partner's attention by varying their communication channels even toward a robot. We conducted an experiment on hybrid interaction between a human and a robot simulation and analyzed it from a sociological and an engineering perspective. Our qualitative analysis revealed that people established a communicative space with our robot and accepted it as a proactive agent.


International Conference on Development and Learning | 2008

Parental action modification highlighting the goal versus the means

Yukie Nagai; Katharina J. Rohlfing

Parents significantly alter their infant-directed actions compared to adult-directed ones, which is assumed to assist the infants' processing of the actions. This paper discusses differences in parental action modification depending on whether the goal or the means of a task is more crucial. When demonstrating a task to an infant, parents try to emphasize its important aspects by suppressing or adding movement. Our hypothesis is that in a goal-crucial task the initial and final states of the task should be highlighted by parental actions, whereas in a means-crucial task the movement is underlined. Our analysis using a saliency-based attention model partially verified this: when focusing on the goal, parents tended to emphasize the initial and final states of the objects used in the task by taking a long pause before starting and after fulfilling the task. When focusing on the means, parents shook the object to highlight it, which consequently made its state invisible. We discuss our findings regarding the uniqueness and commonality of parental action modification. We also describe our contribution to the development of robots capable of imitating human actions.


Intelligent Robots and Systems | 2009

Stability and sensitivity of bottom-up visual attention for dynamic scene analysis

Yukie Nagai

This paper presents an architecture extending bottom-up visual attention to dynamic scene analysis. In dynamic scenes, particularly when learning actions from demonstrations, robots have to focus stably on the relevant movement by disregarding surrounding noise, but still maintain sensitivity to a new relevant movement that might occur in the surroundings. In order to meet the contradictory requirements of stability and sensitivity of attention, this paper introduces biologically inspired mechanisms for retinal filtering and stochastic attention selection. The former reduces the complexity of peripheral signals by filtering the input image, which enhances bottom-up saliency in the fovea while detecting only prominent signals from the periphery. The latter allows robots to shift attention to a less salient but still conspicuous location in the periphery, which is likely relevant to the demonstrated action. Integrating these mechanisms with the computation of bottom-up saliency enables robots to extract important action sequences from task demonstrations. Experiments with a simulated and a natural scene show that the proposed model performs better than comparative models.
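
The abstract likewise leaves the stochastic attention selection unspecified; a minimal way to obtain the described behavior is to sample the next fixation from a softmax distribution over the saliency map rather than always taking its maximum. In the hypothetical sketch below, the temperature parameter is an illustrative knob trading stability (low values approximate the argmax) against sensitivity to weaker peripheral peaks.

```python
# Hypothetical stochastic attention selection: sample the next fixation
# from a softmax over saliency values instead of taking the argmax.
# The temperature is an illustrative knob, not a parameter from the paper.
import numpy as np

def select_fixation(saliency, temperature=0.05, rng=None):
    """Return (row, col) of the next fixation, sampled by saliency."""
    if rng is None:
        rng = np.random.default_rng()
    logits = saliency.ravel() / temperature
    logits -= logits.max()                  # for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    idx = rng.choice(probs.size, p=probs)
    return np.unravel_index(idx, saliency.shape)
```

With a low temperature the fixation stays locked on the dominant movement; raising it makes occasional shifts to less salient but still conspicuous locations more likely, matching the stability/sensitivity trade-off described above.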


International Conference on Artificial Neural Networks | 2011

A perceptual memory system for affordance learning in humanoid robots

Marc Kammer; Marko Tscherepanow; Thomas Schack; Yukie Nagai

Memory constitutes an essential cognitive capability of humans and animals. It allows them to act in very complex, non-stationary environments. In this paper, we propose a perceptual memory system intended to be applied to a humanoid robot learning affordances. Following the properties of biological memory systems, it has been designed to enable life-long learning without catastrophic forgetting. Based on clustering sensory information, a symbolic representation is derived automatically. In contrast to alternative approaches, our memory system does not rely on pre-trained models and works completely unsupervised.
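
The abstract specifies unsupervised clustering without catastrophic forgetting but not the concrete algorithm; incremental, vigilance-gated clustering in the style of Adaptive Resonance Theory is one standard way to obtain both properties. The sketch below is a hypothetical minimal version: an input either refines the nearest existing cluster (when it is similar enough) or founds a new one, so established clusters are never overwritten wholesale. All parameters are illustrative.

```python
# Hypothetical ART-style incremental clustering: inputs either refine the
# closest existing prototype or create a new one, so earlier clusters are
# preserved (no catastrophic forgetting). Parameters are illustrative.
import numpy as np

class PerceptualMemory:
    def __init__(self, vigilance=0.5, learning_rate=0.1):
        self.vigilance = vigilance      # max distance to join a cluster
        self.lr = learning_rate
        self.prototypes = []            # one vector per symbol/cluster

    def observe(self, x):
        """Assign x to a cluster index; grow the memory if nothing matches."""
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            k = int(np.argmin(dists))
            if dists[k] <= self.vigilance:      # match: update winner only
                self.prototypes[k] += self.lr * (x - self.prototypes[k])
                return k
        self.prototypes.append(x.copy())        # novelty: new symbol
        return len(self.prototypes) - 1
```

Each cluster index can then serve as the automatically derived symbol mentioned in the abstract; because only the winning prototype is adapted per observation, learning about one object does not erase what was learned about another.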


Frontiers in Computational Neuroscience | 2011

From Affordances to Situated Affordances in Robotics - Why Context is Important

Marc Kammer; Marko Tscherepanow; Thomas Schack; Yukie Nagai

Collaboration


Dive into Yukie Nagai's collaboration.

Top Co-Authors

Karola Pitsch

University of Duisburg-Essen

Verena V. Hafner

Humboldt University of Berlin
