Publication


Featured research published by Yukiko I. Nakano.


meeting of the association for computational linguistics | 2003

Towards a Model of Face-to-Face Grounding

Yukiko I. Nakano; Gabe Reinstein; Tom Stocky; Justine Cassell

We investigate the verbal and nonverbal means for grounding, and propose a design for embodied conversational agents that relies on both kinds of signals to establish common ground in human-computer interaction. We analyzed eye gaze, head nods and attentional focus in the context of a direction-giving task. The distribution of nonverbal behaviors differed depending on the type of dialogue move being grounded, and the overall pattern reflected a monitoring of lack of negative feedback. Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state.
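
As a rough, purely illustrative sketch (not the paper's implementation), updating dialogue state from verbal and nonverbal grounding acts could look roughly as follows; the act labels and the rule that absence of negative feedback grounds pending material are assumptions made for this example:

```python
from dataclasses import dataclass, field

# Hypothetical act labels; the real system distinguishes richer verbal and
# nonverbal grounding signals (gaze, head nods, attentional focus).
POSITIVE_EVIDENCE = {"acknowledge", "nod", "gaze_at_map"}
NEGATIVE_EVIDENCE = {"gaze_at_speaker", "request_repair"}

@dataclass
class DialogueState:
    grounded: list = field(default_factory=list)  # material both parties accept
    pending: list = field(default_factory=list)   # material awaiting grounding

    def present(self, content: str) -> None:
        """Speaker presents new material; it stays pending until grounded."""
        self.pending.append(content)

    def observe(self, listener_act: str) -> None:
        """Update state from a listener act, monitoring for negative feedback:
        a trouble signal keeps material pending (the agent should elaborate),
        while positive evidence grounds it."""
        if not self.pending:
            return
        if listener_act in NEGATIVE_EVIDENCE:
            return
        if listener_act in POSITIVE_EVIDENCE:
            self.grounded.extend(self.pending)
            self.pending.clear()

state = DialogueState()
state.present("Turn left at the bank.")
state.observe("nod")
print(state.grounded)  # ['Turn left at the bank.']
```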


meeting of the association for computational linguistics | 2001

Non-Verbal Cues for Discourse Structure

Justine Cassell; Yukiko I. Nakano; Timothy W. Bickmore; Candace L. Sidner; Charles Rich

This paper addresses the issue of designing embodied conversational agents that exhibit appropriate posture shifts during dialogues with human users. Previous research has noted the importance of hand gestures, eye gaze and head nods in conversations between embodied agents and humans. We present an analysis of human monologues and dialogues that suggests that postural shifts can be predicted as a function of discourse state in monologues, and discourse and conversation state in dialogues. On the basis of these findings, we have implemented an embodied conversational agent that uses Collagen in such a way as to generate postural shifts.
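
The reported regularity, posture shifts occurring mainly around discourse segment boundaries, can be illustrated with a toy rule; the probabilities below are invented, whereas the paper derives such values from its analysis of monologues and dialogues:

```python
import random

# Invented probabilities for illustration only.
P_SHIFT_AT_SEGMENT_BOUNDARY = 0.7
P_SHIFT_WITHIN_SEGMENT = 0.1

def posture_shift(at_segment_boundary: bool, rng: random.Random) -> bool:
    """Decide whether the agent shifts posture, conditioned on discourse state."""
    p = P_SHIFT_AT_SEGMENT_BOUNDARY if at_segment_boundary else P_SHIFT_WITHIN_SEGMENT
    return rng.random() < p

rng = random.Random(0)
print(posture_shift(True, rng), posture_shift(False, rng))
```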


intelligent user interfaces | 2010

Estimating user's engagement from eye-gaze behaviors in human-agent conversations

Yukiko I. Nakano; Ryo Ishii

In face-to-face conversations, speakers are continuously checking whether the listener is engaged in the conversation and change their conversational strategy if the listener is not fully engaged. With the goal of building a conversational agent that can adaptively control conversations with the user, this study analyzes the user's gaze behaviors and proposes a method for estimating whether the user is engaged in the conversation based on gaze transition 3-gram patterns. First, we conduct a Wizard-of-Oz experiment to collect the user's gaze behaviors. Based on the analysis of the gaze data, we propose an engagement estimation method that detects the user's disengagement gaze patterns. The algorithm is implemented as a real-time engagement-judgment mechanism and is incorporated into a multimodal dialogue manager in a conversational agent. The agent estimates the user's conversational engagement and generates probing questions when the user is distracted from the conversation. Finally, we conduct an evaluation experiment using the proposed engagement-sensitive agent and demonstrate that the engagement estimation function improves the user's impression of the agent and of the interaction with the agent. In addition, probing performed with proper timing was also found to have a positive effect on the user's verbal/nonverbal behaviors in communication with the conversational agent.
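
A minimal sketch of the 3-gram idea, with hypothetical gaze-target labels and an invented set of disengagement patterns (the real patterns come from the Wizard-of-Oz data described above):

```python
from collections import Counter
from itertools import groupby

# Invented disengagement patterns; the actual patterns are derived from the
# Wizard-of-Oz gaze data described in the abstract.
DISENGAGEMENT_3GRAMS = {
    ("agent", "screen", "elsewhere"),
    ("screen", "elsewhere", "screen"),
}

def transition_3grams(gaze_targets):
    """Collapse consecutive identical targets, then emit 3-grams of gaze transitions."""
    collapsed = [target for target, _ in groupby(gaze_targets)]
    return [tuple(collapsed[i:i + 3]) for i in range(len(collapsed) - 2)]

def is_disengaged(gaze_targets, threshold=1):
    """Flag disengagement when enough known disengagement 3-grams are observed."""
    counts = Counter(transition_3grams(gaze_targets))
    hits = sum(counts[g] for g in DISENGAGEMENT_3GRAMS)
    return hits >= threshold

# Hypothetical per-frame gaze-target labels from an eye tracker.
frames = ["agent", "agent", "screen", "elsewhere", "elsewhere", "agent"]
print(is_disengaged(frames))  # True: contains ('agent', 'screen', 'elsewhere')
```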


AI & Society | 2009

From observation to simulation: generating culture-specific behavior for interactive systems

Matthias Rehm; Yukiko I. Nakano; Elisabeth André; Toyoaki Nishida; Nikolaus Bee; Birgit Endrass; Michael Wissner; Afia Akhter Lipi; Hung-Hsuan Huang

In this article we present a parameterized model for generating multimodal behavior based on cultural heuristics. To this end, a multimodal corpus analysis of human interactions in two cultures serves as the empirical basis for the modeling endeavor. Integrating the results from this empirical study with a well-established theory of cultural dimensions, it becomes feasible to generate culture-specific multimodal behavior in embodied agents by giving evidence for the cultural background of the agent. Two sample applications are presented that make use of the model and are designed to be applied in the area of coaching intercultural communication.
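
As a loose illustration of what a parameterized culture-to-behavior mapping might look like (the dimensions, value ranges, and linear mapping below are all invented; the article grounds its parameters in corpus data and an established theory of cultural dimensions):

```python
from dataclasses import dataclass

@dataclass
class CulturalProfile:
    # Illustrative dimensions in [0, 1]; purely an assumption for this sketch.
    individualism: float
    power_distance: float

@dataclass
class BehaviorParameters:
    gesture_expansiveness: float   # relative size of gestures
    interpersonal_distance: float  # preferred distance in meters
    pause_tolerance: float         # tolerated silence before taking the turn (s)

def behavior_from_culture(profile: CulturalProfile) -> BehaviorParameters:
    """Map cultural dimensions to behavior parameters (invented linear mapping)."""
    return BehaviorParameters(
        gesture_expansiveness=0.3 + 0.6 * profile.individualism,
        interpersonal_distance=0.6 + 0.5 * profile.power_distance,
        pause_tolerance=0.5 + 1.5 * (1.0 - profile.individualism),
    )

print(behavior_from_culture(CulturalProfile(individualism=0.8, power_distance=0.3)))
```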


intelligent virtual agents | 2008

Culture-Specific First Meeting Encounters between Virtual Agents

Matthias Rehm; Yukiko I. Nakano; Elisabeth André; Toyoaki Nishida

We present our concept of integrating culture as a computational parameter for modeling multimodal interactions with virtual agents. As culture is a social rather than a psychological notion, its influence becomes evident in interactions where cultural patterns of behavior and interpretation mismatch. Nevertheless, when culture is taken seriously, its influence penetrates most layers of agent behavior planning and generation. In this article we concentrate on a first-meeting scenario, present our model of an interactive agent system, and identify where cultural parameters play a role. To assess the viability of our approach, we outline an evaluation study that is currently being set up.


human-robot interaction | 2012

Listener agent for elderly people with dementia

Yoichi Sakai; Yuuko Nonaka; Kiyoshi Yasuda; Yukiko I. Nakano

With the goal of developing a conversational humanoid that can serve as a companion for people with dementia, we propose an autonomous virtual agent that can generate backchannel feedback, such as head nods and verbal acknowledgement, on the basis of acoustic information in the user's speech. The system also provides speech recognition and language understanding functionalities, which are potentially useful for evaluating the cognitive status of elderly people on a daily basis.
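
A toy sketch of acoustically triggered backchannel feedback; the features and thresholds are invented and merely stand in for whatever acoustic analysis the actual system performs:

```python
from dataclasses import dataclass
import random

@dataclass
class ProsodyFrame:
    """Hypothetical per-utterance acoustic summary from a speech front end."""
    pause_after_ms: float  # silence following the utterance
    pitch_slope: float     # negative = falling intonation at the end

def choose_backchannel(frame: ProsodyFrame, rng: random.Random):
    """Return a backchannel action, or None, based on invented threshold rules."""
    if frame.pause_after_ms < 200:
        return None                      # user is probably still talking
    if frame.pitch_slope < -0.2:
        return "verbal_acknowledgement"  # e.g. a short "I see"
    if rng.random() < 0.5:
        return "head_nod"
    return None

rng = random.Random(1)
print(choose_backchannel(ProsodyFrame(pause_after_ms=450, pitch_slope=-0.4), rng))
```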


north american chapter of the association for computational linguistics | 2004

Converting text into agent animations: assigning gestures to text

Yukiko I. Nakano; Masashi Okamoto; Daisuke Kawahara; Qing Li; Toyoaki Nishida

This paper proposes a method for assigning gestures to text based on lexical and syntactic information. First, our empirical study identified lexical and syntactic information strongly correlated with gesture occurrence and suggested that syntactic structure is more useful for judging gesture occurrence than local syntactic cues. Based on the empirical results, we have implemented a system that converts text into an animated agent that gestures and speaks synchronously.
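
A rough sketch of lexical gesture assignment; the keyword lists and gesture types are invented, and the syntactic component the paper emphasizes is omitted for brevity:

```python
# Invented lexical cue lists; the paper identifies such cues empirically and
# additionally exploits syntactic structure, which is not modeled here.
SPATIAL_WORDS = {"left", "right", "above", "below", "around"}
QUANTITY_WORDS = {"large", "small", "many", "few"}

def assign_gestures(tokens):
    """Return (token, gesture) pairs based on simple lexical cues."""
    annotated = []
    for token in tokens:
        word = token.lower()
        if word in SPATIAL_WORDS:
            gesture = "deictic"      # point toward the referenced direction
        elif word in QUANTITY_WORDS:
            gesture = "iconic_size"  # hands indicate size or amount
        else:
            gesture = None
        annotated.append((token, gesture))
    return annotated

print(assign_gestures("Turn left at the large building".split()))
```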


intelligent virtual agents | 2008

Estimating User's Conversational Engagement Based on Gaze Behaviors

Ryo Ishii; Yukiko I. Nakano

In face-to-face conversations, speakers are continuously checking whether the listener is engaged in the conversation. When the listener is not fully engaged in the conversation, the speaker changes the conversational contents or strategies. With the goal of building a conversational agent that can control conversations with the user in such an adaptive way, this study analyzes the user's gaze behaviors and proposes a method for predicting whether the user is engaged in the conversation based on gaze transition 3-gram patterns. First, we conducted a Wizard-of-Oz experiment to collect the user's gaze behaviors as well as the user's subjective reports and an observer's judgment concerning the user's interest in the conversation. Next, we proposed an engagement estimation algorithm that estimates the user's degree of engagement from gaze transition patterns. This method takes account of individual differences in gaze patterns. The algorithm is implemented as a real-time engagement-judgment mechanism, and the results of our evaluation experiment showed that our method can predict the user's conversational engagement quite well.
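
Building on the same 3-gram idea, one illustrative way to account for individual differences is to normalize pattern frequencies against a per-user baseline collected early in the session; everything below is an assumption for illustration, not the paper's algorithm:

```python
from collections import Counter

def pattern_counts(gaze_3grams):
    """Count occurrences of each gaze-transition 3-gram."""
    return Counter(gaze_3grams)

def disengagement_score(session_grams, baseline_grams, watched_patterns):
    """Compare current frequencies of watched patterns against the user's own
    baseline, so users who habitually look away are not over-flagged."""
    session = pattern_counts(session_grams)
    baseline = pattern_counts(baseline_grams)
    score = 0.0
    for pattern in watched_patterns:
        base_rate = (baseline[pattern] + 1) / (sum(baseline.values()) + 1)
        cur_rate = (session[pattern] + 1) / (sum(session.values()) + 1)
        score += cur_rate / base_rate
    return score / max(len(watched_patterns), 1)

# Illustrative data: 3-grams over gaze targets.
watched = [("agent", "elsewhere", "agent")]
baseline = [("agent", "screen", "agent")] * 5
current = [("agent", "elsewhere", "agent")] * 3
print(disengagement_score(current, baseline, watched) > 1.0)  # True
```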


Multimodal corpora | 2009

Creating standardized video recordings of multimodal interactions across cultures

Matthias Rehm; Elisabeth André; Nikolaus Bee; Birgit Endrass; Michael Wissner; Yukiko I. Nakano; Afia Akhter Lipi; Toyoaki Nishida; Hung-Hsuan Huang

Trying to adapt the behavior of an interactive system to the cultural background of the user requires information on how relevant behaviors differ as a function of the user's cultural background. To gain such insights into the interrelation of culture and behavior patterns, the information found in the literature is often too anecdotal to serve as the basis for modeling a system's behavior, making it necessary to collect multimodal corpora in a standardized fashion in different cultures. In this chapter, the challenges of such an endeavor are introduced and solutions are presented by examples from a German-Japanese project that aims at modeling culture-specific behaviors for Embodied Conversational Agents.


intelligent virtual agents | 2006

Avatar’s gaze control to facilitate conversational turn-taking in virtual-space multi-user voice chat system

Ryo Ishii; Toshimitsu Miyajima; Kinya Fujita; Yukiko I. Nakano

Aiming at facilitating multi-party conversations in a shared-virtual-space voice chat environment, we propose an avatar gaze behavior model for turn-taking in multi-party conversations, and a shared-virtual-space voice chat system with an automatic avatar gaze direction control function that uses user utterance information. The use of utterance information enables easy-to-use automatic gaze control without an eye-tracking camera or manual operation. In our gaze behavior model, a conversation is divided into three states: during-utterance, right-after-utterance, and silence. For each state, the avatar's gaze behaviors are controlled based on a probabilistic state transition model.

Previous studies revealed that gaze has the power to select the next speaker and urge him or her to speak, and that continuous gaze carries a risk of giving an intimidating impression to the listener. Although an explicit look-away from the conversational partner generally signals interest in others, such gaze behaviors seem to help the speaker avoid threatening the listener's face. In order to express less face-threatening eye gaze in virtual-space avatars, our model introduces vague-gaze: the avatar looks five degrees lower than the user's eye position. Thus, in the during-utterance state, the avatars are controlled using a probabilistic state transition model that transits among three states: eye contact, vague-gaze, and look-away. The vague-gaze is expected to reduce the intimidating impression as well as facilitate conversational turn-taking. In the right-after-utterance state, the speaker avatar keeps eye contact for a few seconds to urge the next speaker to start a new turn; this is based on an observation of real face-to-face conversation. Finally, in the silence state, the avatar's gaze direction is changed randomly to avoid giving an intimidating impression.

In our evaluation experiment, twelve subjects were divided into four groups and asked to chat with the avatars and rate their impressions of them on a Likert scale. For the during-utterance state, in terms of naturalness, reduction of the intimidating impression, and turn-taking facilitation, a transition model consisting of vague-gaze and look-away was significantly more effective than the vague-gaze-alone, look-away-alone, and fixed-gaze-alone models. In the right-after-utterance state, all of the gaze control methods were significantly more effective in facilitating turn-taking than the fixed-gaze method. The evaluation experiment demonstrated the effectiveness of our avatar gaze control mechanism and suggested that gaze control based on user utterances facilitates multi-party conversations in a virtual-space voice chat system.
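
A compact sketch of the kind of probabilistic state transition control described above, sampling among eye contact, vague-gaze, and look-away during an utterance; the transition probabilities are invented for illustration, whereas the paper's model is derived empirically:

```python
import random

# Invented transition probabilities for the during-utterance state.
DURING_UTTERANCE_TRANSITIONS = {
    "eye_contact": {"eye_contact": 0.5, "vague_gaze": 0.3, "look_away": 0.2},
    "vague_gaze":  {"eye_contact": 0.4, "vague_gaze": 0.4, "look_away": 0.2},
    "look_away":   {"eye_contact": 0.5, "vague_gaze": 0.3, "look_away": 0.2},
}

def next_gaze_state(current: str, rng: random.Random) -> str:
    """Sample the avatar's next gaze state from the current state's distribution."""
    states, probs = zip(*DURING_UTTERANCE_TRANSITIONS[current].items())
    return rng.choices(states, weights=probs, k=1)[0]

rng = random.Random(42)
state = "eye_contact"
for _ in range(5):
    state = next_gaze_state(state, rng)
    print(state)
```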

Collaboration


Dive into Yukiko I. Nakano's collaborations.

Top Co-Authors

Ryo Ishii

Nippon Telegraph and Telephone
