Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yoichi Matsuyama is active.

Publication


Featured research published by Yoichi Matsuyama.


Computer Speech & Language | 2015

Four-participant group conversation: A facilitation robot controlling engagement density as the fourth participant

Yoichi Matsuyama; Iwao Akiba; Shinya Fujie; Tetsunori Kobayashi

In this paper, we present a framework for facilitation robots that regulate imbalanced engagement density in a four-participant conversation as the fourth participant, with proper procedures for obtaining initiatives. Four is a special number in multiparty conversations. In three-participant conversations, the minimum unit of multiparty conversation, social imbalance, in which one participant is left behind in the current conversation, sometimes occurs. In such scenarios, a conversational robot has the potential to objectively observe and control the situation as the fourth participant. Accordingly, we present model procedures for obtaining conversational initiatives in incremental steps to harmonize such four-participant conversations. During these procedures, a facilitator must be aware of both the presence of dominant participants leading the current conversation and the status of any participant who is left behind. We model and optimize these situations and procedures as a partially observable Markov decision process (POMDP), which is well suited to real-world sequential decision making. The results of experiments conducted to evaluate the proposed procedures provide evidence of their acceptability and of the feeling of groupness they create.
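The POMDP idea in the abstract can be illustrated with a toy belief update: the robot maintains a probability distribution over hidden engagement states (e.g. which participant is left behind) and revises it from noisy observations. The states, matrices, and numbers below are purely illustrative stand-ins, not the paper's actual model.

```python
import numpy as np

# Hypothetical toy sketch: the robot's belief over who is "left behind",
# updated with a standard Bayes filter. All quantities are illustrative.
states = ["A_left_behind", "B_left_behind", "balanced"]

# P(s' | s, a) for a single facilitation action (rows: s, cols: s')
T = np.array([[0.6, 0.1, 0.3],
              [0.1, 0.6, 0.3],
              [0.1, 0.1, 0.8]])

# P(o | s') for observations {"A_silent", "B_silent", "both_active"}
O = np.array([[0.70, 0.10, 0.20],
              [0.10, 0.70, 0.20],
              [0.15, 0.15, 0.70]])

def belief_update(b, obs_idx):
    """Predict with T, correct with the observation likelihood, renormalize."""
    predicted = b @ T
    corrected = predicted * O[:, obs_idx]
    return corrected / corrected.sum()

b = np.array([1/3, 1/3, 1/3])    # uniform initial belief
b = belief_update(b, obs_idx=0)  # robot observes "A_silent"
print(b)                         # belief mass shifts toward "A_left_behind"
```

An optimized POMDP policy would then map such beliefs to facilitation actions (e.g. addressing the participant most likely left behind), rather than reacting to raw observations.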


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2016

Socially-Aware Animated Intelligent Personal Assistant Agent

Yoichi Matsuyama; Arjun Bhardwaj; Ran Zhao; Oscar Romero; Sushma Akoju; Justine Cassell

SARA (Socially-Aware Robot Assistant) is an embodied intelligent personal assistant that analyses the user’s visual (head and face movement), vocal (acoustic features) and verbal (conversational strategies) behaviours to estimate its rapport level with the user, and uses its own appropriate visual, vocal and verbal behaviors to achieve task and social goals. The presented agent aids conference attendees by eliciting their preferences through building rapport, and then making informed personalized recommendations about sessions to attend and people to meet.


IEEE-RAS International Conference on Humanoid Robots | 2008

Designing communication activation system in group communication

Yoichi Matsuyama; Hikaru Taniyama; Shinya Fujie; Tetsunori Kobayashi

Our society is facing a serious problem: aging. The elderly population of Japan is increasing; in particular, the number of people over 75 is estimated to reach 20 million by 2030. We investigated day care centers, which are facilities for elderly care, and found that communication there is needed for its own sake, and that active communication can even alleviate depression and dementia. We therefore propose using a robot as a communication activator to improve the effectiveness of group communication, which we define as a type of communication formed by several persons. Here we focus on a recreation game named "Nandoku," a type of quiz that can be described as group communication with a master of ceremonies (MC). In this paper, we describe the requirements for this system and its design. The system continually selects a behavior and a target (a participant in the game) to maximize "communication activeness," defined as the amount of participation of the subjects (ordinarily three: A, B, and C), calculated from the participants' face directions using camera information. For instance, if participant A is not fully participating and makes no eye contact, the system is expected to select a behavior such as "Can you answer, Mr. A?" to encourage A to participate in the game. We tested the system in a day care center. Our results show that the subjects' overall participation increased, offering evidence that the robot can serve a practical role as a communication activator that improves group communication.
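The "communication activeness" mechanism described above can be sketched as a simple tracker: estimate each participant's engagement from the fraction of recent camera frames in which their face is directed at the group, then address the least-active participant. The class, window size, and names below are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

WINDOW = 30  # number of recent frames considered (illustrative)

class ActivenessTracker:
    """Toy stand-in for camera-based participation estimation."""
    def __init__(self, participants):
        # per-participant history of face-toward-group flags (True/False)
        self.history = {p: deque(maxlen=WINDOW) for p in participants}

    def observe(self, participant, facing_group):
        self.history[participant].append(facing_group)

    def activeness(self, participant):
        h = self.history[participant]
        return sum(h) / len(h) if h else 0.0

    def least_active(self):
        return min(self.history, key=self.activeness)

tracker = ActivenessTracker(["A", "B", "C"])
for _ in range(10):
    tracker.observe("A", False)  # A rarely faces the group
    tracker.observe("B", True)
    tracker.observe("C", True)

target = tracker.least_active()
print(f"Can you answer, Mr. {target}?")  # the robot addresses participant A
```

A real system would replace the boolean flags with head-pose estimates from the camera, but the selection logic (maximize overall activeness by targeting the least-engaged participant) is the same shape.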


International Conference on Computer Graphics and Interactive Techniques | 2009

SCHEMA: multi-party interaction-oriented humanoid robot

Yoichi Matsuyama; Kosuke Hosoya; Hikaru Taniyama; Hiroki Tsuboi; Shinya Fujie; Tetsunori Kobayashi

Most of our daily communication occurs in groups, at school, at home, and at work, so this project proposes a robot that can participate in routine human conversations.


IEEE-RAS International Conference on Humanoid Robots | 2008

Multi-modal integration for personalized conversation: Towards a humanoid in daily life

Shinya Fujie; Daichi Watanabe; Yuhi Ichikawa; Hikaru Taniyama; Kosuke Hosoya; Yoichi Matsuyama; Tetsunori Kobayashi

We propose and develop a humanoid with spoken-language communication ability. For a humanoid to live with people, spoken-language communication is fundamental, because we use this kind of communication every day. However, owing to the difficulty of speech recognition itself and of implementation on a robot, a robot with such an ability had not yet been developed. In this study, we propose a robot with techniques implemented to overcome these problems. The proposed system includes three key features: image processing, sound source separation, and turn-taking timing control. Processing images captured with the cameras mounted in the robot's eyes enables it to find and identify whom it should talk to. Sound source separation enables distant speech recognition, so that people need no special device such as a head-set microphone. Turn-taking timing control is often lacking in conventional spoken dialogue systems, but it is fundamental because conversation proceeds in real time. Experiments show the effectiveness of these elements as well as an example conversation.
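The three components named in the abstract fit together as a pipeline: vision selects the addressee, source separation cleans the audio, and a timing rule gates when the robot may take the turn. The functions and data below are hypothetical stand-ins for that pipeline, not the paper's actual interfaces.

```python
def identify_addressee(frame):
    """Stand-in for face detection/identification on the eye cameras."""
    return frame.get("largest_face_id")

def separate_sources(mixed_audio, target_direction):
    """Stand-in for separating the addressee's speech from other sources."""
    return [s for s in mixed_audio if s["direction"] == target_direction]

def may_take_turn(last_speech_end, now, pause_threshold=0.6):
    """Take the turn only after a sufficiently long pause (seconds)."""
    return (now - last_speech_end) >= pause_threshold

frame = {"largest_face_id": "user_1"}
audio = [{"direction": "front", "text": "hello"},
         {"direction": "left", "text": "noise"}]

addressee = identify_addressee(frame)
speech = separate_sources(audio, "front")
if may_take_turn(last_speech_end=0.0, now=0.8):
    print(f"Responding to {addressee}: heard {speech[0]['text']!r}")
```

The pause-threshold rule is only the simplest possible timing control; the point is that turn-taking is an explicit decision step rather than an implicit side effect of recognition latency.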


Archive | 2011

Multiparty Conversation Facilitation Strategy Using Combination of Question Answering and Spontaneous Utterances

Yoichi Matsuyama; Yushi Xu; Akihiro Saito; Shinya Fujie; Tetsunori Kobayashi

Our analysis of entertaining multiparty conversations shows that participants tend to obey the cooperative conversational principle and provide feedback in order to follow the current topic; they also speak to express their own ideas and interests. In this paper, we describe a conversational robot system designed to continually facilitate conversations with topic-tracing capability, able to output a combination of answers to questions and spontaneous utterances.


National Conference on Artificial Intelligence | 2010

Framework of Communication Activation Robot Participating in Multiparty Conversation

Yoichi Matsuyama; Hikaru Taniyama; Shinya Fujie; Tetsunori Kobayashi


Conference of the International Speech Communication Association | 2009

Conversation robot participating in and activating a group communication

Shinya Fujie; Yoichi Matsuyama; Hikaru Taniyama; Tetsunori Kobayashi


Conference of the International Speech Communication Association | 2010

Psychological evaluation of a group communication activation robot in a party game

Yoichi Matsuyama; Shinya Fujie; Hikaru Taniyama; Tetsunori Kobayashi


Human-Robot Interaction | 2009

System design of group communication activator: an entertainment task for elderly care

Yoichi Matsuyama; Hikaru Taniyama; Shinya Fujie; Tetsunori Kobayashi

Collaboration


Dive into Yoichi Matsuyama's collaborations.

Top Co-Authors

Justine Cassell
Carnegie Mellon University

Alexandros Papangelis
University of Texas at Arlington

Ran Zhao
Carnegie Mellon University