
Publications


Featured research published by Yutaka Takase.


International Conference on Multimodal Interfaces | 2016

Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets

Shogo Okada; Yoshihiko Ohtake; Yukiko I. Nakano; Yuki Hayashi; Hung-Hsuan Huang; Yutaka Takase; Katsumi Nitta

This paper focuses on the computational analysis of the individual communication skills of participants in a group. The analysis was conducted using three novel aspects to tackle the problem. First, we extracted features from dialogue act labels that capture how each participant communicates with the others. Second, the communication skills of each participant were assessed by 21 external raters with experience in human resource management, yielding reliable skill scores for each participant. Third, we used the MATRICS corpus, which includes three types of group discussion datasets, to analyze the influence of situational variability across discussion types. We developed a regression model that infers communication skill scores from multimodal features, combining linguistic features with nonverbal ones: prosody, speaking turns, and head activity. The experimental results show that the multimodal fusion model with feature selection achieved the best accuracy, an R² of 0.74 for communication skill. A feature analysis of the models revealed which task-dependent and task-independent features contribute to the prediction performance.
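
The paper does not include an implementation; a minimal sketch of the pipeline it describes (feature-level fusion of linguistic and nonverbal features, feature selection, and a regression model evaluated by R²) might look like the following. All feature names and data below are hypothetical placeholders, not the MATRICS corpus.

```python
# Hypothetical sketch: early fusion of linguistic and nonverbal features,
# feature selection, ridge regression, and cross-validated R^2.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_linguistic = rng.normal(size=(40, 20))  # e.g., dialogue-act frequencies
X_nonverbal = rng.normal(size=(40, 30))   # e.g., prosody, turns, head activity
y_skill = rng.normal(size=40)             # averaged external-rater scores

X = np.hstack([X_linguistic, X_nonverbal])  # feature-level (early) fusion
model = make_pipeline(SelectKBest(f_regression, k=15), Ridge(alpha=1.0))
r2 = cross_val_score(model, X, y_skill, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.3f}")
```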


KSII Transactions on Internet and Information Systems | 2016

Generating Robot Gaze on the Basis of Participation Roles and Dominance Estimation in Multiparty Interaction

Yukiko I. Nakano; Takashi Yoshino; Misato Yatsushiro; Yutaka Takase

Gaze is an important nonverbal feedback signal in multiparty face-to-face conversations. It is well known that gaze behaviors differ depending on participation role: speaker, addressee, or side participant. In this study, we focus on dominance as another factor that affects gaze. First, we conducted an empirical study whose results showed how gaze behaviors are affected by both dominance and participation roles. Then, using the speech and gaze information that statistically distinguished more dominant from less dominant participants in the empirical study, we established a regression-based model for estimating conversational dominance. On the basis of this model, we implemented a dominance estimation mechanism that processes online speech and head-direction data. We then applied our findings to human-robot interaction. To design robot gaze behaviors, we analyzed gaze transitions with respect to participation roles and dominance and implemented gaze-transition models as rules for generating robot gaze behavior. Finally, we evaluated a humanoid robot that has dominance estimation functionality and determines its gaze based on the gaze models, and found that dominant participants had a better impression of less dominant robot gaze behaviors. This suggests that a robot using our gaze models was preferred to a robot that simply looked at the speaker. We have demonstrated the importance of considering dominance in multiparty human-robot interaction.
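
As a rough illustration of the two-stage mechanism described above, the sketch below combines a regression-style dominance score computed from online speech and head-direction statistics with a rule table that maps the robot's participation role and the estimated dominance to a gaze target. The weights and rules are illustrative placeholders, not the paper's fitted model.

```python
# Hypothetical sketch: (1) a linear dominance score standing in for the
# paper's regression model, (2) gaze-transition rules keyed on the robot's
# participation role and whether the partner is estimated as dominant.
def dominance_score(speech_time_ratio: float, turns_taken: int,
                    head_turns_received: int) -> float:
    # Illustrative linear combination of online speech/head statistics.
    return 0.6 * speech_time_ratio + 0.25 * turns_taken + 0.15 * head_turns_received

GAZE_RULES = {
    # (robot role, partner estimated dominant?) -> next gaze target
    ("speaker", True): "addressee",
    ("speaker", False): "side_participant",
    ("addressee", True): "speaker",
    ("addressee", False): "speaker",
    ("side_participant", True): "speaker",
    ("side_participant", False): "addressee",
}

def next_gaze(role: str, partner_dominance: float, threshold: float = 0.5) -> str:
    return GAZE_RULES[(role, partner_dominance > threshold)]

print(next_gaze("addressee", dominance_score(0.7, 1, 2)))  # -> "speaker"
```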


International Conference on Multimodal Interfaces | 2016

Meeting extracts for discussion summarization based on multimodal nonverbal information

Fumio Nihei; Yukiko I. Nakano; Yutaka Takase

Group discussions are used for various purposes, such as creating new ideas and making group decisions. It is desirable to archive the results and process of a discussion as useful resources for the group, so a key technology is extracting meaningful segments from a group discussion. To accomplish this goal, we propose classification models that select meeting extracts to be included in the discussion summary based on nonverbal behavior such as attention, head motion, and prosodic features, as well as co-occurrence patterns of these behaviors. We create different prediction models depending on the degree of extract-worthiness, which is assessed by the agreement ratio among human judgments. Our best model achieves an F-measure of 0.707 and a recall of 0.75, and can compress a discussion to 45% of its original duration. The proposed models show that nonverbal information is indispensable for selecting meeting extracts from a group discussion. One future direction is to implement the models in an automatic meeting summarization system.
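
A minimal sketch of this setup, assuming synthetic data throughout: segments are labeled extract-worthy by the agreement ratio among judges, a classifier is trained on nonverbal features, and the selected extracts determine the compression ratio.

```python
# Hypothetical sketch: agreement-ratio labeling, nonverbal-feature
# classification, and compression measurement. All data is synthetic,
# and the model is evaluated on its training data for brevity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, recall_score

rng = np.random.default_rng(0)
n_segments = 200
X = rng.normal(size=(n_segments, 12))        # attention/head/prosody features
votes = rng.integers(0, 4, size=n_segments)  # how many of 3 judges chose it
durations = rng.uniform(5, 30, size=n_segments)

y = (votes / 3 >= 2 / 3).astype(int)  # "worthy" if >= 2/3 of judges agree
clf = RandomForestClassifier(random_state=0).fit(X, y)
pred = clf.predict(X)
compression = durations[pred == 1].sum() / durations.sum()
print(f"F={f1_score(y, pred):.3f} recall={recall_score(y, pred):.3f} "
      f"compression={compression:.0%}")
```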


International Conference on Social Computing | 2017

Toward a Supporting System of Communication Skill: The Influence of Functional Roles of Participants in Group Discussion

Qi Zhang; Hung-Hsuan Huang; Seiya Kimura; Shogo Okada; Yuki Hayashi; Yutaka Takase; Yukiko I. Nakano; Naoki Ohta; Kazuhiro Kuwabara

More and more companies are putting emphasis on communication skill when recruiting employees and are adopting group discussion as part of the recruitment interview. In our project, we aim to develop a system that can advise its users on improving the impression of their communication skill during group discussion. In this paper, we focus on the functional roles of the participants in group discussion and report the results of an analysis of the relationship between communication skill impression and functional roles. This work is based on a group discussion corpus of 40 participants, whose communication skill was evaluated by 21 external experts with recruitment experience. In addition, seven functional roles were defined and annotated: follower, gatekeeper, information giver, objector, opinion provider, passive participant, and summarizer. Furthermore, we analyzed the conversational situations in the corpus and the differences between participants with high and low communication skill scores in these situations.


International Conference on Multimodal Interfaces | 2017

Predicting meeting extracts in group discussions using multimodal convolutional neural networks

Fumio Nihei; Yukiko I. Nakano; Yutaka Takase

This study proposes multimodal fusion models employing convolutional neural networks (CNNs) to extract meeting minutes from a group discussion corpus. First, unimodal models are created from raw behavioral data such as speech, head motion, and face tracking. These models are then integrated into a fusion model that works as a classifier. The main advantage of this work is that the proposed models were trained without any hand-crafted features, yet outperformed a baseline model trained on hand-crafted features. We also found that multimodal fusion is useful in applying the CNN approach to modeling multimodal multiparty interaction.
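
The abstract does not specify the architecture in detail; one plausible reading, sketched below, is a per-modality 1-D CNN over each raw behavioral stream, fused by concatenation into a binary extract/non-extract classifier. Channel counts and layer sizes here are guesses, not the paper's configuration.

```python
# Hypothetical sketch: unimodal 1-D CNNs over raw behavioral streams,
# fused by concatenation into a binary classifier (PyTorch).
import torch
import torch.nn as nn

class ModalityCNN(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # one feature vector per segment
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x).squeeze(-1)

class FusionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.speech = ModalityCNN(1)  # e.g., a speech energy track
        self.head = ModalityCNN(3)    # e.g., head rotation (x, y, z)
        self.face = ModalityCNN(6)    # e.g., face-tracking parameters
        self.classifier = nn.Linear(16 * 3, 2)  # extract / non-extract

    def forward(self, speech, head, face):
        fused = torch.cat(
            [self.speech(speech), self.head(head), self.face(face)], dim=1)
        return self.classifier(fused)

model = FusionModel()
logits = model(torch.randn(4, 1, 100), torch.randn(4, 3, 100),
               torch.randn(4, 6, 100))
print(logits.shape)  # torch.Size([4, 2])
```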


IEEE Global Conference on Consumer Electronics | 2016

Development environment of a spoken dialogue system based on PRINTEPS

Ryota Nishimura; Yutaka Takase; Yukiko I. Nakano

In this paper, we describe the development of a spoken dialogue system based on the PRINTEPS architecture. The system is composed of five modules: speech recognition, language understanding, dialogue management, response generation, and speech synthesis. In PRINTEPS, when calling the spoken dialogue system, system developers specify a small-scale dialogue goal, and the system conducts a dialogue with the user to obtain the information needed to achieve it. Dialogue processing rules corresponding to each dialogue goal are prepared in advance. The advantage of PRINTEPS-based development is that developers can build a spoken dialogue system without expert knowledge of spoken dialogue systems.
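
For orientation, a toy sketch of the five-module pipeline follows. Every stage is a stub with invented behavior; in PRINTEPS the real components are wired together and the developer only supplies the dialogue goal.

```python
# Hypothetical sketch of the five-module pipeline. All stage bodies are
# stubs; only the module boundaries mirror the paper's description.
def recognize(audio: bytes) -> str:           # 1. speech recognition
    return "two coffees please"

def understand(text: str) -> dict:            # 2. language understanding
    return {"intent": "order", "item": "coffee", "count": 2}

def manage(frame: dict, goal: dict) -> dict:  # 3. dialogue management
    missing = [slot for slot in goal["slots"] if slot not in frame]
    return {"ask": missing[0]} if missing else {"confirm": frame}

def generate(act: dict) -> str:               # 4. response generation
    if "ask" in act:
        return f"What {act['ask']} would you like?"
    return "Your order is confirmed."

def synthesize(text: str) -> bytes:           # 5. speech synthesis (stub)
    return text.encode()

goal = {"slots": ["item", "count", "size"]}   # developer-specified goal
reply = generate(manage(understand(recognize(b"...")), goal))
audio_out = synthesize(reply)                 # would be played to the user
print(reply)  # -> "What size would you like?"
```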


International Conference on Multimodal Interfaces | 2015

Predicting Participation Styles using Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning

Yukiko I. Nakano; Sakiko Nihonyanagi; Yutaka Takase; Yuki Hayashi; Shogo Okada

With the goal of assessing participant attitudes and group activities in collaborative learning, this study presents models of participation styles based on co-occurrence patterns of nonverbal behaviors between conversational participants. First, we collected conversations among groups of three people in a collaborative learning situation, wherein each participant had a digital pen and wore a glasses-type eye tracker. We then divided the collected multimodal data into 0.1-second intervals. The discretized data were fed to an unsupervised method to find co-occurring behavioral patterns. As a result, we discovered 122 multimodal behavioral motifs among more than 3,000 possible combinations of behaviors by three participants. Using the multimodal behavioral motifs as predictor variables, we created regression models for assessing participation styles. The multiple correlation coefficients ranged from 0.74 to 0.84, indicating a good fit between the models and the data. A correlation analysis also enabled us to identify a smaller set of behavioral motifs (fewer than 30) that are statistically significant predictors of participation styles. These results show that automatically discovered combinations of multiple kinds of nonverbal information with high co-occurrence frequencies, observed both between multiple participants and for a single participant, are useful in characterizing participants' attitudes toward the conversation.
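
The motif-discovery step can be pictured as follows: each 0.1 s frame becomes one behavior symbol per participant, and frequently co-occurring cross-participant combinations are kept as motifs. The behavior labels and support threshold below are invented for illustration.

```python
# Hypothetical sketch: count which three-participant behavior combinations
# co-occur per 0.1 s frame, and keep frequent ones as "behavioral motifs".
from collections import Counter

# One behavior symbol per participant per 0.1 s frame (toy data).
frames = [
    ("gaze_peer", "write", "gaze_material"),
    ("gaze_peer", "write", "gaze_material"),
    ("gaze_material", "write", "gaze_peer"),
    ("gaze_peer", "write", "gaze_material"),
]

counts = Counter(frames)          # co-occurrence of joint states
min_support = 2                   # illustrative frequency threshold
motifs = {combo for combo, c in counts.items() if c >= min_support}
print(motifs)  # frequent combinations become regression predictors
```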


International Conference on Learning and Collaboration Technologies | 2015

Generating Quizzes for History Learning Based on Wikipedia Articles

Yoshihiro Tamura; Yutaka Takase; Yuki Hayashi; Yukiko I. Nakano

In intelligent tutoring systems (ITSs), creating large amounts of educational content requires a large-scale, multi-domain knowledge base; however, most knowledge bases for ITSs are still developed manually. To reduce the cost of developing educational content, this study proposes a method for generating multiple-choice history quizzes from Wikipedia articles. We also propose a method for assigning an importance measure to each relevant article based on its hierarchical categories and the number of incoming links to the article, which is indispensable for generating quizzes that test basic knowledge of history. Finally, an evaluation shows that the proposed methods are useful for automatically creating history quizzes.
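
One way to picture the importance measure: articles in shallower categories with more inbound links are treated as more "basic" and thus better quiz targets. The weighting and the example data below are placeholders, not the paper's formula.

```python
# Hypothetical sketch of an importance measure combining category depth
# and incoming-link count; the scoring function is an assumption.
import math

articles = {
    # name: (depth in the history category hierarchy, incoming links)
    "Meiji Restoration": (2, 1800),
    "Battle of Sekigahara": (3, 950),
    "Obscure local edict": (6, 12),
}

def importance(depth: int, inlinks: int) -> float:
    # Shallower categories and more inbound links -> higher importance.
    return math.log1p(inlinks) / depth

ranked = sorted(articles, key=lambda a: importance(*articles[a]), reverse=True)
print(ranked)  # quiz generation would start from the top of this list
```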


Human-Robot Interaction | 2015

Controlling Robot's Gaze according to Participation Roles and Dominance in Multiparty Conversations

Takashi Yoshino; Yutaka Takase; Yukiko I. Nakano

A robot's gaze behaviors are indispensable for participating in multiparty conversations. To build a robot that can convey appropriate attentional behavior in multiparty human-robot conversations, this study proposes robot head-gaze models based on participation roles and dominance in a conversation. By implementing these models, we developed a robot that can choose appropriate gaze behaviors according to its conversational role and dominance.


Human-Robot Interaction | 2016

Assessing the Communication Attitude of the Elderly using Prosodic Information and Head Motions

Toshiki Yamanaka; Yutaka Takase; Yukiko I. Nakano

To provide a watching service for elderly people with dementia, recognizing and assessing their cognitive and health status is indispensable. In this study, we propose a prediction model for assessing the communication attitude of elderly users while they interact with a virtual agent. We define speech features and head-motion features using frequency analysis and apply them in a linear regression analysis. The coefficient of determination was 0.413 for a model using only speech features and 0.505 for a model exploiting both speech and head-movement features. This result suggests that combining speech and head-movement data is useful for predicting the communication attitude of the elderly, and the model can be applied to automatic assessment in watching services.
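
A minimal sketch of this kind of pipeline, assuming synthetic signals: frequency-domain features are derived from speech and head-motion tracks with an FFT, and a linear regression is fit and scored by the coefficient of determination.

```python
# Hypothetical sketch: FFT band-power features from speech and head-motion
# signals, fed to the linear regression the abstract describes. Data is
# synthetic, and R^2 is computed on the training data for brevity.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def band_powers(signal: np.ndarray, n_bands: int = 4) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

sessions = [(rng.normal(size=256), rng.normal(size=256)) for _ in range(30)]
X = np.array([np.hstack([band_powers(s), band_powers(h)]) for s, h in sessions])
y = rng.uniform(1, 5, size=30)  # annotated communication-attitude scores

model = LinearRegression().fit(X, y)
print(f"R^2 on training data: {model.score(X, y):.3f}")
```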

Collaboration


Dive into Yutaka Takase's collaborations.

Top Co-Authors

Yuki Hayashi

Osaka Prefecture University

Shogo Okada

Tokyo Institute of Technology

Katsumi Nitta

Tokyo Institute of Technology
