Yuichi Koyama
Nagoya University
Publication
Featured research published by Yuichi Koyama.
international conference on multimodal interfaces | 2007
Yuichi Sawamoto; Yuichi Koyama; Yasushi Hirano; Shoji Kajita; Kenji Mase; Kimiko Katsuyama; Kazunobu Yamauchi
We propose a method for extracting important interaction patterns from medical interviews. Because the interview is a major step in doctor-patient communication, improving the skill and quality of the medical interview leads to better medical care. A pattern mining method for multimodal interaction logs, such as gestures and speech, is applied to medical interviews in order to extract characteristic doctor-patient interactions. We demonstrate that several interesting patterns are extracted, and we examine their interpretations. The extracted patterns are considered to be ones that doctors should acquire through training and practice for the medical interview.
Proceedings of the 5th ACM International Workshop on Context-Awareness for Self-Managing Systems | 2011
Hirotake Yamazoe; Yuichi Koyama; Tomoko Yonezawa; Shinji Abe; Kenji Mase
In this paper, we propose a method to estimate user conversational states such as concentrating/not concentrating. We previously proposed a robot-assisted videophone system to sustain conversations between elderly people. In such videophone systems, the user's conversational situation must be estimated so that the robot behaves appropriately. The proposed method employs i) elemental actions and combinations of the user's elemental actions as recognition features and ii) normalization of the feature vectors based on the frequencies of actions. The experimental results show the effectiveness of our method.
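The frequency-based normalization described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the action names and window contents are assumptions chosen only to show how counts of elemental actions become a frequency-normalized feature vector, so that talkative and quiet users yield comparable features.

```python
# Illustrative sketch of frequency-normalized elemental-action features.
# The action vocabulary below is hypothetical, not taken from the paper.
from collections import Counter

ACTIONS = ["nod", "gaze_at_screen", "gaze_away", "utterance"]

def feature_vector(observed_actions):
    """Count each elemental action in a time window, then divide by the
    total so the vector reflects relative frequencies, not raw activity."""
    counts = Counter(observed_actions)
    total = sum(counts[a] for a in ACTIONS) or 1  # avoid division by zero
    return [counts[a] / total for a in ACTIONS]

# A user who nods twice and looks at the screen three times in one window:
fv = feature_vector(["nod", "gaze_at_screen", "nod",
                     "gaze_at_screen", "gaze_at_screen"])
print(fv)  # -> [0.4, 0.6, 0.0, 0.0]
```

Normalizing by the total action count means the classifier sees the *mix* of behaviors rather than their absolute number, which is the stated purpose of the frequency-based normalization.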
International Journal of Health Care Quality Assurance | 2010
Kimiko Katsuyama; Yuichi Koyama; Yasushi Hirano; Kenji Mase; Ken Kato; Satoshi Mizuno; Kazunobu Yamauchi
PURPOSE: Measurements of the quality of physician-patient communication are important in assessing patient outcomes, but the quality of communication is difficult to quantify. The aim of this paper is to develop a computer analysis system for the physician-patient consultation process (CASC), which uses a quantitative method to quantify and analyze communication exchanges between physicians and patients during the consultation process.
DESIGN/METHODOLOGY/APPROACH: CASC is based on the concept of narrative-based medicine and uses a computer-mediated communication (CMC) technique from a cognitive dialog processing system. Effective and ineffective consultation samples from the works of Saito and Kleinman were tested with CASC in order to establish its validity for use in clinical practice. After validity was confirmed, three researchers compared their assessments of consultation processes in a physician's office with CASC's. Consultations of 56 migraine patients were recorded with permission; for this study, the consultations of 29 patients that included more than 50 words were used.
FINDINGS: Transcribed data from the 29 consultations input into CASC resulted in two diagrams, of concept structure and concept space, for assessing the quality of consultation. The concordance rate between the assessments by CASC and the researchers was 75 percent.
ORIGINALITY/VALUE: In this study, a computer-based communication analysis system was established that efficiently quantifies the quality of the physician-patient consultation process. The system is promising as an effective tool for evaluating the quality of physician-patient communication in clinical and educational settings.
Proceedings of the ICMI-MLMI '09 Workshop on Multimodal Sensor-Based Systems and Mobile Phones for Social Computing | 2009
Kenji Mase; Yuichi Sawamoto; Yuichi Koyama; Tomio Suzuki; Kimiko Katsuyama
We propose a bottom-up analysis method for multimodal dialogue interaction, using a pattern- and motif-mining method, to summarize interviews such as those between doctors and patients for medical diagnosis. Our aim is to generate a hierarchical model of interviewing behavior as an interaction corpus, consisting of primitives, patterns, motifs, and pattern clusters, from the given dialogue session data. We exploit a Jensen-Shannon divergence measure to extract important patterns and motifs. The medical interview is chosen as an important application of such analysis because a doctor's multimodal interviewing technique is essential for establishing a reliable relationship and concluding with a successful diagnosis. An interaction corpus of simulated medical interviews, captured by a video camera and microphones, is constructed with the proposed method. Based on the constructed indices, in terms of the given pattern notations and clusters, the interviews were summarized. A performance evaluation of the indices by a medical doctor was conducted to confirm their plausibility and the summary descriptions of the results.
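The Jensen-Shannon divergence measure used to rank patterns can be sketched as below. This is a minimal, self-contained illustration of the metric itself, not the authors' pipeline; the distributions shown (how often a hypothetical pattern occurs in each interview phase versus a uniform background) are invented for the example.

```python
# Illustrative sketch: Jensen-Shannon divergence between two discrete
# distributions, as one might use to score how distinctive an interaction
# pattern's occurrence profile is against a background distribution.
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) in bits.
    Assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric and bounded in [0, 1] bits."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical pattern concentrated in the opening phase of an interview,
# compared against a uniform background over three phases:
pattern_dist = [0.7, 0.2, 0.1]
background = [1 / 3, 1 / 3, 1 / 3]
score = jsd(pattern_dist, background)
print(score)
```

A higher score marks a pattern whose occurrence profile deviates more from the background, which is the sense in which a divergence measure can single out "important" patterns and motifs.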
ubiquitous computing | 2010
Yuichi Koyama; Yuichi Sawamoto; Yasushi Hirano; Shoji Kajita; Kenji Mase; Tomio Suzuki; Kimiko Katsuyama; Kazunobu Yamauchi
We propose a multi-modal dialogue analysis method for medical interviews that hierarchically interprets nonverbal interaction patterns in a bottom-up manner and simultaneously visualizes the topic structure. Our method aims to provide physicians with the clues generally overlooked by conventional dialogue analysis to form a cycle of dialogue practice and analysis. We introduce a motif and a pattern cluster in the designs of the hierarchical indices of interaction and exploit the Jensen–Shannon divergence (JSD) metric to reduce the number of usable indices. We applied the proposed interpretation method of interaction patterns to develop a corpus of interviews. The results of a summary reading experiment confirmed the validity of the developed indices. Finally, we discussed the integrated analysis of the topic structure and a nonverbal summary.
intelligent robots and systems | 2010
Tomoko Yonezawa; Yuichi Koyama; Hirotake Yamazoe; Shinji Abe; Kenji Mase
In this paper, we propose and evaluate a video communication system that compensates for users' uncongenial attitudes by coordinating a robot's behaviors with media control of the video. The system facilitates comfortable video communication between elderly or disabled people through an assistant robot for each user that expresses a) active listening behaviors to compensate for the listener's attitude when he/she is not really listening to the other user's talking, and b) a cover-up behavior (gaze turned to the user) to divert attention from the other user's uncongenial attitude when that person is looking not at the talking user but toward the robot at her/his side; this behavior is performed by coordinating the automatic switching of cameras to give the impression that the uncongenial person is still looking at the user. The results of the system evaluation show the significant effectiveness of this design approach, which uses the robot's behavior and media control of the video to compensate for the problems in video communication that we aimed to overcome.
Proceedings of the 4th ACM International Workshop on Context-Awareness for Self-Managing Systems | 2010
Tomoko Yonezawa; Hirotake Yamazoe; Yuichi Koyama; Shinji Abe; Kenji Mase
This paper proposes a videophone conversation support system based on the behaviors of a companion robot and the switching of camera images, coordinated with the user's conversational attitude toward the communication. To maintain a conversation and achieve comfortable communication, it is necessary to understand the user's conversational states: whether the user is talking (taking the initiative) and whether the user is concentrating on the conversation. First, a) the system estimates the user's conversational state with a machine learning method. Next, b-1) the robot appropriately expresses active listening behaviors, such as nodding and gaze turns, to compensate for the listener's attitude when she/he is not really listening to the other user's speech; b-2) the robot shows communication-evoking behaviors (topic provision) to compensate for the lack of a topic; and b-3) the system switches the camera images to create an illusion of eye contact corresponding to the current context of the user's attitude. From empirical studies, a detailed experiment, and a demonstration experiment, we found that i) both the robot's active listening behaviors and the switching of the camera image compensate for the other person's attitude, ii) the topic provision function is effective for awkward silences, and iii) elderly people prefer long intervals between the robot's behaviors.
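The paper does not specify which machine learning method estimates the conversational state, so the sketch below is purely illustrative: a nearest-centroid rule over normalized action-frequency features, with hypothetical centroids standing in for whatever the trained model actually learned.

```python
# Purely illustrative (the classifier and centroids are assumptions, not
# the paper's method): estimating a binary conversational state from a
# normalized action-frequency feature vector by nearest centroid.
def nearest_state(fv, centroids):
    """Return the state whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda state: dist2(fv, centroids[state]))

# Hypothetical centroids, as if learned from labeled time windows over
# features [nod, gaze_at_screen, gaze_away, utterance]:
centroids = {
    "concentrating":     [0.3, 0.5, 0.0, 0.2],  # nods, watches the screen
    "not_concentrating": [0.0, 0.1, 0.7, 0.2],  # mostly looks away
}
state = nearest_state([0.25, 0.45, 0.1, 0.2], centroids)
print(state)  # -> "concentrating"
```

Whatever model is used, the estimated state is what gates the robot's responses: active listening when the other party is "not concentrating", topic provision when conversation stalls, and camera switching for apparent eye contact.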
international conference on innovative computing, information and control | 2008
Yuichi Koyama; Yasushi Hirano; Shoji Kajita; Kenji Mase; Kimiko Katsuyama; Tomio Suzuki; Kazunobu Yamauchi
We propose a method that visualizes the topic structure of medical interviews to provide doctors with clues found in the complete narratives. We collected 15 simulated interviews in an educational setting and conducted the following analysis to evaluate the utility of the proposed method. By applying it to the 15 interviews, we classified topics into three types: core, expansion, and local. In review sessions with the interview participants, the interview contents and flows were supported by the topic structure.
Journal of Information Processing | 2011
Tomoko Yonezawa; Hirotake Yamazoe; Yuichi Koyama; Shinji Abe; Kenji Mase
Transactions of Human Interface Society (ヒューマンインタフェース学会論文誌) | 2011
Tomoko Yonezawa; Hirotake Yamazoe; Yuichi Koyama; Shinji Abe; Kenji Mase