Yosuke Matsusaka
Waseda University
Publication
Featured research published by Yosuke Matsusaka.
robot and human interactive communication | 2004
Shinya Fujie; Yasushi Ejiri; Kei Nakajima; Yosuke Matsusaka; Tetsunori Kobayashi
A conversation robot that recognizes the user's head gestures and uses the results as para-linguistic information is developed. In conversation, humans exchange linguistic information, which can be obtained by transcribing the utterance, and para-linguistic information, which aids the transmission of linguistic information. Para-linguistic information conveys nuances that linguistic information cannot, enabling natural and effective conversation. We recognize the user's head gestures as para-linguistic information in the visual channel, using the optical flow over the head region as the feature and modeling it with HMMs for recognition. In actual conversation, the robot may perform a gesture while the user does; in this situation, the image sequence captured by the camera mounted in the robot's eyes includes sway caused by the camera's own movement. To solve this problem, we introduce two devices. One concerns feature extraction: the optical flow of the body area is used to compensate for the swayed images. The other concerns the probability models: mode-dependent models are prepared by the MLLR model adaptation technique and switched according to the robot's motion mode. Experimental results show the effectiveness of these techniques.
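The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the flow values, the symbol quantization, and the two toy HMMs are all hypothetical, and the sway compensation is reduced to subtracting the body-region flow from the head-region flow before scoring each gesture model with the standard forward algorithm.

```python
import math

def compensate(head_flow, body_flow):
    """Subtract body-region flow (camera sway estimate) from head-region flow."""
    return [(hx - bx, hy - by) for (hx, hy), (bx, by) in zip(head_flow, body_flow)]

def quantize(flow, thresh=0.5):
    """Map each flow vector to a discrete symbol: 0=still, 1=up, 2=down."""
    syms = []
    for _, vy in flow:
        if vy > thresh:
            syms.append(2)   # downward motion
        elif vy < -thresh:
            syms.append(1)   # upward motion
        else:
            syms.append(0)   # no significant motion
    return syms

def forward_log_prob(obs, pi, A, B):
    """Log-likelihood of a symbol sequence under a discrete HMM (forward algorithm)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))

# Two toy 2-state gesture models (hypothetical parameters):
# "nod" alternates up/down emissions, "none" mostly emits "still".
models = {
    "nod":  ([0.5, 0.5],
             [[0.1, 0.9], [0.9, 0.1]],
             [[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]),
    "none": ([1.0, 0.0],
             [[0.9, 0.1], [0.5, 0.5]],
             [[0.8, 0.1, 0.1], [0.8, 0.1, 0.1]]),
}

head = [(0.0, -1.2), (0.0, 1.1), (0.0, -1.0), (0.0, 1.3)]  # sensed head flow
body = [(0.0, 0.1), (0.0, -0.1), (0.0, 0.0), (0.0, 0.1)]   # sway estimate
obs = quantize(compensate(head, body))
best = max(models, key=lambda m: forward_log_prob(obs, *models[m]))
print(best)  # -> nod
```

The sway compensation matters: without subtracting the body flow, camera motion during the robot's own gestures would leak into the head-region flow and corrupt the symbol sequence. The paper's second device (MLLR-adapted, mode-dependent models) is not sketched here.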
intelligent robots and systems | 2002
Kyeong Ju Kim; Yosuke Matsusaka; Tetsunori Kobayashi
We designed an inter-module cooperation architecture that enables the collaborative development of interactive robots. In the bazaar-like development model, each module is developed by an individual developer and the total system emerges from the cooperation of these developers. To realize smooth collaboration under this model, an inter-module cooperation architecture is required that avoids conflicts between modules and selects the appropriate modules according to the situation and the task. For this aim, we introduce a priority-based cooperation mechanism for modules: the priority of each module under various situations is described in a situated-priority description script (SPDS). Flexible module selection is realized by modifying the SPDS for different situations and tasks. We also evaluate the efficiency of the proposed architecture through the development of a multi-modal conversation robot.
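The priority-based selection can be sketched as follows. The situation names, module names, and table contents below are hypothetical, and the dictionary stands in for the paper's SPDS, which is a richer script format; only the core idea is shown: an arbiter looks up the current situation and activates the highest-priority module, so conflicts are resolved by editing the priority table rather than the modules themselves.

```python
# Hypothetical SPDS-like priority table: situation -> {module: priority}.
SPDS = {
    "user_speaking":  {"speech_recognizer": 10, "gesture_generator": 3},
    "robot_speaking": {"gesture_generator": 10, "speech_recognizer": 2},
    "idle":           {"attention_tracker": 5},
}

def select_module(situation, spds=SPDS):
    """Return the highest-priority module for the situation, or None."""
    candidates = spds.get(situation, {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(select_module("user_speaking"))  # -> speech_recognizer
```

Because the arbitration policy lives in the table, individual developers can add or retune modules without touching each other's code, which is the point of the bazaar-like development model.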
conference of the international speech communication association | 1999
Yosuke Matsusaka; Tsuyoshi Tojo; Sentaro Kubota; Kenji Furukawa; Daisuke Tamiya; Keisuke Hayata; Yuichiro Nakano; Tetsunori Kobayashi
systems man and cybernetics | 2000
Tsuyoshi Tojo; Yosuke Matsusaka; Tomotada Ishii; Tetsunori Kobayashi
conference of the international speech communication association | 2001
Yosuke Matsusaka; Shinya Fujie; Tetsunori Kobayashi
IEICE Transactions on Information and Systems | 2003
Yosuke Matsusaka; Tsuyoshi Tojo; Tetsunori Kobayashi
Archive | 1999
Yosuke Matsusaka; Sen Kubota; Tsuyoshi Tojo; Kazuro Furukawa; Takashi Kobayashi
Archive | 2001
Yosuke Matsusaka; Takashi Kobayashi
ieee automatic speech recognition and understanding workshop | 2003
Shinya Fujie; Yasushi Ejiri; Yosuke Matsusaka; Hideaki Kikuchi; Tetsunori Kobayashi
conference of the international speech communication association | 2005
Yosuke Matsusaka