Yoichi Sadamoto
Toshiba
Publications
Featured researches published by Yoichi Sadamoto.
Journal of the Acoustical Society of America | 1995
Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Yasuki Yamashita; Yoshifumi Nagata; Shigenobu Seto; Hideaki Shinchi; Hideki Hashimoto
A speech dialogue system that realizes natural and smooth dialogue between the system and a human user, together with easy operation of the system. In the system, the semantic content of input speech from a user is understood, and the semantic content of a response output is determined according to the understood semantic content of the input speech. A speech response and a visual response according to the determined response output are then generated and output to the user. The dialogue between the system and the user is managed by controlling transitions between user states, during which the input speech is to be entered, and system states, during which the system response is to be output. The semantic content of the user's input speech is understood by detecting keywords in the input speech, with the keywords to be detected limited in advance according to the state of the dialogue between the user and the system.
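The state-dependent keyword spotting described in the abstract can be sketched as follows. This is an illustrative toy, not the patented implementation; the state names, keywords, and transitions are hypothetical examples chosen only to show the mechanism of limiting detectable keywords per dialogue state.

```python
# Hypothetical dialogue states: each state allows only its own keyword set
# and names the state to move to after a successful understanding.
DIALOGUE_STATES = {
    "ask_item": {"keywords": {"coffee", "tea"}, "next": "ask_size"},
    "ask_size": {"keywords": {"small", "large"}, "next": "confirm"},
    "confirm":  {"keywords": {"yes", "no"},     "next": "ask_item"},
}

def understand(state, input_words):
    """Spot only the keywords permitted in the current dialogue state."""
    allowed = DIALOGUE_STATES[state]["keywords"]
    return [w for w in input_words if w in allowed]

def respond(state, detected):
    """Determine the response content and the next state from the detected keywords."""
    if detected:
        return DIALOGUE_STATES[state]["next"], "You said: " + ", ".join(detected)
    return state, "Sorry, please repeat."

# One turn of the dialogue: non-keywords ("a", "please") are ignored.
state = "ask_item"
state, reply = respond(state, understand(state, ["a", "coffee", "please"]))
```

Restricting the keyword vocabulary per state is what makes the spotting robust: the recognizer never has to discriminate among keywords that cannot occur at the current point in the dialogue.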
Journal of the Acoustical Society of America | 1990
Hiroyuki Tsuboi; Yoichi Sadamoto; Yoichi Takebayashi
This paper describes the continuous multiple similarity method (CMSM), which utilizes stable and dynamic phonetic feature vectors to achieve continuous speech recognition. The phonetic feature vector representing the time‐varying characteristics of both stable and dynamic segments is constructed from a fixed‐dimensional time‐frequency spectrum. Stable phonetic segments correspond to the stable portions of the five vowels, fricatives, and nasals. Dynamic phonetic segments correspond to the time‐variant portions of stops, semivowels, and liquids. The stable phonetic segments are represented by a time series of the time‐frequency spectrum, whereas the dynamic phonetic segments are represented by a single typical time‐frequency spectrum. The multiple similarity (MS) values corresponding to each phone class are computed continuously in time. The time sequence of the MS values is then used for word matching with word graphs by dynamic programming. An experiment was carried out on 6596 phonetic segments of 492‐word...
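The final step the abstract outlines, matching a word against the time sequence of MS values by dynamic programming, can be sketched as below. This is a minimal sketch under stated assumptions, not the paper's exact algorithm: it assumes per-frame similarity scores for each phone class and a word model that is a simple linear sequence of phones, each covering one or more consecutive frames.

```python
def word_score(sim, word_phones):
    """Best cumulative similarity aligning word_phones against the frames.

    sim: list over frames; sim[t][p] is the similarity of phone class p
         at frame t (a stand-in for the MS values).
    word_phones: the word's phone sequence, e.g. ["k", "a"].
    Each phone must cover at least one frame, in order, with no skips.
    """
    T, N = len(sim), len(word_phones)
    NEG = float("-inf")
    # dp[t][i]: best score with phone i active at frame t.
    dp = [[NEG] * N for _ in range(T)]
    dp[0][0] = sim[0][word_phones[0]]
    for t in range(1, T):
        for i in range(N):
            stay = dp[t - 1][i]                       # phone i continues
            advance = dp[t - 1][i - 1] if i > 0 else NEG  # move to next phone
            best = max(stay, advance)
            if best > NEG:
                dp[t][i] = best + sim[t][word_phones[i]]
    return dp[T - 1][N - 1]

# Toy similarity sequence over three frames for two phone classes.
sim = [{"k": 0.9, "a": 0.1},
       {"k": 0.2, "a": 0.8},
       {"k": 0.0, "a": 0.9}]
score = word_score(sim, ["k", "a"])
```

In the paper, the same idea is applied over word graphs rather than a single linear word model, but the recurrence (stay in the current phone or advance to the next, accumulating similarity) is the core of the DP matching.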
Archive | 1992
Miwako Doi; Ikiko Nishida; Mitsuo Saito; Yoichi Sadamoto; Kenichi Mori
Archive | 1994
Miwako Doi; Ikiko Nishida; Yoichi Sadamoto
IEICE Transactions on Information and Systems | 1993
Yoichi Takebayashi; Hiroyuki Tsuboi; Hiroshi Kanazawa; Yoichi Sadamoto; Hideki Hashimoto; Hideaki Shinchi
Archive | 1992
Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Yasuki Yamashita; Yoshifumi Nagata; Shigenobu Seto; Hideaki Shinchi; Hideki Hashimoto
conference of the international speech communication association | 1992
Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Hideki Hashimoto; Hideaki Shinchi