Publication


Featured research published by Yoichi Sadamoto.


Journal of the Acoustical Society of America | 1995

Speech dialogue system for facilitating improved human-computer interaction

Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Yasuki Yamashita; Yoshifumi Nagata; Shigenobu Seto; Hideaki Shinchi; Hideki Hashimoto

A speech dialogue system capable of realizing natural and smooth dialogue between the system and a human user, as well as easy maneuverability of the system. In the system, the semantic content of input speech from a user is understood, and the semantic content of a response output is determined according to the understood semantic content of the input speech. A speech response and a visual response according to the determined response output are then generated and output to the user. The dialogue between the system and the user is managed by controlling transitions between user states, during which the input speech is to be entered, and system states, during which the system response is to be output. The semantic content of the input speech is understood by detecting keywords in it, with the set of keywords to be detected limited in advance according to the state of the dialogue between the user and the system.
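The state-dependent keyword-spotting idea in this abstract (the set of detectable keywords is narrowed in advance according to the dialogue state, and detected keywords drive state transitions) can be illustrated with a short sketch. The states, keywords, and responses below are hypothetical examples, not taken from the patent, and Python is used only for illustration.

```python
# Minimal sketch of a state-driven dialogue manager with per-state keyword spotting.
# All state names, keywords, and responses are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class DialogueState:
    name: str
    keywords: set                                    # keywords the recognizer is limited to in this state
    transitions: dict = field(default_factory=dict)  # detected keyword -> next state name


STATES = {
    "await_order": DialogueState(
        "await_order",
        {"hamburger", "coffee", "cancel"},
        {"hamburger": "confirm_item", "coffee": "confirm_item", "cancel": "await_order"},
    ),
    "confirm_item": DialogueState(
        "confirm_item",
        {"yes", "no"},
        {"yes": "await_order", "no": "await_order"},
    ),
}


def understand(utterance, state):
    """Keyword spotting restricted to the current state's keyword set."""
    return [w for w in utterance.lower().split() if w in state.keywords]


def respond(keywords):
    """Determine the semantic content of the response; return (speech, visual)."""
    if not keywords:
        return "Sorry, could you say that again?", "[?]"
    return "You said: " + ", ".join(keywords) + ". Is that right?", "[" + keywords[0] + "]"


def step(utterance, state_name):
    """One dialogue turn: understand the input, generate responses, transition state."""
    state = STATES[state_name]
    keywords = understand(utterance, state)
    speech, visual = respond(keywords)
    print("user  (%s): %r" % (state_name, utterance))
    print("system      : speech=%r  visual=%r" % (speech, visual))
    return state.transitions.get(keywords[0], state_name) if keywords else state_name


if __name__ == "__main__":
    s = step("uh one hamburger please", "await_order")
    s = step("yes", s)
```

Limiting the keyword set per dialogue state keeps the recognition vocabulary small at every turn, which is what the abstract relies on to make keyword detection in free-form input speech manageable.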


Journal of the Acoustical Society of America | 1990

Continuous multiple similarity method using stable and dynamic phonetic feature vectors for continuous speech recognition

Hiroyuki Tsuboi; Yoichi Sadamoto; Yoichi Takebayashi

This paper describes the continuous multiple similarity method (CMSM), which utilizes stable and dynamic phonetic feature vectors to achieve continuous speech recognition. The phonetic feature vector, which represents the time-varying characteristics of both stable and dynamic segments, is constructed from a fixed-dimensional time-frequency spectrum. Stable phonetic segments correspond to the stable portions of the five vowels, fricatives, and nasals. Dynamic phonetic segments correspond to the time-variant portions of stops, semivowels, and liquids. The stable phonetic segments are represented by a time series of the time-frequency spectrum, whereas the dynamic phonetic segments are represented by a single typical time-frequency spectrum. The multiple similarity (MS) values corresponding to each phone class are computed continuously in time. The time sequence of MS values is then used for word matching with word graphs by dynamic programming. An experiment was carried out on 6596 phonetic segments of 492-word...
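As a rough illustration of the two stages in this abstract, the sketch below computes frame-wise multiple-similarity values against per-class subspaces and then matches a word's phone-class sequence to the resulting score sequences by dynamic programming. The feature dimension, subspace size, phone classes, weights, and DP recurrence are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch: (1) time-continuous multiple similarity (MS) scores per
# phone class, (2) DP matching of a word's phone-class sequence against the
# score sequences. Classes, dimensions, and the recurrence are assumptions.

import numpy as np


def multiple_similarity(x, axes, weights):
    """MS value of feature vector x against one class subspace.

    axes:    (M, D) orthonormal basis vectors of the class subspace
    weights: (M,)   per-axis weights (e.g. normalized eigenvalues)
    """
    x = x / (np.linalg.norm(x) + 1e-12)
    proj = axes @ x                       # projection of x onto each subspace axis
    return float(np.dot(weights, proj ** 2))


def ms_score_sequences(frames, classes):
    """Time-continuous MS values for every phone class (frames: (T, D))."""
    return {c: np.array([multiple_similarity(f, a, w) for f in frames])
            for c, (a, w) in classes.items()}


def dp_word_score(word, scores):
    """Match a word (sequence of phone classes) against the MS score sequences
    with a simple monotonic DP in which each phone spans at least one frame."""
    T = len(next(iter(scores.values())))
    N = len(word)
    dp = np.full((N + 1, T + 1), -np.inf)
    dp[0, 0] = 0.0
    for i in range(1, N + 1):
        s = scores[word[i - 1]]
        for t in range(1, T + 1):
            # frame t-1 belongs to phone i: either phone i continues, or it starts here
            dp[i, t] = max(dp[i, t - 1], dp[i - 1, t - 1]) + s[t - 1]
    return float(dp[N, T])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D, M, T = 16, 3, 40                          # feature dim, subspace size, frames
    classes = {}
    for c in ["a", "i", "u", "k", "s"]:          # hypothetical phone classes
        q, _ = np.linalg.qr(rng.normal(size=(D, M)))
        classes[c] = (q.T, np.array([0.5, 0.3, 0.2]))
    frames = rng.normal(size=(T, D))             # stand-in for time-frequency spectra
    scores = ms_score_sequences(frames, classes)
    print("score for 'aki':", dp_word_score(list("aki"), scores))
```

A real recognizer would use subspaces trained from labelled phonetic segments rather than random ones, and would match against word graphs rather than a single linear phone sequence per word, as the abstract describes.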


Archive | 1992

Presentation support system

Miwako Doi; Ikiko Nishida; Mitsuo Saito; Yoichi Sadamoto; Kenichi Mori


Archive | 1994

Presentation support environment system

Miwako Doi; Ikiko Nishida; Yoichi Sadamoto


IEICE Transactions on Information and Systems | 1993

A Real-Time Speech Dialogue System Using Spontaneous Speech Understanding

Yoichi Takebayashi; Hiroyuki Tsuboi; Hiroshi Kanazawa; Yoichi Sadamoto; Hideki Hashimoto; Hideaki Shinchi


Archive | 1992

Speech dialogue system for facilitating human-computer interaction

Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Yasuki Yamashita; Yoshifumi Nagata; Shigenobu Seto; Hideaki Shinchi; Hideki Hashimoto


Conference of the International Speech Communication Association | 1992

A real-time speech dialogue system using spontaneous speech understanding.

Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Hideki Hashimoto; Hideaki Shinchi


Archive | 1992

Speech dialogue system for facilitating computer-human interaction

Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Yasuki Yamashita; Yoshifumi Nagata; Shigenobu Seto; Hideaki Shinchi; Hideki Hashimoto


IEICE Transactions on Information and Systems | 1993

A Real-Time Speech Dialogue System Using Spontaneous Speech Understanding (Special Issue on Speech and Discourse Processing in Dialogue Systems)

Yoichi Takebayashi; Hiroyuki Tsuboi; Hiroshi Kanazawa; Yoichi Sadamoto; Hideki Hashimoto; Hideaki Shinchi


Archive | 1992

Speech dialog system to facilitate computer-human interaction

Yoichi Takebayashi; Hiroyuki Tsuboi; Yoichi Sadamoto; Yasuki Yamashita; Yoshifumi Nagata; Shigenobu Seto; Hideaki Shinchi; Hideki Hashimoto

