Masatsune Tamura
Tokyo Institute of Technology
Publications
Featured research published by Masatsune Tamura.
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2001
Masatsune Tamura; Takashi Masuko; Keiichi Tokuda; Takao Kobayashi
This paper describes a technique for synthesizing speech with arbitrary speaker characteristics using speaker-independent speech units, which we call average voice units. The technique is based on an HMM-based text-to-speech (TTS) system and the maximum likelihood linear regression (MLLR) adaptation algorithm. In the HMM-based TTS system, speech synthesis units are modeled by multi-space probability distribution (MSD) HMMs, which can model spectrum and pitch simultaneously in a unified framework. We derive an extension of the MLLR algorithm to apply it to MSD-HMMs. We demonstrate that a few sentences uttered by a target speaker are sufficient to adapt not only voice characteristics but also prosodic features: synthetic speech generated from models adapted using only four sentences is very close to that from speaker-dependent models trained on 450 sentences.
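The core of the MLLR adaptation the abstract mentions is estimating an affine transform of the Gaussian means, mu' = A mu + b, that maximizes the likelihood of the adaptation data. As a minimal illustrative sketch (not the paper's MSD-HMM extension), the following assumes a single shared transform, diagonal covariances, and precomputed occupation probabilities; all function and variable names are hypothetical:

```python
import numpy as np

def estimate_mllr_transform(means, variances, gammas, obs):
    """Estimate a single MLLR mean transform W = [b A], so that the
    adapted mean of Gaussian m is W @ [1, mu_m].

    means:     (M, D) Gaussian means
    variances: (M, D) diagonal covariances
    gammas:    (T, M) occupation probabilities of each Gaussian per frame
    obs:       (T, D) adaptation observations
    """
    M, D = means.shape
    xi = np.hstack([np.ones((M, 1)), means])   # extended means, shape (M, D+1)
    occ = gammas.sum(axis=0)                   # total occupancy per Gaussian
    W = np.zeros((D, D + 1))
    for i in range(D):                         # one row of W per feature dim
        inv_var = 1.0 / variances[:, i]
        # Accumulators of the standard MLLR row-wise normal equations:
        G = (xi * (occ * inv_var)[:, None]).T @ xi        # (D+1, D+1)
        k = xi.T @ (inv_var * (gammas.T @ obs[:, i]))     # (D+1,)
        W[i] = np.linalg.solve(G, k)
    return W  # adapted means: (W @ xi.T).T
```

With enough adaptation frames, the solve recovers the transform that best maps the speaker-independent means onto the target speaker's data; applying `W` to every Gaussian is what lets a handful of sentences shift the whole model.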
International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 1998
Takashi Masuko; Takao Kobayashi; Masatsune Tamura; Jun Masubuchi; Keiichi Tokuda
This paper presents a new technique for synthesizing visual speech from arbitrary input text. The technique is based on an algorithm for parameter generation from HMMs with dynamic features, which has been applied successfully to text-to-speech synthesis. In the training phase, syllable HMMs are trained on visual speech parameter sequences that represent lip movements. In the synthesis phase, a sentence HMM is constructed by concatenating the syllable HMMs corresponding to the phonetic transcription of the input text, and an optimal visual speech parameter sequence is then generated from the sentence HMM in the maximum likelihood (ML) sense. The proposed technique generates lip movements synchronized with speech in a unified framework, and coarticulation is implicitly incorporated into the generated mouth shapes. As a result, the synthetic lip motion is smooth and realistic.
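The ML parameter generation the abstract relies on picks the static trajectory c that maximizes the likelihood of both the static and the dynamic (delta) Gaussians, which reduces to the linear system (W^T Sigma^-1 W) c = W^T Sigma^-1 mu, where W stacks an identity block and a delta window. A one-dimensional sketch under those assumptions (first-order deltas only, diagonal variances; names are hypothetical, not the paper's code):

```python
import numpy as np

def mlpg(static_mean, static_var, delta_mean, delta_var):
    """ML parameter generation for one feature dimension.

    Solves (W^T P W) c = W^T P mu for the static trajectory c, where
    P is the diagonal precision matrix and W maps c to [statics; deltas]
    with delta_t = 0.5 * (c[t+1] - c[t-1]) (clamped at the boundaries).
    All inputs are length-T arrays.
    """
    T = len(static_mean)
    W = np.zeros((2 * T, T))
    W[:T] = np.eye(T)                      # static rows: pass c through
    for t in range(T):                     # delta rows: half-difference window
        W[T + t, max(t - 1, 0)] -= 0.5
        W[T + t, min(t + 1, T - 1)] += 0.5
    mu = np.concatenate([static_mean, delta_mean])
    prec = np.concatenate([1.0 / static_var, 1.0 / delta_var])
    A = W.T @ (prec[:, None] * W)
    rhs = W.T @ (prec * mu)
    return np.linalg.solve(A, rhs)
```

The delta terms are what make the generated mouth-shape trajectory smooth: each frame is tied to its neighbors through the delta means and variances, so coarticulation emerges from the solve rather than from explicit smoothing.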
Speech Synthesis Workshop (SSW) | 1998
Masatsune Tamura; Takashi Masuko; Keiichi Tokuda; Takao Kobayashi
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2003
Junichi Yamagishi; Masatsune Tamura; Takashi Masuko; Keiichi Tokuda; Takao Kobayashi
Conference of the International Speech Communication Association | 2001
Masatsune Tamura; Takashi Masuko; Keiichi Tokuda; Takao Kobayashi
International Conference on Auditory-Visual Speech Processing (AVSP) | 1998
Masatsune Tamura; Takashi Masuko; Takao Kobayashi; Keiichi Tokuda
IEICE Transactions on Information and Systems | 2003
Junichi Yamagishi; Masatsune Tamura; Takashi Masuko; Keiichi Tokuda; Takao Kobayashi
Conference of the International Speech Communication Association | 2014
Yamato Ohtani; Masatsune Tamura; Masahiro Morita; Masami Akamine
Conference of the International Speech Communication Association | 2002
Junichi Yamagishi; Masatsune Tamura; Takashi Masuko; Keiichi Tokuda; Takao Kobayashi
Conference of the International Speech Communication Association | 1999
Masatsune Tamura; Shigekazu Kondo; Takashi Masuko; Takao Kobayashi