Publication


Featured research published by Keiichi Yasu.


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Using Steady-State Suppression to Improve Speech Intelligibility in Reverberant Environments for Elderly Listeners

Takayuki Arai; Nao Hodoshima; Keiichi Yasu

Reverberation is a major problem for speech communication, and strong reverberation is known to degrade speech intelligibility. This is especially true for people with hearing impairments and/or elderly people. Several approaches have been proposed to improve speech intelligibility degraded by reverberation. Steady-state suppression is one such approach, in which speech signals are processed before being radiated through the loudspeakers of a public address system in order to reduce overlap masking, one of the major causes of degraded speech intelligibility. We investigated whether the steady-state suppression technique improves the intelligibility of speech in reverberant environments for elderly listeners. In both simulated and actual reverberant environments, elderly listeners performed worse than younger listeners. Performance in an actual hall was better than with simulated reverberation, consistent with the results for younger listeners. Although the normal-hearing group performed better than the presbycusis group, the steady-state suppression technique improved the intelligibility of speech for elderly listeners in both simulated and actual reverberant environments, as was observed for younger listeners.
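
The core idea of steady-state suppression lends itself to a compact illustration: detect frames whose short-term spectrum is changing little (steady-state portions, e.g., vowel nuclei) and attenuate them before the signal reaches the loudspeakers. The sketch below is a minimal version of that idea; the frame length, spectral-flux threshold, and attenuation amount are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def steady_state_suppression(x, fs, frame_ms=20, atten_db=-10.0, flux_thresh=0.05):
    """Attenuate spectrally steady frames of a mono speech signal x.

    Steady-state portions carry much of the energy that smears into
    following segments under reverberation; suppressing them reduces
    overlap masking. All parameter values here are illustrative.
    """
    n = int(fs * frame_ms / 1000)
    gain = 10.0 ** (atten_db / 20.0)          # -10 dB -> ~0.316
    window = np.hanning(n)
    y = np.array(x, dtype=np.float64)
    prev_mag = None
    for start in range(0, len(y) - n + 1, n):
        mag = np.abs(np.fft.rfft(y[start:start + n] * window))
        if prev_mag is not None:
            # Normalized spectral flux: small values mean a steady spectrum.
            flux = np.sum((mag - prev_mag) ** 2) / (np.sum(prev_mag ** 2) + 1e-12)
            if flux < flux_thresh:
                y[start:start + n] *= gain
        prev_mag = mag
    return y
```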


International Conference on Computers Helping People with Special Needs | 2018

A Preliminary Observation on the Effect of Visual Information in Learning Environmental Sounds for Deaf and Hard of Hearing People

Yuu Kato; Rumi Hiraga; Daisuke Wakatsuki; Keiichi Yasu

Environmental sounds (ES) give us fundamental information about the world around us and are essential to our safety as well as our perception of everyday life. With improvements in noise-reduction techniques, hearing aids and cochlear implants perform better in supporting speech understanding for deaf and hard of hearing (DHH) people. On the other hand, DHH children have little chance to learn ES. We are developing an ES learning system with visual information so that children with severe hearing impairments can enjoy learning ES. In this paper, we describe a preliminary observation on the effect of including visual information, in addition to sound, in learning ES.


Journal of the Acoustical Society of America | 2016

Perception of geminate consonants and devoiced vowels in Japanese by elderly and young listeners

Eri Iwagami; Takayuki Arai; Keiichi Yasu; Emi Yanagisawa; Kei Kobayashi

The goal of this study is to compare the perception of geminate consonants and devoiced vowels by elderly and young Japanese speakers. The study consisted of two perceptual experiments (Exp. 1 and Exp. 2). In Exp. 1, the materials were the nonsense words "ata" and "atta," with the structures 'V1CV2' and 'V1C:V2', and the following three parameters were varied: 1) the formant transition of V1 (FT), 2) V1 duration, and 3) closure duration. In Exp. 2, the materials were 27 Japanese words whose vowels can be devoiced. The results of Exp. 1 showed that, compared with young listeners, the perceptual boundary for geminates shifted toward longer closure durations for elderly listeners in the absence of FT. Moreover, the results of Exp. 2 showed that the misperception rate of young listeners was low both with and without devoicing, whereas for elderly listeners the rate was high with devoicing and low without it.
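
For readers curious how such a closure-duration continuum is typically constructed, the sketch below concatenates a recorded initial-vowel portion, a variable-length silent closure, and a release-plus-vowel portion. The segment names and the closure-duration steps are assumptions for illustration, not the study's actual stimuli or step sizes.

```python
import numpy as np

def closure_continuum(v1, cv2, fs, closures_ms=(40, 80, 120, 160, 200)):
    """Build an "ata"/"atta" continuum by varying the stop closure duration.

    v1  : waveform of the initial vowel (with or without its formant
          transition, depending on the FT condition)
    cv2 : waveform of the release plus the second vowel
    The closure-duration steps here are illustrative, not the study's.
    """
    stimuli = []
    for ms in closures_ms:
        closure = np.zeros(int(fs * ms / 1000))  # silent stop closure
        stimuli.append(np.concatenate([v1, closure, cv2]))
    return stimuli
```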


Journal of the Acoustical Society of America | 2016

Articulation rates of people who do and do not stutter during oral reading and speech shadowing

Rongna A; Keiko Ochi; Keiichi Yasu; Naomi Sakai; Koichi Mori

Purpose: Previous studies indicate that people who stutter (PWS) speak more slowly than people who do not stutter (PWNS), even in fluent utterances. The present study compared the articulation rates of PWS and PWNS in two conditions, oral reading and speech shadowing, in order to elucidate the factors that affect speech rate in PWS. Method: All participants were instructed to read a text aloud and to shadow a model speech without seeing its transcript. The articulation rate (morae per second) was analyzed with the open-source speech recognition engine "Julius," version 4.3.1 (https://github.com/julius-speech/julius). Pauses and disfluencies were excluded from the calculation of the articulation rate. Results: The mean articulation rate of PWS was significantly lower than that of PWNS in oral reading, but not in speech shadowing. PWS showed a significantly faster articulation rate, comparable to that of the model speech, in shadowing than in oral reading, while PW...
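
The articulation-rate measure described here (morae per second, with pauses and disfluencies excluded) reduces to simple bookkeeping over a forced alignment such as the one Julius produces. The sketch below assumes a list of labeled, time-stamped segments; the label names and the one-aligned-unit-per-mora assumption are illustrative.

```python
def articulation_rate(segments):
    """Articulation rate in morae per second.

    `segments` is a list of (label, start_s, end_s) tuples from a forced
    alignment (e.g., produced with Julius). Pauses and disfluencies are
    excluded from both the mora count and the speaking time; the label
    names and the one-unit-per-mora assumption are illustrative.
    """
    morae = 0
    speech_time = 0.0
    for label, start, end in segments:
        if label in ("pause", "disfluency"):
            continue
        morae += 1
        speech_time += end - start
    return morae / speech_time if speech_time > 0 else 0.0

# Example: three morae over 0.45 s of speech; the pause is excluded.
segs = [("a", 0.00, 0.15), ("pause", 0.15, 0.40),
        ("ta", 0.40, 0.55), ("ma", 0.55, 0.70)]
print(articulation_rate(segs))  # ~6.67 morae/s
```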


Journal of the Acoustical Society of America | 2016

Relationship between auditory degradation and fricatives/affricates production and perception in elderly listeners

Keiichi Yasu; Takayuki Arai; Kei Kobayashi; Mitsuko Shindo

Elderly people often complain that they struggle with consonant identification when listening to spoken language. In previous studies, we conducted several experiments, including identification tests for young and elderly listeners using /shi/-/chi/ (CV) and /ishi/-/ichi/ (VCV) continua. We also recorded productions of these CV and VCV syllables. For the CV stimuli, confusion of /shi/ as /chi/ increased when the frication had a long rise time. The degree of confusion increased when degradation of auditory properties, such as threshold elevation at high frequencies, was observed. For the VCV stimuli, confusion of /ichi/ as /ishi/ occurred with a long silent interval between the first V and the C when auditory properties were degraded. In the present study, we analyzed the utterances of those CV and VCV syllables and measured the duration of frication and the silent interval. The direction of the boundary shift in the perception of fricatives and affricates due to auditory degradation was consistent with that of production. We found th...
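
One of the acoustic measures underlying this /shi/-/chi/ work, the rise time of the frication noise, can be estimated from the amplitude envelope. The sketch below measures the time from 10% to 90% of the peak RMS envelope; the thresholds and smoothing window are common illustrative choices, not necessarily those used by the authors.

```python
import numpy as np

def frication_rise_time(frication, fs, lo=0.1, hi=0.9, win_ms=5):
    """Rise time of a frication noise: time from `lo` to `hi` of the
    peak RMS envelope. `frication` is the excised frication portion of
    the syllable; thresholds and window length are illustrative.
    """
    n = max(1, int(fs * win_ms / 1000))
    sq = np.asarray(frication, dtype=np.float64) ** 2
    env = np.sqrt(np.convolve(sq, np.ones(n) / n, mode="same"))  # RMS envelope
    peak = env.max()
    t_lo = np.argmax(env >= lo * peak)  # first sample above 10% of peak
    t_hi = np.argmax(env >= hi * peak)  # first sample above 90% of peak
    return (t_hi - t_lo) / fs
```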


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Errata to “Using Steady-State Suppression to Improve Speech Intelligibility in Reverberant Environments for Elderly Listeners” [Sep 10 1775-1780]

Takayuki Arai; Nao Hodoshima; Keiichi Yasu

In the above titled paper (ibid., vol. 18, no. 7, pp. 1775-1780, Sep. 2010), several IPA fonts were displayed incorrectly. The correct fonts are presented here.


Journal of the Acoustical Society of America | 2008

Critical‐band compression method of speech enhancement for elderly people: Investigation of syllable and word intelligibility

Keiichi Yasu; Hideki Ishida; Ryosuke Takahashi; Takayuki Arai; Kei Kobayashi; Mitsuko Shindo

Auditory filters for the hearing impaired tend to be wider than those of normal-hearing people; thus, frequency selectivity decreases because of increased masking effects [Glasberg and Moore, J. Acoust. Soc. Am., 79(4), 1020-1033, 1986]. We have developed a method, called "critical-band compression," in which each critical band is compressed along the frequency axis [Yasu et al., Handbook of the International Hearing Aid Research Conference (IHCON), 55, Lake Tahoe, 2004]. We investigated whether our method improves syllable and word intelligibility. Thirty-one elderly people participated in the experiments. First, we measured the auditory filter bandwidth using a notched-noise method [Patterson, J. Acoust. Soc. Am., 59(3), 640-654, 1976]. Second, we conducted syllable and word intelligibility tests. The compression rates were set to 0% (the original), 25%, 50%, and 75%. The percentages of correct responses were almost the same at the 0%, 25%, and 50% compression rates for both syllable and word intelligibility. No significant correlation was obtained between the compression rate and the auditory filter bandwidth. [Work supported by JSPS KAKENHI (16203041) and the Sophia University Open Research Center from MEXT.]
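
To make the idea of critical-band compression concrete, the sketch below squeezes the content of each Bark-scale critical band toward the band center in a single magnitude-spectrum frame. The Zwicker (1961) band edges are standard; the bin-remapping strategy is a simplification for illustration, not the authors' implementation. A full system would apply this per STFT frame and resynthesize with the original phase via overlap-add.

```python
import numpy as np

# Zwicker (1961) critical-band (Bark) edge frequencies in Hz.
BARK_EDGES = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270,
              1480, 1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300,
              6400, 7700, 9500, 12000, 15500]

def compress_critical_bands(mag, fs, rate):
    """Compress each critical band of one rfft magnitude frame `mag`
    toward its center by `rate` (0.0 = unchanged, 0.5 = 50%).
    Bins left uncovered by the remapping stay zero; this is a
    simplified illustration of frequency-axis compression.
    """
    n_bins = len(mag)
    freqs = np.arange(n_bins) * (fs / 2) / (n_bins - 1)
    out = np.zeros_like(mag)
    for lo, hi in zip(BARK_EDGES[:-1], BARK_EDGES[1:]):
        idx = np.where((freqs >= lo) & (freqs < hi))[0]
        if len(idx) < 2:
            continue
        center = idx[len(idx) // 2]
        # Squeeze this band's bins toward the band center.
        new_idx = center + np.round((idx - center) * (1.0 - rate)).astype(int)
        np.maximum.at(out, new_idx, mag[idx])
    return out
```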


Journal of the Acoustical Society of America | 2008

Developing a bilingual communication aid for a Japanese ALS patient using voice conversion technique

Akemi Iida; Shimpei Kajima; Keiichi Yasu; John M. Kominek; Yasuhiro Aikawa; Takayuki Arai

A bilingual communication aid for a Japanese amyotrophic lateral sclerosis (ALS) patient has been developed. Our previous research showed that a corpus-based speech synthesis method was ideal for synthesizing speech with a voice quality identifiable as the patient's own. However, such a system requires recording a large amount of speech, which is a burden for the patient. In this study, a voice conversion technique was applied so that a smaller amount of recording is needed for synthesis. An English speech synthesis system with the patient's voice was developed using Festival, a corpus-based speech synthesizer, with a voice conversion technique. Two methods for Japanese speech synthesis were attempted using the HTS toolkit. The first used an acoustic model built from all 503 recordings of the patient. The second used an acoustic model built from 503 wave files whose voice had been converted from a native speaker's to the patient's; this method requires fewer recordings from the patient. A perceptual experiment showed that the voice synthesized with the latter method was perceived as closer in voice quality to the patient's natural speech. Lastly, a Windows GUI was developed so that the patient can synthesize speech by typing text.


Acoustical Science and Technology | 2004

Critical-band based frequency compression for digital hearing aids

Keiichi Yasu; Masato Hishitani; Takayuki Arai; Yuji Murahara


Conference of the International Speech Communication Association | 2005

A preprocessing technique for improving speech intelligibility in reverberant environments: the effect of steady-state suppression on elderly people.

Yusuke Miyauchi; Nao Hodoshima; Keiichi Yasu; Nahoko Hayashi; Takayuki Arai; Mitsuko Shindo
