
Publication


Featured research published by Kanae Nishi.


Journal of the Acoustical Society of America | 2007

Children’s recognition of American English consonants in noise

Kanae Nishi; Dawna E. Lewis; Brenda Hoover; Sangsook Choi; Patricia G. Stelmachowicz

In contrast to the availability of consonant confusion studies with adults, to date, no investigators have compared children's consonant confusion patterns in noise to those of adults in a single study. To examine whether children's error patterns are similar to those of adults, three groups of children (24 each at 4-5, 6-7, and 8-9 years old) and 24 adult native speakers of American English (AE) performed a recognition task for 15 AE consonants in /ɑ/-consonant-/ɑ/ nonsense syllables presented in a background of speech-shaped noise. Three signal-to-noise ratios (SNR: 0, +5, and +10 dB) were used. Although performance improved as a function of age, overall consonant recognition accuracy as a function of SNR improved at a similar rate for all groups. Detailed analyses using phonetic features (manner, place, and voicing) revealed that stop consonants were the most problematic for all groups. In addition, for the younger children, front consonants presented in the 0 dB SNR condition were more error-prone than others. These results suggested that children's use of phonetic cues does not develop at the same rate for all phonetic features.
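
As a rough illustration of how stimuli like these are typically prepared (not the authors' actual procedure), the sketch below scales a masking-noise signal so that the mixture reaches a target SNR such as 0, +5, or +10 dB. The function name and arrays are hypothetical.

import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that the speech-to-noise RMS ratio equals `snr_db` dB."""
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_noise = np.sqrt(np.mean(noise[: len(speech)] ** 2))
    # SNR(dB) = 20 * log10(rms_speech / (gain * rms_noise))  ->  solve for gain
    gain = rms_speech / (rms_noise * 10 ** (snr_db / 20.0))
    return speech + gain * noise[: len(speech)]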


Ear and Hearing | 2010

Effects of digital noise reduction on speech perception for children with hearing loss

Patricia G. Stelmachowicz; Dawna E. Lewis; Brenda Hoover; Kanae Nishi; Ryan W. McCreery; William Woods

Objective: Although numerous studies have investigated the effects of single-microphone digital noise-reduction algorithms for adults with hearing loss, similar studies have not been conducted with young hearing-impaired children. The goal of this study was to examine the effects of a commonly used digital noise-reduction scheme (spectral subtraction) in children with mild to moderately severe sensorineural hearing losses. It was hypothesized that the process of spectral subtraction may alter or degrade speech signals in some way. Such degradation may have little influence on the perception of speech by hearing-impaired adults, who are likely to use contextual information under such circumstances. For young children who are still developing various language skills, however, signal degradation may have a more detrimental effect on the perception of speech. Design: Sixteen children (eight 5- to 7-yr-olds and eight 8- to 10-yr-olds) with mild to moderately severe hearing loss participated in this study. All participants wore binaural behind-the-ear hearing aids in which noise-reduction processing was performed independently in 16 bands with center frequencies spaced 500 Hz apart up to 7500 Hz. Test stimuli were nonsense syllables, words, and sentences in a background of noise. For all stimuli, data were obtained in noise reduction (NR) on and off conditions. Results: In general, performance improved as a function of speech-to-noise ratio for all three speech materials. The main effect for stimulus type was significant, and post hoc comparisons indicated that speech recognition was higher for sentences than for both nonsense syllables and words, but no significant differences were observed between nonsense syllables and words. The main effect for NR and the two-way interaction between NR and stimulus type were not significant. Significant age-group effects were observed, but the two-way interaction between NR and age group was not significant. Conclusions: Consistent with previous findings from studies with adults, results suggest that the form of NR used in this study does not have a negative effect on the overall perception of nonsense syllables, words, or sentences across the age range (5 to 10 yrs) and speech-to-noise ratios (0, +5, and +10 dB) tested.
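
The hearing aids in this study implemented noise reduction in 16 bands; purely for illustration, a generic single-channel spectral-subtraction sketch (assumed frame length, hop size, and a precomputed noise magnitude spectrum, none of which are taken from the paper) might look like this:

import numpy as np

def spectral_subtraction(noisy: np.ndarray, noise_mag: np.ndarray,
                         frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Subtract an estimated noise magnitude spectrum frame by frame.

    `noise_mag` is the average magnitude spectrum of a noise-only segment
    (length frame_len // 2 + 1, matching np.fft.rfft of one frame).
    """
    window = np.hanning(frame_len)
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len + 1, hop):
        frame = noisy[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        # Subtract the noise estimate from the magnitude, floor at zero,
        # and resynthesize with the noisy phase (overlap-add).
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        out[start:start + frame_len] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame_len)
    return out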


Journal of the Acoustical Society of America | 2005

The influence of different native language systems on vowel discrimination and identification

Diane Kewley-Port; Ocke-Schwen Bohn; Kanae Nishi

The ability to identify the vowel sounds of a language reliably depends on the ability to discriminate between vowels at a more sensory level. This study examined how the complexity of the vowel systems of three native languages (L1) influenced listeners' perception of American English (AE) vowels. AE has a fairly complex vowel system with 11 monophthongs. In contrast, Japanese has only 5 spectrally different vowels, while Swedish has 9 and Danish has 12. Six listeners from each L1, each with less than 4 months of exposure to English-speaking environments, participated. Their performance in two tasks was compared to that of 6 AE listeners. As expected, there were large differences in a linguistic identification task using 4 confusable AE low vowels. Japanese listeners performed quite poorly compared to listeners with more complex L1 vowel systems. Thresholds for formant discrimination for the 3 groups were very similar to those of native AE listeners. Thus it appears that sensory abilities for discriminating vowel...


Journal of the Acoustical Society of America | 2001

Effects of noise and proficiency level on intelligibility of Chinese‐accented English

Catherine L. Rogers; Jonathan M. Dalby; Kanae Nishi

It is known that native speech intelligibility is degraded in background noise. This study compares the effect of noise on the intelligibility of English sentences produced by native English speakers and two groups of native Mandarin speakers with different English proficiency levels. High-proficiency Mandarin speakers spoke with detectable accents, but their speech was transcribed at about 95% words correct in a previous study, in which no noise was added [C. Rogers and J. Dalby, J. Acoust. Soc. Am. 100, 2725 (1996)]. Low-proficiency Mandarin speakers were transcribed at about 80% correct in the same study. Forty-eight sentences spoken by six speakers (two native, two high-proficiency, and two low-proficiency) were transcribed by listeners under four conditions: with no added noise and mixed with multi-talker babble at three signal-to-noise ratios (+10, 0, and −5 dB). Transcription accuracy was poor for all speakers in the noisiest condition, although substantially greater for native than for Mandarin...


Journal of the Acoustical Society of America | 2009

Learn to Listen (L2L): Perception training system for learners of English as a second language.

Diane Kewley-Port; Kanae Nishi; Hanyong Park; James D. Miller; Charles S. Watson

Computer software (L2L) is being developed for comprehensive perception training of English by second language learners. Our goal is to facilitate generalization of post‐training improvement of phoneme perception to the perception of running speech. Three studies are reported for two groups of adult listeners, one Korean and the other Spanish. In study 1, large sets of confusable phonemes were identified from an assessment task for each group. Then training sets for consonants in CV nonsense syllables and vowels in familiar real words were selected and recordings from multiple talkers were obtained. Materials for the word‐in‐sentence (WIS) task were developed with a single low‐context carrier phrase which contained three words from the vowel training. In study 2 new Korean and Spanish listeners were trained using a protocol that included a pre‐test, eight hours of training, a post‐test, and one hour using the WIS task. Training results showed: (1) both Korean and Spanish listeners improved from pre‐ to po...


Journal of the Acoustical Society of America | 2005

Training Japanese listeners to identify American English vowels

Kanae Nishi; Diane Kewley-Port

Perception training of phonemes by second language (L2) learners has been studied primarily using consonant contrasts, where the number of contrasting sounds rarely exceeds five. In order to investigate the effects of stimulus sets, this training study used two conditions: 9 American English vowels covering the entire vowel space (9V), and 3 difficult vowels for problem-focused training (3V). Native speakers of Japanese were trained for nine days. To assess changes in performance due to training, a battery of perception and production tests was given pre- and post-training, as well as 3 months following training. The 9V trainees improved vowel perception on all vowels after training, on average by 23%. Their performance at the 3-month test was slightly worse than at the posttest, but still better than at the pretest. Transfer of the training effect to stimuli spoken by new speakers was observed. The strong response bias observed in the pretest disappeared after training. The preliminary results of the 3V trainees ...


Ear and Hearing | 2017

Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

Dawna E. Lewis; Judy G. Kopun; Ryan W. McCreery; Marc A. Brennan; Kanae Nishi; Evan Cordrey; Patricia G. Stelmachowicz; Mary Pat Moeller

Objectives: The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), in which semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design: Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones, and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results: Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH rated their confidence significantly lower in the LP condition than in the HP condition; CHH, however, showed no significant difference in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions: The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with normal hearing suggest variations in how these groups use limited acoustic information to select word candidates.
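
A minimal sketch of the gating manipulation described above, assuming the sentence frame and the final word are already separate waveform arrays (the function and variable names are hypothetical, not the authors' code):

import numpy as np

def make_gated_stimuli(frame: np.ndarray, final_word: np.ndarray,
                       n_gates: int) -> list:
    """Return sentences in which the final word grows by equal gated increments."""
    gate_len = len(final_word) // n_gates
    return [np.concatenate([frame, final_word[:gate_len * g]])
            for g in range(1, n_gates + 1)]

# The last stimulus in the list contains (nearly) the whole final word;
# earlier items truncate it at progressively earlier gate points.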


Journal of the Acoustical Society of America | 1997

Acoustic comparison of the effects of coarticulation on the production of Japanese and American English vowels

James J. Jenkins; Winifred Strange; Kanae Nishi; Brett H. Fitzgerald; Sonja A. Trent; David H. Thornton

Cross-language similarities and differences in the acoustic variability of vowels as a function of speaking "style" (citation versus sentences) and phonetic context were explored by comparing the productions of four adult male native speakers each of American English (AE) and Japanese. Multiple instances of the 11 AE vowels /i, ɪ, eɪ, ɛ, æ, ʌ, ɑ, ɔ, oʊ, ʊ, u/ and the 10 Japanese vowels /i, ii, e, ee, a, aa, o, oo, ɯ, ɯɯ/ produced in citation-form bisyllables in /hVba/ and in CVC syllables /bVb, bVp, dVd, gVg, gVk/ embedded in a carrier sentence were analyzed. Formant values at three temporal locations within the vocalic nucleus (25%, 50%, 75%) of the CVC syllables were compared with "canonical" /hVba/ values to determine the amount of "target undershoot" and changes in dynamic formant structure as a function of consonantal context. Vocalic durations were measured to determine the extent to which speaking style and consonantal context influenced relative vowel length information in the two language...
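
For illustration only (not the authors' analysis code), sampling a measured formant track at the 25%, 50%, and 75% points of the vocalic nucleus and expressing target undershoot as the deviation of the midpoint value from a canonical citation-form value could be sketched as follows; the names and the signed-difference definition are assumptions.

import numpy as np

def sample_formant_track(track_hz: np.ndarray, canonical_hz: float) -> dict:
    """Sample a per-frame formant track at three relative time points."""
    idx = np.arange(len(track_hz))
    samples = {f"{int(p * 100)}%": float(np.interp(p * (len(track_hz) - 1), idx, track_hz))
               for p in (0.25, 0.50, 0.75)}
    # Signed deviation of the syllable-midpoint value from the canonical
    # /hVba/ target for the same speaker and vowel (one way to quantify undershoot).
    samples["undershoot_hz"] = canonical_hz - samples["50%"]
    return samples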


Ear and Hearing | 2016

Testing Speech Recognition in Spanish-English Bilingual Children with the Computer-Assisted Speech Perception Assessment (CASPA): Initial Report.

Paula García; Lydia Rosado Rogers; Kanae Nishi

This study evaluated the English version of the Computer-Assisted Speech Perception Assessment (E-CASPA) with Spanish-English bilingual children. E-CASPA has been evaluated with monolingual English speakers aged 5 years and older, but it is unknown whether a separate norm is necessary for bilingual children. Eleven Spanish-English bilingual and 12 English monolingual children (6 to 12 years old) with normal hearing participated. Responses were scored by word, phoneme, consonant, and vowel. Regardless of scoring method, performance across the three signal-to-noise ratio conditions was similar between groups, suggesting that the same norm can be used for both bilingual and monolingual children.
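
As a hedged sketch of what scoring a single response by word, phoneme, consonant, and vowel could look like (assuming positionally aligned phoneme transcriptions; a real procedure such as CASPA's handles insertions and deletions more carefully, and the vowel set here is a placeholder):

# Hypothetical vowel inventory used only to split phonemes into classes.
VOWELS = {"i", "I", "e", "E", "ae", "a", "o", "U", "u", "V"}

def score_response(target: list, response: list) -> dict:
    """Count word-, phoneme-, consonant-, and vowel-level matches."""
    pairs = list(zip(target, response))
    phon_correct = [t for t, r in pairs if t == r]
    return {
        "word": int(target == response),
        "phonemes": len(phon_correct),
        "vowels": sum(1 for p in phon_correct if p in VOWELS),
        "consonants": sum(1 for p in phon_correct if p not in VOWELS),
    }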


Journal of the Acoustical Society of America | 2003

Acoustic comparisons of Japanese and English vowels produced by native speakers of Japanese

Kanae Nishi; Reiko Akahane-Yamada; Rieko Kubo; Winifred Strange

This study explored acoustic similarities/differences between Japanese (J) and American English (AE) vowels produced by native J speakers and compared production patterns to their perceptual assimilation of AE vowels [Strange et al., J. Phonetics 26, 311–344 (1998)]. Eight male native J speakers who had served as listeners in Strange et al. produced 18 Japanese (J) vowels (5 long-short pairs, 2 double vowels, and 3 long-short palatalized pairs) and 11 American English (AE) vowels in /hVbɑ/ disyllables embedded in a carrier sentence. Acoustical parameters included formant frequencies at syllable midpoint (F1/F2/F3), formant change from the 25% to 75% points in the syllable (formant change), and vocalic duration. Results of linear discriminant analyses showed rather poor acoustic differentiation of J vowel categories when F1/F2/F3 served as input variables (60% correct classification), which greatly improved when duration and formant change were added. In contrast, correct classification of J speakers’ AE vowel...
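
A sketch of the kind of linear discriminant analysis reported above, assuming one row of acoustic measurements per vowel token; the column layout and names are assumptions, and scikit-learn merely stands in for whatever software was actually used.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def percent_correct(features: np.ndarray, vowel_labels: np.ndarray) -> float:
    """Cross-validated classification accuracy (% correct) from an LDA."""
    lda = LinearDiscriminantAnalysis()
    return 100.0 * cross_val_score(lda, features, vowel_labels, cv=5).mean()

# e.g. F1/F2/F3 at syllable midpoint alone vs. adding duration and formant change:
# spectral_only = percent_correct(X[:, :3], labels)
# full_set      = percent_correct(X, labels)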

Collaboration


Dive into Kanae Nishi's collaboration.

Top Co-Authors

Winifred Strange
City University of New York

Sonja A. Trent
University of South Florida

Rieko Kubo
Japan Advanced Institute of Science and Technology

Diane Kewley-Port
Indiana University Bloomington

Charles S. Watson
Indiana University Bloomington