Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Catherine L. Rogers is active.

Publications


Featured research published by Catherine L. Rogers.


Applied Psycholinguistics | 2006

Effects of bilingualism, noise, and reverberation on speech perception by listeners with normal hearing

Catherine L. Rogers; Jennifer J. Lister; Dashielle M. Febo; Joan Besing; Harvey Abrams

This study compared monosyllabic word recognition in quiet, noise, and noise with reverberation for 15 monolingual American English speakers and 12 Spanish–English bilinguals who had learned English prior to 6 years of age and spoke English without a noticeable foreign accent. Significantly poorer word recognition scores were obtained for the bilingual listeners than for the monolingual listeners under conditions of noise and noise with reverberation, but not in quiet. Although bilinguals with little or no foreign accent in their second language are often assumed by their peers, or their clinicians in the case of hearing loss, to be identical in perceptual abilities to monolinguals, the present data suggest that they may have greater difficulty in recognizing words in noisy or reverberant listening environments.
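For readers who want to simulate listening conditions like the ones compared here, the sketch below shows one common way to add reverberation (convolution with a room impulse response) and to mix in noise at a chosen signal-to-noise ratio. It is a minimal illustration under our own assumptions, not the study's actual stimulus-preparation procedure; the function names and the peak normalization are illustrative choices.

    # Minimal sketch (not the study's procedure): simulate quiet, noise,
    # and noise-plus-reverberation conditions for a speech signal.
    import numpy as np
    from scipy.signal import fftconvolve

    def add_reverb(speech, rir):
        """Convolve dry speech with a room impulse response (RIR)."""
        wet = fftconvolve(speech, rir)[: len(speech)]
        return wet / np.max(np.abs(wet))  # renormalize the peak level

    def mix_at_snr(speech, noise, snr_db):
        """Scale the noise so speech power / noise power equals snr_db."""
        noise = noise[: len(speech)]
        gain = np.sqrt(np.mean(speech**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
        return speech + gain * noise

A noise-with-reverberation condition could then be built by reverberating the speech (and, depending on the intended room model, the noise as well) before mixing at the target SNR.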


Language and Speech | 2004

Effects of Noise and Proficiency on Intelligibility of Chinese-Accented English

Catherine L. Rogers; Jonathan M. Dalby; Kanae Nishi

This study compared the intelligibility of native and foreign-accented English speech presented in quiet and mixed with three different levels of background noise. Two native American English speakers and four native Mandarin Chinese speakers for whom English is a second language each read a list of 50 phonetically balanced sentences (Egan, 1948). The authors identified two of the Mandarin-accented English speakers as high-proficiency speakers and two as lower proficiency speakers, based on their intelligibility in quiet (about 95% and 80%, respectively). Original recordings and noise-masked versions of 48 utterances were presented to monolingual American English speakers. Listeners were asked to write down the words they heard the speakers say, and intelligibility was measured as content words correctly identified. While there was a modest difference between native and high-proficiency speech in quiet (about 7%), it was found that adding noise to the signal reduced the intelligibility of high-proficiency accented speech significantly more than it reduced the intelligibility of native speech. Differences between the two groups in the three added noise conditions ranged from about 12% to 33%. This result suggests that even high-proficiency non-native speech is less robust than native speech when it is presented to listeners under suboptimal conditions.
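The intelligibility measure described here, percent of content words correctly identified, is straightforward to operationalize. The sketch below is one hedged example of such keyword scoring, with a hypothetical content-word list; it is not the authors' scoring script, and real protocols also handle homophones and misspellings.

    # Illustrative keyword scoring: intelligibility as the percentage of a
    # sentence's content words that appear in the listener's transcription.
    def score_keywords(transcription, content_words):
        heard = transcription.lower().split()
        hits = sum(1 for w in content_words if w.lower() in heard)
        return 100.0 * hits / len(content_words)

    # e.g. score_keywords("the boy ran home", ["boy", "ran", "home"]) == 100.0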


Journal of the Acoustical Society of America | 2008

Perception of silent-center syllables by native and non-native English speakers

Catherine L. Rogers; Alexandra S. Lopez

The amount of acoustic information that native and non-native listeners need for syllable identification was investigated by comparing the performance of monolingual English speakers and native Spanish speakers with either an earlier or a later age of immersion in an English-speaking environment. Duration-preserved silent-center syllables retaining 10, 20, 30, or 40 ms of the consonant-vowel and vowel-consonant transitions were created for the target vowels /i, ɪ, eɪ, ɛ, æ/ and /ɑ/, spoken by two males in /bVb/ context. Duration-neutral syllables were created by editing the silent portion to equate the duration of all vowels. Listeners identified the syllables in a six-alternative forced-choice task. The earlier learners identified the whole-word and 40 ms duration-preserved syllables as accurately as the monolingual listeners, but identified the silent-center syllables significantly less accurately overall. Only the monolingual listener group identified syllables significantly more accurately in the duration-preserved than in the duration-neutral condition, suggesting that the non-native listeners were unable to recover from the syllable disruption sufficiently to access the duration cues in the silent-center syllables. This effect was most pronounced for the later learners, who also showed the most vowel confusions and the greatest decrease in performance from the whole word to the 40 ms transition condition.
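To make the stimulus manipulation concrete, here is a minimal sketch of a duration-preserved silent-center edit: the initial and final transitions are kept and the vowel center is replaced with silence, so overall syllable duration is unchanged. The parameter names and sample-index labels are hypothetical; the study's actual editing was done on hand-labeled natural tokens.

    # Sketch of a duration-preserved silent-center (SC) stimulus: keep
    # edge_ms of the CV and VC transitions, silence the vowel center.
    # vowel_on / vowel_off are hand-labeled sample indices (placeholders).
    import numpy as np

    def silent_center(syllable, sr, vowel_on, vowel_off, edge_ms):
        edge = int(sr * edge_ms / 1000)                 # transition length in samples
        out = syllable.copy()
        out[vowel_on + edge : vowel_off - edge] = 0.0   # silence the center
        return out                                      # total duration preserved

The duration-neutral variant described above would additionally lengthen or shorten the silent span so that all vowels end up with the same overall duration.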


Journal of the Acoustical Society of America | 1994

Intelligibility training for foreign‐accented speech: A preliminary study

Catherine L. Rogers; Jonathan M. Dalby; Gladys DeVane

A computer‐based speech training system, the Indiana Speech Training Aid (ISTRA), has been shown to be clinically effective for improving the speech of hearing‐impaired and phonologically disordered individuals [Kewley‐Port et al., Clin. Ling. Phon. 5, 13–38 (1991)]. The potential value of speech recognition technology for the training of foreign‐accented speech, using an overall measure of speech quality as feedback, was assessed. Phonological errors in English spoken by two native Mandarin speakers were analyzed and several training targets selected. One consonant and one vowel contrast each were selected for training using ISTRA, and by a speech‐language pathologist (SLP). Pre‐ and posttraining tokens were rated for quality by a listener jury. Results showed significant improvement for both the ISTRA‐trained consonant /l/, and the SLP‐trained consonant /θ/. However, only one of the two vowel contrasts for each speaker improved significantly. ISTRA‐trained sounds which showed significant improvement did...


Journal of the Acoustical Society of America | 1996

Prediction of foreign‐accented speech intelligibility from segmental contrast measures

Catherine L. Rogers; Jonathan M. Dalby

The intelligibility of foreign‐accented English was investigated using minimal‐pairs contrasts probing a number of different error types. Forty‐four native English‐speaking listeners were presented with English words, sentences, and a brief passage produced by one of eight native speakers of Mandarin Chinese or one native English speaker. The 190 words were presented to listeners in a minimal‐pairs forced‐choice task. For the sentences and passage, listeners were instructed to write down what they understood. A feature‐based analysis of the minimal‐pairs data was performed, with percent correct scores computed for each feature. The sentence and passage data, scored as percent of content words correctly transcribed by listeners, were transformed and used as the dependent variables in two multiple regression analyses, with seven feature scores from the minimal‐pairs test (four consonant and three vowel features) used as the independent variables. The seven minimal‐pairs variables accounted for approximately...
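The analysis described, regressing a transformed intelligibility score on seven minimal-pairs feature scores, corresponds to an ordinary multiple linear regression. The sketch below uses random placeholder arrays standing in for the real measurements; it shows the shape of the analysis, not the authors' code or data.

    # Multiple regression sketch: transformed intelligibility predicted from
    # seven minimal-pairs feature scores (4 consonant + 3 vowel features).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = rng.random((44, 7))      # placeholder rows of feature scores
    y = rng.random(44)           # placeholder transformed percent-correct scores

    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(fit.rsquared)          # variance accounted for by the 7 predictors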


Journal of the Acoustical Society of America | 2001

Effects of noise and proficiency level on intelligibility of Chinese‐accented English

Catherine L. Rogers; Jonathan M. Dalby; Kanae Nishi

It is known that native speech intelligibility is degraded in background noise. This study compares the effect of noise on the intelligibility of English sentences produced by native English speakers and two groups of native Mandarin speakers with different English proficiency levels. High‐proficiency Mandarin speakers spoke with detectable accents, but their speech was transcribed at about 95% of words correct in a previous study, in which no noise was added [C. Rogers and J. Dalby, J. Acoust. Soc. Am. 100, 2725 (1996)]. Low‐proficiency Mandarin speakers were transcribed at about 80% correct in the same study. Forty‐eight sentences spoken by six speakers (two native, two high proficiency, and two low proficiency) were transcribed by listeners under four conditions: with no added noise and mixed with multi‐talker babble at three signal‐to‐noise ratios (+10, 0, and −5 dB). Transcription accuracy was poor for all speakers in the noisiest condition, although substantially greater for native than for Mandarin...
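These three noise conditions map directly onto the mix_at_snr sketch given after the 2006 abstract above; for instance, with sentence and babble as placeholder signal arrays:

    # Generate the three noise-masked conditions at the stated SNRs.
    for snr_db in (10, 0, -5):
        noisy = mix_at_snr(sentence, babble, snr_db)  # babble: multi-talker noise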


Journal of the Acoustical Society of America | 2010

Vowel identification by younger and older listeners: Relative effectiveness of vowel edges and vowel centers

Gail S. Donaldson; Elizabeth Talmage; Catherine L. Rogers

Young normal-hearing (YNH) and older normal-hearing (ONH) listeners identified vowels in naturally produced /bVb/ syllables and in modified syllables that consisted of variable portions of the vowel edges (silent-center [SC] stimuli) or vowel center (center-only [CO] stimuli). Listeners achieved high levels of performance for all but the shortest stimuli, indicating that they were able to access vowel cues throughout the syllable. ONH listeners performed similarly to YNH listeners for most stimuli, but performed more poorly for the shortest CO stimuli. SC and CO stimuli were equally effective in supporting vowel identification except when acoustic information was limited to 20 ms.
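The center-only (CO) manipulation is the complement of the silent-center edit sketched earlier: a variable-length portion of the vowel center is kept and the syllable edges are silenced. Again, this is a hedged sketch with placeholder labels, not the authors' editing procedure.

    # Sketch of a center-only (CO) stimulus: keep center_ms around the vowel
    # midpoint, silence everything else. vowel_on / vowel_off are placeholders.
    import numpy as np

    def center_only(syllable, sr, vowel_on, vowel_off, center_ms):
        half = int(sr * center_ms / 2000)   # half of the kept span, in samples
        mid = (vowel_on + vowel_off) // 2
        out = np.zeros_like(syllable)
        out[mid - half : mid + half] = syllable[mid - half : mid + half]
        return out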


Journal of the Acoustical Society of America | 2009

Intelligibility of Spanish‐accented English words in noise

Jonathan M. Dalby; Catherine L. Rogers

The intelligibility of Mandarin‐accented English sentences, even those spoken by highly proficient non‐native speakers, is degraded more than is native speech when presented to native listeners in noise [Rogers et al. (2004)]. Comprehension of accented speech may require more processing time than native speech even when presented in quiet [Munro and Derwing (1995)]. These effects are similar to effects found by Pisoni and his colleagues for synthetic, as compared to natural speech [Winters and Pisoni (2003)] and together suggest that the ability of native listeners to adapt relatively quickly and effectively to accented speech [Bradlow and Bent (2008); Clark and Garrett (2004)] may come at the expense of increased cognitive effort. The present study examines the effects of noise on the intelligibility of Mandarin‐accented isolated words from speakers representing a wide range of oral English proficiency based on connected‐speech measures. A subset of these words, those with the highest open‐set identifica...


Journal of the Acoustical Society of America | 2009

Spoken word recognition in quiet and noise by native and non‐native listeners: Effects of age of immersion and vocabulary size

Astrid Zerla Doty; Catherine L. Rogers; Judith Becker Bryant

In spoken word recognition, high‐frequency words with few and less frequently occurring minimal‐pair “neighbors” (lexically easy words) are recognized more accurately than low‐frequency words with many and more frequently occurring neighbors (lexically hard words). Bradlow and Pisoni [J. Acoust. Soc. Am., 106, 2074–2085 (1999)] found a larger “easy‐hard” word effect for non‐native than native speakers of English. The present study extends this work by specifically comparing word recognition by non‐native listeners with either earlier (age 10 or earlier) or later (age 14 or later) ages of immersion in an English‐speaking environment to that of native English speakers. Listeners heard six lists of 24 words, each composed of 12 lexically easy and 12 lexically hard target words in an open‐set word‐identification task. Word lists were presented in quiet and in moderate background noise. A substantially larger easy‐hard word effect was obtained only for the later learners, but a measure of oral vocabulary size ...
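The easy versus hard distinction rests on lexical neighborhood density: a word's neighbors are conventionally defined as all words differing from it by a single phoneme substitution, insertion, or deletion. Below is a minimal sketch of that definition over a toy phoneme-string lexicon; the example entries are placeholders, not the study's materials.

    # Neighborhood-density sketch: neighbors differ by exactly one phoneme
    # (substitution, insertion, or deletion).
    def is_one_edit(a, b):
        if a == b or abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):                               # substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = (a, b) if len(a) < len(b) else (b, a)
        i = 0
        while i < len(short) and short[i] == long_[i]:     # insertion/deletion
            i += 1
        return short[i:] == long_[i + 1:]

    def neighbors(word, lexicon):
        return [w for w in lexicon if is_one_edit(word, w)]

    # e.g. neighbors("kæt", ["bæt", "kɪt", "kæts", "dɔg"]) -> ["bæt", "kɪt", "kæts"]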


Journal of the Acoustical Society of America | 2007

Clear speech effects for vowels produced by monolingual and bilingual talkers

Teresa M. DeMasi; Catherine L. Rogers; Jean C. Krause

The present study investigates the hypothesis that bilinguals may produce a smaller intelligibility benefit than monolinguals when asked to speak clearly. Three groups of talkers were recorded: 13 monolingual native English speakers, 22 early Spanish‐English bilinguals, with an age of onset of learning English (AOL) of 12 or earlier, and 14 later Spanish‐English bilinguals, with an AOL of 15 or later. Talkers produced the target words “bead, bid, bayed, bed, bad” and “bod” in both clear and conversational speech styles. Two repetitions of each target word were mixed with noise and presented to monolingual English‐speaking listeners in a six‐alternative forced‐choice task across two days of testing. Stimuli were also presented in quiet on two subsequent days of testing. In preliminary data from 13 listeners, the early bilinguals were slightly more intelligible in noise than the monolingual talkers, with both groups showing a similar degree of clear speech benefit. Later bilinguals were less intelligibl...

Collaboration


Dive into Catherine L. Rogers's collaborations.

Top Co-Authors

Teresa M. DeMasi
University of South Florida

Jean C. Krause
University of South Florida

Kanae Nishi
University of South Florida

Michelle Bianchi
University of South Florida

Astrid Zerla Doty
University of South Florida

Jennifer J. Lister
University of South Florida