Jonathan M. Dalby
Indiana University
Publications
Featured research published by Jonathan M. Dalby.
Language and Speech | 2004
Catherine L. Rogers; Jonathan M. Dalby; Kanae Nishi
This study compared the intelligibility of native and foreign-accented English speech presented in quiet and mixed with three different levels of background noise. Two native American English speakers and four native Mandarin Chinese speakers for whom English is a second language each read a list of 50 phonetically balanced sentences (Egan, 1948). The authors identified two of the Mandarin-accented English speakers as high-proficiency speakers and two as lower-proficiency speakers, based on their intelligibility in quiet (about 95% and 80%, respectively). Original recordings and noise-masked versions of 48 utterances were presented to monolingual American English speakers. Listeners were asked to write down the words they heard the speakers say, and intelligibility was measured as the percentage of content words correctly identified. While there was a modest difference between native and high-proficiency speech in quiet (about 7%), adding noise to the signal reduced the intelligibility of high-proficiency accented speech significantly more than it reduced the intelligibility of native speech. Differences between the two groups in the three added-noise conditions ranged from about 12% to 33%. This result suggests that even high-proficiency non-native speech is less robust than native speech when it is presented to listeners under suboptimal conditions.
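The measure used above, the percentage of content words correctly identified in a listener's written transcription, can be sketched as a small scoring function. This is an illustrative reconstruction, not the study's actual scoring code; the function-word list here is a hypothetical stand-in for whatever exclusion criteria the authors applied.

```python
# Hypothetical stop list; the study's actual content-word criteria may differ.
FUNCTION_WORDS = {"a", "an", "the", "of", "in", "on", "to", "is", "was", "and"}

def content_word_score(target: str, transcription: str) -> float:
    """Percent of content words in `target` found in the listener's `transcription`."""
    content = [w for w in target.lower().split() if w not in FUNCTION_WORDS]
    heard = set(transcription.lower().split())
    if not content:
        return 0.0
    hits = sum(1 for w in content if w in heard)
    return 100.0 * hits / len(content)
```

For example, a listener who writes "a cat sat near the mat" for the target "the cat sat on the mat" recovers all three content words and scores 100%.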
Journal of the Acoustical Society of America | 1994
Catherine L. Rogers; Jonathan M. Dalby; Gladys DeVane
A computer‐based speech training system, the Indiana Speech Training Aid (ISTRA), has been shown to be clinically effective for improving the speech of hearing‐impaired and phonologically disordered individuals [Kewley‐Port et al., Clin. Ling. Phon. 5, 13–38 (1991)]. The potential value of speech recognition technology for the training of foreign‐accented speech, using an overall measure of speech quality as feedback, was assessed. Phonological errors in English spoken by two native Mandarin speakers were analyzed and several training targets selected. One consonant and one vowel contrast each were selected for training using ISTRA, and by a speech‐language pathologist (SLP). Pre‐ and posttraining tokens were rated for quality by a listener jury. Results showed significant improvement for both the ISTRA‐trained consonant /l/, and the SLP‐trained consonant /θ/. However, only one of the two vowel contrasts for each speaker improved significantly. ISTRA‐trained sounds which showed significant improvement did...
Journal of the Acoustical Society of America | 2004
James D. Miller; Jonathan M. Dalby; Charles S. Watson; Deborah F. Burleson
Five experienced hearing‐aid users with sensorineural hearing loss were given 14 h of intensive training identifying consonants in quiet and noise. Their performance was compared to that of five similar hearing‐aid users with no special training. All listeners had moderate to severe hearing losses and had worn hearing aids for at least 1 year. All were pretested with a set of 20 consonants combined with three vowels /I,a,u/ as spoken by six different talkers. Pretests were conducted in quiet and in noise (multitalker babble) at moderate signal‐to‐noise ratios (SNRs). Training was conducted with eight target consonants (TCs). The TCs were in each listener’s middle range of difficulty, and the three most common confusors for each target were individually selected, forming target sets of four consonants. Training was conducted in quiet and noise. During training, trial‐by‐trial feedback was given and, following an error, the listener could rapidly compare the intended syllable with its confusor. In noise, th...
Journal of the Acoustical Society of America | 1996
Catherine L. Rogers; Jonathan M. Dalby
The intelligibility of foreign‐accented English was investigated using minimal‐pairs contrasts probing a number of different error types. Forty‐four native English‐speaking listeners were presented with English words, sentences, and a brief passage produced by one of eight native speakers of Mandarin Chinese or one native English speaker. The 190 words were presented to listeners in a minimal‐pairs forced‐choice task. For the sentences and passage, listeners were instructed to write down what they understood. A feature‐based analysis of the minimal‐pairs data was performed, with percent correct scores computed for each feature. The sentence and passage data, scored as percent of content words correctly transcribed by listeners, were transformed and used as the dependent variables in two multiple regression analyses, with seven feature scores from the minimal‐pairs test (four consonant and three vowel features) used as the independent variables. The seven minimal‐pairs variables accounted for approximately...
Journal of the Acoustical Society of America | 1994
Keiichi Tajima; Robert F. Port; Jonathan M. Dalby
The reduced intelligibility of foreign‐accented speech is usually attributed to errors in the positioning of the articulators. But how much would correction of their timing improve intelligibility? One way to estimate this effect is to manipulate the acoustic timing of foreign‐accented speech. Short English phrases spoken by a native Chinese talker were manually edited so as to align the duration of acoustic segments with tokens of the same phrases spoken by a native English talker. Editing methods included: (i) insertion and deletion of pitch periods, frication noise, and silence, and (ii) amplitude reduction for approximation of nasal segments. A group of native American English listeners identified each phrase in a two‐alternative forced‐choice task: the correct phrase (e.g., ‘‘equal size’’) versus a distractor phrase suggested by listening to the Chinese production (‘‘you’re concise’’). Performance was only slightly above chance for the unmodified tokens, but reached 80% correct for the modified versions. Per...
Journal of the Acoustical Society of America | 2001
Catherine L. Rogers; Jonathan M. Dalby; Kanae Nishi
It is known that native speech intelligibility is degraded in background noise. This study compares the effect of noise on the intelligibility of English sentences produced by native English speakers and two groups of native Mandarin speakers with different English proficiency levels. High‐proficiency Mandarin speakers spoke with detectable accents, but their speech was transcribed at about 95% of words correct in a previous study, in which no noise was added [C. Rogers and J. Dalby, J. Acoust. Soc. Am. 100, 2725 (1996)]. Low‐proficiency Mandarin speakers were transcribed at about 80% correct in the same study. Forty‐eight sentences spoken by six speakers (two native, two high proficiency, and two low proficiency) were transcribed by listeners under four conditions: with no added noise and mixed with multi‐talker babble at three signal‐to‐noise ratios (+10, 0, and −5 dB). Transcription accuracy was poor for all speakers in the noisiest condition, although substantially greater for native than for Mandarin...
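Mixing sentences with babble at fixed signal-to-noise ratios, as in the +10, 0, and −5 dB conditions above, amounts to scaling the noise so that the speech-to-noise power ratio hits the target before adding the two waveforms. A minimal sketch in Python with NumPy (an illustration of the standard RMS-based method, not the stimulus-preparation code used in the study):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add."""
    noise = noise[: len(speech)]                    # trim babble to utterance length
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)
```

With this helper, `mix_at_snr(speech, babble, -5.0)` would produce the noisiest condition described above, where the babble is 5 dB more intense than the speech.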
Journal of the Acoustical Society of America | 1996
Keiichi Tajima; Jonathan M. Dalby; Robert F. Port
The prosodic characteristics of utterances produced by second‐language learners were investigated using reiterant speech (RS), a stylized form of speaking in which every syllable is replaced with a standard syllable such as [ma], so that the phrase ‘‘the table’’ is pronounced ‘‘ma MAma.’’ Native speakers of Chinese and Spanish produced RS versions of short phrases in English and in their native language. Preliminary phonetic analysis shows that the English RS tokens were less accurately produced than were the native‐language tokens. A subset of the English tokens were then used as stimuli in a perception test in which native English listeners heard each RS phrase and judged whether it ‘‘matched’’ or ‘‘did not match’’ an English phrase presented to them visually. The stimulus set consisted of English RS produced by both non‐native and native speakers. Preliminary results indicate that listeners were better at judging the match/mismatch of the native English RS tokens than the non‐native tokens. This result...
Journal of the Acoustical Society of America | 2009
Jonathan M. Dalby; Catherine L. Rogers
The intelligibility of Mandarin‐accented English sentences, even those spoken by highly proficient non‐native speakers, is degraded more than is native speech when presented to native listeners in noise [Rogers et al. (2004)]. Comprehension of accented speech may require more processing time than native speech even when presented in quiet [Munro and Derwing (1995)]. These effects are similar to effects found by Pisoni and his colleagues for synthetic, as compared to natural speech [Winters and Pisoni (2003)] and together suggest that the ability of native listeners to adapt relatively quickly and effectively to accented speech [Bradlow and Bent (2008); Clark and Garrett (2004)] may come at the expense of increased cognitive effort. The present study examines the effects of noise on the intelligibility of Mandarin‐accented isolated words from speakers representing a wide range of oral English proficiency based on connected‐speech measures. A subset of these words, those with the highest open‐set identifica...
Journal of the Acoustical Society of America | 1995
Catherine L. Rogers; Jonathan M. Dalby
Although it is generally accepted that a strong foreign accent renders a speaker less intelligible to native listeners, few studies have attempted to investigate specific sources of this deficit. The present study explores the diagnostic effectiveness of a minimal‐pairs test of intelligibility. An inventory of phonemic errors was compiled from careful transcriptions of the spoken English of two native speakers of Mandarin Chinese. Minimal pairs were constructed for each error, using the intended phoneme and the closest English phoneme transcribed. Eight additional native speakers of Mandarin Chinese were recorded reading the target words in the minimal‐pairs list and a set of 20 sentences. The minimal‐pair target words were presented to groups of native listeners in a forced‐choice task; in a second task, listeners were presented with the sentences and asked to write down what they understood. Preliminary results from listener groups for three of the speakers demonstrate (1) no significant differences acr...
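Constructing minimal pairs from an error inventory, as described above, can be automated given a pronouncing lexicon: find word pairs whose phoneme sequences differ in exactly one position, with the intended phoneme in one word and the transcribed substitute in the other. A hypothetical sketch (the lexicon format and function are illustrative, not taken from the study):

```python
def minimal_pairs(lexicon: dict, phone_a: str, phone_b: str) -> list:
    """Return word pairs differing only in one position, where that position
    contrasts phone_a with phone_b (e.g., an intended vs. substituted phoneme)."""
    pairs = []
    words = list(lexicon.items())
    for i, (w1, p1) in enumerate(words):
        for w2, p2 in words[i + 1:]:
            if len(p1) != len(p2):
                continue
            diffs = [(a, b) for a, b in zip(p1, p2) if a != b]
            if len(diffs) == 1 and set(diffs[0]) == {phone_a, phone_b}:
                pairs.append((w1, w2))
    return pairs
```

Given a lexicon mapping words to phoneme lists, a transcribed /l/-for-/r/ substitution would yield pairs such as ("light", "right") for use in the forced-choice task.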
Journal of the Acoustical Society of America | 2014
Jonathan M. Dalby; Teresa Barcenas; Tanya August
This study compared the intelligibility of native and foreign-accented American English speech presented in quiet and mixed with two different levels of background noise. Two native American English speakers and two native Mandarin Chinese speakers for whom English is a second language read three 50-word lists of phonetically balanced words (Stuart, 2004). The words were presented in three signal-to-noise conditions: no noise (quiet), +10 dB SNR (signal level 10 dB above the noise), and 0 dB SNR (signal and noise at equal levels). These stimuli were presented to ten native American English listeners, who were asked to repeat the words they heard the speakers say. Listener response latencies were measured. The results showed that, for both native and accented speech, response latencies increased as the noise level increased. For words identified correctly, response times to accented speech were longer than for native speech, but the noise conditions affected both types equally. For words judged ...