
Publication


Featured research published by Ann R. Bradlow.


Attention, Perception, & Psychophysics | 1999

Training Japanese listeners to identify English /r/ and /l/: Long-term retention of learning in perception and production

Ann R. Bradlow; Reiko Akahane-Yamada; David B. Pisoni; Yoh’ichi Tohkura

Previous work from our laboratories has shown that monolingual Japanese adults who were given intensive high-variability perceptual training improved in both perception and production of English /r/-/l/ minimal pairs. In this study, we extended those findings by investigating the long-term retention of learning in both perception and production of this difficult non-native contrast. Results showed that 3 months after completion of the perceptual training procedure, the Japanese trainees maintained their improved levels of performance on the perceptual identification task. Furthermore, perceptual evaluations by native American English listeners of the Japanese trainees’ pretest, posttest, and 3-month follow-up speech productions showed that the trainees retained their long-term improvements in the general quality, identifiability, and overall intelligibility of their English /r/-/l/ word productions. Taken together, the results provide further support for the efficacy of high-variability laboratory speech sound training procedures, and suggest an optimistic outlook for the application of such procedures for a wide range of “special populations.” This work was supported by NIDCD Training Grant DC-00012 and by NIDCD Research Grant DC-00111 to Indiana University.


Journal of the Acoustical Society of America | 2002

The clear speech effect for non-native listeners

Ann R. Bradlow; Tessa Bent

Previous work has established that naturally produced clear speech is more intelligible than conversational speech for adult hearing-impaired listeners and normal-hearing listeners under degraded listening conditions. The major goal of the present study was to investigate the extent to which naturally produced clear speech is an effective intelligibility enhancement strategy for non-native listeners. Thirty-two non-native and 32 native listeners were presented with naturally produced English sentences. Factors that varied were speaking style (conversational versus clear), signal-to-noise ratio (-4 versus -8 dB) and talker (one male versus one female). Results showed that while native listeners derived a substantial benefit from naturally produced clear speech (an improvement of about 16 rau units on a keyword-correct count), non-native listeners exhibited only a small clear speech effect (an improvement of only 5 rau units). This relatively small clear speech effect for non-native listeners is interpreted as a consequence of the fact that clear speech is essentially native-listener oriented, and therefore is only beneficial to listeners with extensive experience with the sound structure of the target language.
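The "rau units" in the abstract above refer to rationalized arcsine units, a transform commonly used in speech audiometry (Studebaker, 1985) that stabilizes the variance of keyword-correct scores so that differences near 0% or 100% are comparable to differences in the mid-range. As a minimal sketch of how such a score is computed (this is an illustration of the standard transform, not code from the study itself):

```python
import math

def rau(correct: int, total: int) -> float:
    """Rationalized arcsine unit (RAU) transform (Studebaker, 1985).

    Maps a count of `correct` keywords out of `total` onto a scale that is
    roughly linear in percent correct over the mid-range but stretches the
    extremes, stabilizing variance for scores near the floor or ceiling.
    """
    # Arcsine transform of the proportion correct, averaged over two
    # slightly offset proportions to handle 0 and N counts gracefully.
    theta = (math.asin(math.sqrt(correct / (total + 1)))
             + math.asin(math.sqrt((correct + 1) / (total + 1))))
    # Linear rescaling so mid-range values track percent correct.
    return (146.0 / math.pi) * theta - 23.0
```

On this scale a mid-range score maps close to its percent-correct value (50 of 100 keywords gives 50.0 rau), while perfect and zero scores fall outside the 0-100 range; this is why effect sizes such as the 16-rau-unit clear speech benefit are reported in rau rather than raw percentage points.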


Language and Cognitive Processes | 2012

Speech recognition in adverse conditions: A review

Sven L. Mattys; Matthew H. Davis; Ann R. Bradlow; Sophie K. Scott

This article presents a review of the effects of adverse conditions (ACs) on the perceptual, linguistic, cognitive, and neurophysiological mechanisms underlying speech recognition. The review starts with a classification of ACs based on their origin: Degradation at the source (production of a noncanonical signal), degradation during signal transmission (interfering signal or medium-induced impoverishment of the target signal), receiver limitations (peripheral, linguistic, cognitive). This is followed by a parallel, yet orthogonal classification of ACs based on the locus of their effect: Perceptual processes, mental representations, attention, and memory functions. We then review the added value that ACs provide for theories of speech recognition, with a focus on fundamental themes in psycholinguistics: Content and format of lexical representations, time-course of lexical access, word segmentation, feed-back in speech perception and recognition, lexical-semantic integration, interface between the speech system and general cognition, neuroanatomical organisation of speech processing. We conclude by advocating an approach to speech recognition that includes rather than neutralises complex listening environments and individual differences.


JARO: Journal of the Association for Research in Otolaryngology | 2000

Consequences of neural asynchrony: a case of auditory neuropathy.

Nina Kraus; Ann R. Bradlow; Mary Ann Cheatham; Jenna Cunningham; Cynthia King; Dawn Burton Koch; Trent Nicol; Therese McGee; Laszlo Stein; Beverly A. Wright

The neural representation of sensory events depends upon neural synchrony. Auditory neuropathy, a disorder of stimulus-timing-related neural synchrony, provides a model for studying the role of synchrony in auditory perception. This article presents electrophysiological and behavioral data from a rare case of auditory neuropathy in a woman with normal hearing thresholds, making it possible to separate audibility from neuropathy. The experimental results, which encompass a wide range of auditory perceptual abilities and neurophysiologic responses to sound, provide new information linking neural synchrony with auditory perception. Findings illustrate that optimal eighth nerve and auditory brainstem synchrony do not appear to be essential for understanding speech in quiet listening situations. However, synchrony is critical for understanding speech in the presence of noise.


Journal of the Acoustical Society of America | 2004

Production and perception of clear speech in Croatian and English

Rajka Smiljanic; Ann R. Bradlow

Previous research has established that naturally produced English clear speech is more intelligible than English conversational speech. The major goal of this paper was to establish the presence of the clear speech effect in production and perception of a language other than English, namely Croatian. A systematic investigation of the conversational-to-clear speech transformations across languages with different phonological properties (e.g., large versus small vowel inventory) can provide a window into the interaction of general auditory-perceptual and phonological, structural factors that contribute to the high intelligibility of clear speech. The results of this study showed that naturally produced clear speech is a distinct, listener-oriented, intelligibility-enhancing mode of speech production in both languages. Furthermore, the acoustic-phonetic features of the conversational-to-clear speech transformation revealed cross-language similarities in clear speech production strategies. In both languages, talkers exhibited a decrease in speaking rate and an increase in pitch range, as well as an expansion of the vowel space. Notably, the findings of this study showed equivalent vowel space expansion in English and Croatian clear speech, despite the difference in vowel inventory size across the two languages, suggesting that the extent of vowel contrast enhancement in hyperarticulated clear speech is independent of vowel inventory size.


Attention, Perception, & Psychophysics | 1999

Effects of talker, rate, and amplitude variation on recognition memory for spoken words

Ann R. Bradlow; Lynne C. Nygaard; David B. Pisoni

This study investigated the encoding of the surface form of spoken words using a continuous recognition memory task. The purpose was to compare and contrast three sources of stimulus variability—talker, speaking rate, and overall amplitude—to determine the extent to which each source of variability is retained in episodic memory. In Experiment 1, listeners judged whether each word in a list of spoken words was “old” (had occurred previously in the list) or “new.” Listeners were more accurate at recognizing a word as old if it was repeated by the same talker and at the same speaking rate; however, there was no recognition advantage for words repeated at the same overall amplitude. In Experiment 2, listeners were first asked to judge whether each word was old or new, as before, and then they had to explicitly judge whether it was repeated by the same talker, at the same rate, or at the same amplitude. On the first task, listeners again showed an advantage in recognition memory for words repeated by the same talker and at the same speaking rate, but no advantage occurred for the amplitude condition. However, in all three conditions, listeners were able to explicitly detect whether an old word was repeated by the same talker, at the same rate, or at the same amplitude. These data suggest that although information about all three properties of spoken words is encoded and retained in memory, each source of stimulus variation differs in the extent to which it affects episodic memory for spoken words.


Journal of the Acoustical Society of America | 2007

Semantic and phonetic enhancements for speech-in-noise recognition by native and non-native listeners

Ann R. Bradlow; Jennifer A. Alexander

Previous research has shown that speech recognition differences between native and proficient non-native listeners emerge under suboptimal conditions. Current evidence has suggested that the key deficit that underlies this disproportionate effect of unfavorable listening conditions for non-native listeners is their less effective use of compensatory information at higher levels of processing to recover from information loss at the phoneme identification level. The present study investigated whether this non-native disadvantage could be overcome if enhancements at various levels of processing were presented in combination. Native and non-native listeners were presented with English sentences in which the final word varied in predictability and which were produced in either plain or clear speech. Results showed that, relative to the low-predictability-plain-speech baseline condition, non-native listener final word recognition improved only when both semantic and acoustic enhancements were available (high-predictability-clear-speech). In contrast, the native listeners benefited from each source of enhancement separately and in combination. These results suggest that native and non-native listeners apply similar strategies for speech-in-noise perception: The crucial difference is in the signal clarity required for contextual information to be effective, rather than in an inability of non-native listeners to take advantage of this contextual information per se.


Clinical Neurophysiology | 2008

Deficient brainstem encoding of pitch in children with Autism Spectrum Disorders

Nicole Russo; Erika Skoe; Barbara L. Trommer; Trent Nicol; Steven G. Zecker; Ann R. Bradlow; Nina Kraus

OBJECTIVE: Deficient prosody is a hallmark of the pragmatic (socially contextualized) language impairment in Autism Spectrum Disorders (ASD). Prosody communicates emotion and intention and is conveyed through acoustic cues such as pitch contour. Thus, the objective of this study was to examine the subcortical representations of prosodic speech in children with ASD.

METHODS: Using passively evoked brainstem responses to speech syllables with descending and ascending pitch contours, we examined sensory encoding of pitch in children with ASD who had normal intelligence and hearing and were age-matched with typically developing (TD) control children.

RESULTS: We found that some children on the autism spectrum show deficient pitch tracking (evidenced by increased Frequency and Slope Errors and reduced phase locking) compared with TD children.

CONCLUSIONS: This is the first demonstration of subcortical involvement in prosody encoding deficits in this population of children.

SIGNIFICANCE: Our findings may have implications for diagnostic and remediation strategies in a subset of children with ASD and open up an avenue for future investigations.


Journal of the Acoustical Society of America | 1999

Effects of lengthened formant transition duration on discrimination and neural representation of synthetic CV syllables by normal and learning-disabled children

Ann R. Bradlow; Nina Kraus; Trent Nicol; Therese McGee; Jenna Cunningham; Steven G. Zecker; Thomas D. Carrell

In order to investigate the precise acoustic features of stop consonants that pose perceptual difficulties for some children with learning problems, discrimination thresholds along two separate synthetic /da-ga/ continua were compared in a group of children with learning problems (LP) and a group of normal children. The continua differed only in the duration of the formant transitions. Results showed that simply lengthening the formant transition duration from 40 to 80 ms did not result in improved discrimination thresholds for the LP group relative to the normal group. Consistent with previous findings, an electrophysiologic response that is known to reflect the brain's representation of a change from one auditory stimulus to another, the mismatch negativity (MMN), indicated diminished responses in the LP group relative to the normal group to /da/ versus /ga/ when the transition duration was 40 ms. In the lengthened transition duration condition the MMN responses from the LP group were more similar to those from the normal group, and were enhanced relative to the short transition duration condition. These data suggest that extending the duration of the critical portion of the acoustic stimulus can result in enhanced encoding at a preattentive neural level; however, this stimulus manipulation on its own is not a sufficient acoustic enhancement to facilitate increased perceptual discrimination of this place-of-articulation contrast.


Journal of Experimental Psychology: Human Perception and Performance | 2006

The Influence of Linguistic Experience on the Cognitive Processing of Pitch in Speech and Nonspeech Sounds.

Tessa Bent; Ann R. Bradlow; Beverly A. Wright

In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

Collaboration


Dive into Ann R. Bradlow's collaborations.

Top Co-Authors

Rajka Smiljanic, University of Texas at Austin
Nina Kraus, Northwestern University
Midam Kim, Northwestern University
David B. Pisoni, Indiana University Bloomington
Trent Nicol, Northwestern University