Sharon Miller
University of Minnesota
Publications
Featured research published by Sharon Miller.
Developmental Science | 2011
Yang Zhang; Tess K. Koerner; Sharon Miller; Zach Grice-Patil; Adam Svec; David Akbari; Liz Tusler; Edward Carney
Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of formant exaggeration. ERP waveform analysis showed significantly enhanced N250 for formant exaggeration, which was more prominent in the right hemisphere than the left. Time-frequency analysis indicated increased neural synchronization for processing formant-exaggerated speech in the delta band at frontal-central-parietal electrode sites as well as in the theta band at frontal-central sites. Minimum norm estimates further revealed a bilateral temporal-parietal-frontal neural network in the infant brain sensitive to formant exaggeration. Collectively, these results provide the first evidence that formant expansion in infant-directed speech enhances neural activities for phonetic encoding and language learning.
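The delta- and theta-band synchronization reported in this study is commonly quantified as inter-trial phase locking of the single-trial EEG. The sketch below shows one way such a measure can be computed on placeholder single-electrode data; it is not the authors' pipeline, and the sampling rate, band edges, trial count, and epoch length are all assumptions.

```python
# Minimal sketch (placeholder data, not the study's infant EEG): inter-trial
# phase-locking value (PLV) in the delta and theta bands for one electrode.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 500))      # 60 trials x 2 s, one electrode (synthetic)

def band_plv(epochs, lo, hi, fs):
    """Per-sample inter-trial PLV (0 = random phase across trials, 1 = perfectly locked)."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, epochs, axis=1)          # band-limit each trial
    phase = np.angle(hilbert(filtered, axis=1))          # instantaneous phase per trial
    return np.abs(np.exp(1j * phase).mean(axis=0))       # average unit phasors over trials

delta_plv = band_plv(epochs, 1.0, 4.0, fs)   # delta band
theta_plv = band_plv(epochs, 4.0, 8.0, fs)   # theta band
print(f"peak delta PLV: {delta_plv.max():.2f}, peak theta PLV: {theta_plv.max():.2f}")
```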
Journal of the Acoustical Society of America | 2010
Sharon Miller; Robert S. Schlauch
Previous studies have documented that speech with flattened or inverted fundamental frequency (F0) contours is less intelligible than speech with natural variations in F0. The purpose of the present study was to further investigate how F0 manipulations affect speech intelligibility in background noise. Speech recognition in noise was measured for sentences having the following F0 contours: unmodified, flattened at the median, natural but exaggerated, inverted, and sinusoidally frequency modulated at rates of 2.5 and 5.0 Hz, rates shown to make vowels more perceptually salient in background noise. Five talkers produced 180 stimulus sentences, with 30 unique sentences per F0 contour condition. Flattening or exaggerating the F0 contour reduced key word recognition performance by 13% relative to the naturally produced speech. Inverting or sinusoidally frequency modulating the F0 contour reduced performance by 23% relative to typically produced speech. These results support the notion that linguistically incorrect or misleading cues have a greater deleterious effect on speech understanding than linguistically neutral cues.
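The F0 manipulations described above can be expressed as simple operations on a measured F0 contour. The sketch below illustrates them on a made-up contour; the exaggeration factor, modulation depth, and frame rate are assumptions, and resynthesis of the speech from a modified contour (e.g., via PSOLA) is not shown.

```python
# Minimal sketch (assumed values, not the study's stimuli): building the modified
# F0 contours from a measured contour f0 (in Hz).
import numpy as np

frame_rate = 100.0                            # F0 frames per second (assumption)
t = np.arange(0, 2.0, 1.0 / frame_rate)       # 2-s utterance
f0 = 120 + 20 * np.sin(2 * np.pi * 0.7 * t)   # placeholder "natural" contour (Hz)

median_f0 = np.median(f0)
flattened   = np.full_like(f0, median_f0)                  # flat at the median
exaggerated = median_f0 + 2.0 * (f0 - median_f0)           # natural shape, expanded range (factor assumed)
inverted    = median_f0 - (f0 - median_f0)                 # mirrored about the median
fm_2_5 = median_f0 * 2.0 ** (0.5 * np.sin(2 * np.pi * 2.5 * t))  # sinusoidal FM at 2.5 Hz (depth assumed)
fm_5_0 = median_f0 * 2.0 ** (0.5 * np.sin(2 * np.pi * 5.0 * t))  # sinusoidal FM at 5.0 Hz (depth assumed)
```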
Neuroscience Letters | 2014
Sharon Miller; Yang Zhang
Auditory event-related potentials (ERPs) collected from cochlear implant (CI) users are often contaminated by large electrical device-related artifacts. Using independent component analysis (ICA), the artifacts can be manually identified and removed, and the ERP responses can be reconstructed from the remaining components. Viola et al. [17] recently developed an efficient algorithm that uses spatial and temporal statistics of the components to automate CI artifact removal. The purpose of this study was to perform an independent validation of the algorithm. We further assessed whether the ERP responses were stable over the course of one year when analyzed manually or using the semi-automated approach. To achieve these aims, we collected EEG data from 6 adult CI users at two sessions, one year apart. We compared their ERP responses reconstructed using the algorithm and the manual approach. We found no significant differences between the two approaches to removing the CI artifact across sessions, validating the use of the semi-automated method.
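For readers unfamiliar with the manual workflow, the sketch below shows ICA-based artifact attenuation on synthetic epochs using MNE-Python. It illustrates only the manual step of excluding a hand-picked component; the semi-automated algorithm of Viola et al. additionally ranks components against spatial and temporal artifact templates, which is not reproduced here. The channel count, component index, and data are illustrative assumptions.

```python
# Minimal sketch (synthetic data, not the study's recordings): remove a
# hand-picked "CI artifact" component with ICA and rebuild the ERP.
import numpy as np
import mne

rng = np.random.default_rng(0)
sfreq, n_ch, n_epochs, n_samp = 250.0, 32, 80, 250
info = mne.create_info([f"EEG{i:02d}" for i in range(n_ch)], sfreq, ch_types="eeg")
epochs = mne.EpochsArray(rng.standard_normal((n_epochs, n_ch, n_samp)) * 1e-6, info)

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(epochs)

# Manual approach: inspect component topographies and time courses, then mark
# the CI-artifact component(s) by index (the index here is purely illustrative).
ica.exclude = [0]
clean = ica.apply(epochs.copy())     # reconstruct data without the excluded component
erp = clean.average()                # ERP from the artifact-attenuated epochs
```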
Journal of the Acoustical Society of America | 2005
Robert S. Schlauch; Sharon Miller
Laures and Weismer [JSLHR, 42, 1148 (1999)] reported that speech with natural variation in fundamental frequency (F0) is more intelligible in noise than speech with a flattened F0 contour. Cognitive-linguistic explanations have been offered to account for this drop in intelligibility for the flattened condition, but a lower-level mechanism related to auditory streaming may be responsible. Numerous psychoacoustic studies have demonstrated that modulating a tone enables a listener to segregate it from background sounds. To test these rival hypotheses, speech recognition in noise was measured for sentences with six different F0 contours: unmodified, flattened at the mean, natural but exaggerated, reversed, and frequency modulated (rates of 2.5 and 5.0 Hz). The 180 stimulus sentences were produced by five talkers (30 sentences per condition). Speech recognition for fifteen listeners replicates earlier findings showing that flattening the F0 contour results in a roughly 10% reduction in recognition of key...
Journal of Speech Language and Hearing Research | 2016
Sharon Miller; Yang Zhang; Peggy B. Nelson
Purpose: This study implemented a pretest-intervention-posttest design to examine whether multiple-talker identification training enhanced phonetic perception of the /ba/-/da/ and /wa/-/ja/ contrasts in adult listeners who were deafened postlingually and have cochlear implants (CIs). Method: Nine CI recipients completed 8 hours of identification training using a custom-designed training package. Perception of speech produced by familiar talkers (talkers used during training) and unfamiliar talkers (talkers not used during training) was measured before and after training. Five additional untrained CI recipients completed identical pre- and posttests over the same time course as the trainees to control for procedural learning effects. Results: Perception of the speech contrasts produced by the familiar talkers significantly improved for the trained CI listeners, and effects of perceptual learning transferred to unfamiliar talkers. Such training-induced significant changes were not observed in the control group. Conclusion: The data provide initial evidence of the efficacy of the multiple-talker identification training paradigm for CI users who were deafened postlingually. This pattern of results is consistent with enhanced phonemic categorization of the trained speech sounds.
Ear and Hearing | 2016
Sharon Miller; Yang Zhang; Peggy B. Nelson
Objective: The present training study aimed to examine the fine-scale behavioral and neural correlates of phonetic learning in adult postlingually deafened cochlear implant (CI) listeners. The study investigated whether high variability identification training improved phonetic categorization of the /ba/–/da/ and /wa/–/ja/ speech contrasts and whether any training-related improvements in phonetic perception were correlated with neural markers associated with phonetic learning. It was hypothesized that training would sharpen phonetic boundaries for the speech contrasts and that changes in behavioral sensitivity would be associated with enhanced mismatch negativity (MMN) responses to stimuli that cross a phonetic boundary relative to MMN responses evoked using stimuli from the same phonetic category. Design: A computer-based training program was developed that featured multitalker variability and adaptive listening. The program was designed to help CI listeners attend to the important second formant transition cue that categorizes the /ba/–/da/ and /wa/–/ja/ contrasts. Nine adult CI listeners completed the training and 4 additional CI listeners that did not undergo training were included to assess effects of procedural learning. Behavioral pre-post tests consisted of identification and discrimination of the synthetic /ba/–/da/ and /wa/–/ja/ speech continua. The electrophysiologic MMN response elicited by an across phoneme category pair and a within phoneme category pair that differed by an acoustically equivalent amount was derived at pre-post test intervals for each speech contrast as well. Results: Training significantly enhanced behavioral sensitivity across the phonetic boundary and significantly altered labeling of the stimuli along the /ba/–/da/ continuum. While training only slightly altered identification and discrimination of the /wa/–/ja/ continuum, trained CI listeners categorized the /wa/–/ja/ contrast more efficiently than the /ba/–/da/ contrast across pre-post test sessions. Consistent with behavioral results, pre-post EEG measures showed the MMN amplitude to the across phoneme category pair significantly increased with training for both the /ba/–/da/ and /wa/–/ja/ contrasts, but the MMN was unchanged with training for the corresponding within phoneme category pairs. Significant brain–behavior correlations were observed between changes in the MMN amplitude evoked by across category phoneme stimuli and changes in the slope of identification functions for the trained listeners for both speech contrasts. Conclusions: The brain and behavior data of the present study provide evidence that substantial neural plasticity for phonetic learning in adult postlingually deafened CI listeners can be induced by high variability identification training. These findings have potential clinical implications related to the aural rehabilitation process following receipt of a CI device.
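The slope of an identification function, the behavioral measure correlated with the MMN change above, is typically obtained by fitting a logistic function to labeling proportions along the continuum. Below is a minimal sketch with made-up /ba/-/da/ data (not the study's results).

```python
# Minimal sketch (made-up data): fit a logistic identification function along a
# 9-step continuum and read off the category boundary and slope.
import numpy as np
from scipy.optimize import curve_fit

steps = np.arange(1, 10)                                         # continuum steps 1..9
p_da  = np.array([.02, .05, .10, .25, .55, .80, .92, .97, .99])  # proportion "da" responses

def logistic(x, x0, k):
    """x0 = category boundary (continuum step), k = slope at the boundary."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, steps, p_da, p0=[5.0, 1.0])
print(f"boundary at step {x0:.2f}, slope {k:.2f}")               # steeper k = sharper categorization
```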
Journal of the Acoustical Society of America | 2010
Yang Zhang; Sharon Miller; Tess K. Koerner; Edward Carney
Speech scientists have long proposed that formant-exaggerated speech plays an important role in phonetic learning and language acquisition. However, there has been very little neurophysiological data on how the infant brain and adult brain respond to formant exaggeration in speech. We employed event-related potentials (ERPs) to investigate neural coding of formant-exaggerated speech sounds. Two synthetic /i/ vowels were modeled after infant-directed speech data and presented in alternating blocks to test the effects of formant exaggeration. The fundamental frequencies of the two sounds were kept identical to avoid interference from exaggerated pitch level and range. For adult subjects, non-speech homologs were also created by using the center frequencies of the formants to additionally test whether the effects were speech-specific. In the infants (6- to 12-month-olds), ERP waveforms showed significantly enhanced N250 and sustaining negativity responses for processing formant-exaggerated speech. In adults,...
Hearing Research | 2017
Sharon Miller; Kaci Wathen; Elizabeth Cash; Teresa Pitts; Lynzee Cornell
Highlights: P1 gating significantly predicted Acceptable Noise Level. Greater acceptance of background noise is associated with enhanced sensory gating to repeated auditory stimuli. Cortical inhibition is a key mechanism underlying behavioral Acceptable Noise Level.
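The highlights relate a P1 gating index to the behavioral Acceptable Noise Level (ANL, the most comfortable listening level minus the maximum tolerated background noise level, in dB). One common convention, sketched below with made-up numbers, expresses gating as the S2/S1 amplitude ratio in a paired-stimulus paradigm and correlates it with ANL; the specific index and values used in the study are not given in the highlights, so everything here is illustrative.

```python
# Minimal sketch (made-up numbers): P1 sensory-gating ratio vs. ANL.
import numpy as np
from scipy.stats import pearsonr

p1_s1 = np.array([2.1, 1.8, 2.5, 1.6, 2.9, 2.2])    # P1 amplitude to first stimulus (uV)
p1_s2 = np.array([0.6, 1.1, 0.7, 1.3, 0.8, 1.5])    # P1 amplitude to repeated stimulus (uV)
anl   = np.array([4.0, 9.0, 3.0, 11.0, 5.0, 12.0])  # ANL in dB (smaller = more noise accepted)

gating_ratio = p1_s2 / p1_s1                         # smaller ratio = stronger gating
r, p = pearsonr(gating_ratio, anl)
print(f"r = {r:.2f}, p = {p:.3f}")                   # illustrative association only
```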
Journal of the Acoustical Society of America | 2010
Sharon Miller; Yang Zhang
The present study used auditory event‐related potential (ERP) measures to investigate how the adult brain differentially processes fricative speech sounds with and without the use of a hearing aid. Synthetic stimuli for /sa/, /sha/, /as/, and /ash/ were created to control for the spectral cues in the vowel and consonant portions based on naturally recorded speech. ERP responses were recorded for each sound in an unaided and a hearing aid condition using a randomized block design. At least 160 trials per stimulus were averaged for each sound per subject. The results indicated that (1) the ERP responses in the unaided condition significantly differed from the aided condition, (2) N1 peak amplitudes and latencies to /s/ and /sh/ significantly differed in both unaided and aided conditions as well as in syllable‐initial and syllable final positions, and (3) phonological context significantly affected N1‐P2 responses. Despite some minor differences, these results are consistent with our previous ERP study using...
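N1 peak amplitude and latency measures like those compared across aided and unaided conditions are usually extracted from the averaged waveform within a fixed search window. Below is a minimal sketch on a synthetic ERP; the window limits and waveform are assumptions, not the study's data.

```python
# Minimal sketch (synthetic waveform): N1 peak amplitude and latency from an
# averaged ERP within an assumed 70-160 ms search window.
import numpy as np

fs = 1000.0
t = np.arange(-0.1, 0.5, 1 / fs)                                   # epoch from -100 to 500 ms
erp = -3e-6 * np.exp(-((t - 0.105) ** 2) / (2 * 0.015 ** 2))       # fake N1 near 105 ms
erp += 2e-6 * np.exp(-((t - 0.190) ** 2) / (2 * 0.025 ** 2))       # fake P2 near 190 ms

win = (t >= 0.070) & (t <= 0.160)                                  # N1 search window
i = np.argmin(erp[win])                                            # most negative point in window
n1_amp, n1_lat = erp[win][i], t[win][i]
print(f"N1: {n1_amp * 1e6:.1f} uV at {n1_lat * 1e3:.0f} ms")
```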
Hearing Research | 2010
Aparna Rao; Yang Zhang; Sharon Miller