Rosalie M. Uchanski
Washington University in St. Louis
Publications
Featured research published by Rosalie M. Uchanski.
Journal of the Acoustical Society of America | 1994
Karen Payton; Rosalie M. Uchanski; Louis D. Braida
The effect of articulating clearly on speech intelligibility is analyzed for ten normal-hearing and two hearing-impaired listeners in noisy, reverberant, and combined environments. Clear speech is more intelligible than conversational speech for each listener in every environment. The difference in intelligibility due to speaking style increases as noise and/or reverberation increase. The average difference in intelligibility is 20 percentage points for the normal-hearing listeners and 26 percentage points for the hearing-impaired listeners. Two predictors of intelligibility are used to quantify the environmental degradations: the articulation index (AI) and the speech transmission index (STI). Both are shown to reliably predict performance levels within a speaking style for normal-hearing listeners. The AI is unable to represent the reduction in intelligibility scores due to reverberation for the hearing-impaired listeners. Neither predictor can account for the difference in intelligibility due to speaking style.
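To make the kind of computation behind an articulation-index-style predictor concrete, here is a minimal sketch of a band-importance-weighted audibility index computed from per-band signal-to-noise ratios. It is an illustration only, not the specific AI or STI implementation used in the study; the band SNRs, equal importance weights, and the 30-dB audibility range are illustrative assumptions.

```python
# Minimal sketch of an AI/SII-style intelligibility index (illustrative only;
# not the AI or STI implementation used in the study).
import numpy as np

def audibility_index(snr_db, band_weights, snr_floor=-15.0, snr_ceiling=15.0):
    """Weighted sum of per-band audibility, each band clipped to a 30-dB range."""
    snr_db = np.asarray(snr_db, dtype=float)
    weights = np.asarray(band_weights, dtype=float)
    weights = weights / weights.sum()               # importance weights sum to 1
    audibility = (snr_db - snr_floor) / (snr_ceiling - snr_floor)
    audibility = np.clip(audibility, 0.0, 1.0)      # 0 = inaudible, 1 = fully audible
    return float(np.dot(weights, audibility))

# Hypothetical octave-band SNRs (dB) with equal importance weights.
snrs = [12.0, 6.0, 0.0, -3.0, -9.0]
print(audibility_index(snrs, band_weights=[1, 1, 1, 1, 1]))  # ~0.54
```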
Ear and Hearing | 2003
Rosalie M. Uchanski; Ann E. Geers
Objective The primary objective of this study was to compare select acoustic characteristics of the speech of deaf children who use cochlear implants (young cochlear implant users) with those of children with normal hearing. A secondary objective of this study was to examine the effect, if any, of the deaf child's education (oral versus total communication) on the similarity of these acoustic characteristics to those of normal-hearing age-mates. Design Speech was recorded from 181 young cochlear implant users and from 24 children with normal hearing. All speech was produced by imitation, and consisted of complete sentences. Acoustic measures included voice onset time (/t/, /d/), second formant frequency (/i/, /ɔ/), spectral moments (mean, skew and kurtosis of /s/ and /ʃ/), a nasal manner metric, and durations (of vowels, words, and sentences). Results and Discussion A large percentage (46 to 97%) of the young cochlear implant users produced acoustic characteristics with values within the range found for children with normal hearing. Exceptions were sentence duration and vowel duration in sentence-initial words, for which only 23 and 25%, respectively, of the cochlear implant users had values within the normal range. Additionally, for most of the acoustic measures, significantly more cochlear implant users from oral than from total communication settings had values within the normal range. Conclusions Compared with deaf children with hearing aids (from previous studies by others), deaf children who use cochlear implants have improved speech production skills, as reflected in the acoustic measures of this study. Placement in an oral communication educational setting is also associated with more speech production improvement than placement in a total communication setting.
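The spectral-moment measures named above (mean, skew, and kurtosis of a fricative spectrum) can be computed by treating a magnitude spectrum as a probability distribution over frequency. The sketch below is a generic illustration with synthetic noise standing in for an /s/ frame; it is not the paper's analysis pipeline, and the frame length and window are assumptions.

```python
# Generic sketch of fricative spectral moments (illustrative; not the study's pipeline).
import numpy as np

def spectral_moments(signal, sample_rate):
    """Return spectral mean (Hz), SD (Hz), skewness, and excess kurtosis of one frame."""
    spectrum = np.abs(np.fft.rfft(signal * np.hamming(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    p = spectrum / spectrum.sum()                  # treat the spectrum as a distribution
    mean = np.sum(freqs * p)
    sd = np.sqrt(np.sum(((freqs - mean) ** 2) * p))
    skew = np.sum(((freqs - mean) ** 3) * p) / sd ** 3
    kurt = np.sum(((freqs - mean) ** 4) * p) / sd ** 4 - 3.0
    return mean, sd, skew, kurt

rng = np.random.default_rng(0)
frame = rng.standard_normal(2048)                  # white-noise stand-in for an /s/ frame
print(spectral_moments(frame, sample_rate=44100))
```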
The Annals of otology, rhinology & laryngology. Supplement | 2002
Ann E. Geers; Chris Brenner; Johanna G. Nicholas; Rosalie M. Uchanski; Nancy Tye-Murray; Emily Tobey
This study was performed to investigate factors contributing to auditory, speech, language, and reading outcomes after 4 to 6 years of multichannel cochlear implant use in children with prelingual deafness. The analysis controlled for the effects of child, family, and implant characteristics so that the educational factors most conducive to maximum implant benefit could be identified. We tested 136 children from across the United States and Canada. All were 8 or 9 years of age, had an onset of deafness before 3 years of age, underwent implantation by 5 years of age, and resided in a monolingual English-speaking home environment. Characteristics of the child and the family (primarily nonverbal IQ) accounted for approximately 20% of the variance in outcome after implantation. An additional 24% was accounted for by implant characteristics and 12% by educational variables, particularly communication mode. Oral education appears to be an important educational choice for children who have undergone cochlear implantation before 5 years of age.
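The variance partitioning reported here (child/family, implant, and educational factors each accounting for a share of outcome variance) is the kind of result obtained by entering predictor blocks in stages and recording the increase in R² at each stage. The sketch below illustrates that incremental-R² logic on synthetic data; the variables, coefficients, and data are hypothetical, not the study's.

```python
# Illustrative incremental-R^2 (hierarchical regression) sketch with synthetic data.
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 136                                        # sample size matching the study; data synthetic
child_family = rng.standard_normal((n, 2))     # hypothetical child/family variables (e.g., nonverbal IQ)
implant = rng.standard_normal((n, 2))          # hypothetical implant characteristics
education = rng.standard_normal((n, 1))        # hypothetical educational variable (communication mode)
outcome = (child_family @ [0.5, 0.2] + implant @ [0.6, 0.3]
           + education @ [0.4] + rng.standard_normal(n))

r2_block1 = r_squared(child_family, outcome)
r2_block2 = r_squared(np.hstack([child_family, implant]), outcome)
r2_block3 = r_squared(np.hstack([child_family, implant, education]), outcome)
print(r2_block1, r2_block2 - r2_block1, r2_block3 - r2_block2)  # variance added by each block
```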
The Annals of otology, rhinology & laryngology. Supplement | 2000
Ann E. Geers; Johanna G. Nicholas; Nancy Tye-Murray; Rosalie M. Uchanski; Chris Brenner; Lisa S. Davidson; Gina Toretta; Emily Tobey
In contrast to predictions by Deaf activists, this group of adolescents with and without cochlear implants had strikingly similar identity beliefs. Both groups indicated a high degree of approval of Bicultural identity statements, which reflect a balanced view of the hearing and deaf cultures. Although the sample was small, inspection of the data indicates that the absolute values and distributional characteristics for the DIDS scores of the two groups were highly similar on all scales except the Hearing identity scale. Because many implant users receive audiological benefit, it is not surprising that they describe emulating the hearing majority as a desirable goal.
Brain Research | 2012
Harold Burton; Jill B. Firszt; Timothy A. Holden; Alvin Agato; Rosalie M. Uchanski
We studied activation magnitudes in core, belt, and parabelt auditory cortex in adults with normal hearing (NH) and unilateral hearing loss (UHL) using an interrupted, single-event design and monaural stimulation with random spectrographic sounds. NH participants had one ear blocked and received stimulation on the side matching the intact ear in UHL. The objective was to determine whether the side of deafness affected lateralization and magnitude of evoked blood oxygen level-dependent responses across different auditory cortical fields (ACFs). Regardless of ear of stimulation, NH showed larger contralateral responses in several ACFs. With right ear stimulation in UHL, ipsilateral responses were larger compared to NH in core and belt ACFs, indicating neuroplasticity in the right hemisphere. With left ear stimulation in UHL, only posterior core ACFs showed larger ipsilateral responses, suggesting that most ACFs in the left hemisphere had greater resilience against reduced crossed inputs from a deafferented right ear. Parabelt regions located posterolateral to core and belt auditory cortex showed reduced activation in UHL compared to NH irrespective of right- or left-ear stimulation and lateralization of inputs. Thus, the effect in UHL compared to NH differed by ACF and ear of deafness.
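A common way to summarize the contralateral-versus-ipsilateral balance of evoked responses is a laterality index computed from the response magnitudes in the two hemispheres. The short sketch below shows that standard formula; it is a generic illustration with hypothetical values, not the analysis code from this study.

```python
# Generic laterality-index sketch (hypothetical values; not the study's analysis code).
def laterality_index(contralateral, ipsilateral):
    """+1 = fully contralateral response, -1 = fully ipsilateral, 0 = balanced."""
    return (contralateral - ipsilateral) / (contralateral + ipsilateral)

# Hypothetical BOLD response magnitudes (arbitrary units) for one auditory cortical field.
print(laterality_index(contralateral=1.8, ipsilateral=1.2))  # 0.2
```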
Ear and Hearing | 2013
Ann E. Geers; Lisa S. Davidson; Rosalie M. Uchanski; Johanna G. Nicholas
Objectives: This study documented the ability of experienced pediatric cochlear implant (CI) users to perceive linguistic properties (what is said) and indexical attributes (emotional intent and talker identity) of speech, and examined the extent to which linguistic (LSP) and indexical (ISP) perception skills are related. Preimplant-aided hearing, age at implantation, speech processor technology, CI-aided thresholds, sequential bilateral cochlear implantation, and academic integration with hearing age-mates were examined for their possible relationships to both LSP and ISP skills. Design: Sixty 9- to 12-year-olds, first implanted at an early age (12 to 38 months), participated in a comprehensive test battery that included the following LSP skills: (1) recognition of monosyllabic words at loud and soft levels, (2) repetition of phonemes and suprasegmental features from nonwords, and (3) recognition of key words from sentences presented within a noise background, and the following ISP skills: (1) discrimination of across-gender and within-gender (female) talkers and (2) identification and discrimination of emotional content from spoken sentences. A group of 30 age-matched children without hearing loss completed the nonword repetition, and talker- and emotion-perception tasks for comparison. Results: Word-recognition scores decreased with signal level from a mean of 77% correct at 70 dB SPL to 52% at 50 dB SPL. On average, CI users recognized 50% of key words presented in sentences that were 9.8 dB above background noise. Phonetic properties were repeated from nonword stimuli at about the same level of accuracy as suprasegmental attributes (70 and 75%, respectively). The majority of CI users identified emotional content and differentiated talkers significantly above chance levels. Scores on LSP and ISP measures were combined into separate principal component scores, and these components were highly correlated (r = 0.76). Both LSP and ISP component scores were higher for children who received a CI at the youngest ages, upgraded to more recent CI technology, and had lower CI-aided thresholds. Higher scores, for both LSP and ISP components, were also associated with higher language levels and mainstreaming at younger ages. Higher ISP scores were associated with better social skills. Conclusions: Results strongly support a link between indexical and linguistic properties in perceptual analysis of speech. These two channels of information appear to be processed together in parallel by the auditory system and are inseparable in perception. Better speech performance, for both linguistic and indexical perception, is associated with younger age at implantation and use of more recent speech processor technology. Children with better speech perception demonstrated better spoken language, earlier academic mainstreaming, and placement in more typically sized classrooms (i.e., >20 students). Well-developed social skills were more highly associated with the ability to discriminate the nuances of talker identity and emotion than with the ability to recognize words and sentences through listening. The extent to which early cochlear implantation enabled these early-implanted children to make use of both linguistic and indexical properties of speech influenced not only their development of spoken language, but also their ability to function successfully in a hearing world.
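The finding that most CI users identified emotional content "significantly above chance" is the kind of result one checks with a binomial test against the chance probability of the task. The sketch below shows that generic check for a hypothetical four-alternative emotion-identification task; the trial counts and chance level are illustrative assumptions, not the study's data or procedure.

```python
# Generic above-chance check for a forced-choice task (illustrative numbers only).
from scipy.stats import binomtest

n_trials = 40          # hypothetical number of emotion-identification trials
n_correct = 18         # hypothetical number correct for one listener
chance = 0.25          # chance level assuming a 4-alternative forced choice

result = binomtest(n_correct, n_trials, chance, alternative="greater")
print(f"p(correct) = {n_correct / n_trials:.2f}, p-value vs. chance = {result.pvalue:.4f}")
```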
Attention Perception & Psychophysics | 1998
Rosalie M. Uchanski; Louis D. Braida
Even when the speaker, context, and speaking style are held fixed, the physical properties of naturally spoken utterances of the same speech sound vary considerably. This variability imposes limits on our ability to distinguish between different speech sounds. We present a conceptual framework for relating the ability to distinguish between speech sounds in single-token experiments (in which each speech sound is represented by a single waveform) to resolution in multiple-token experiments. Experimental results indicate that this ability is substantially reduced by an increase in the number of tokens from 1 to 4, but that there is little further reduction when the number of tokens increases to 16. Furthermore, although there is little relation between the ability to distinguish between a given pair of tokens in the multiple-token and the single-token experiments, there is a modest correlation between the ability to distinguish specific vowel tokens in the 4- and 16-token experiments. These results suggest that while listeners use a multiplicity of cues to distinguish between single tokens of a pair of vowel sounds, so that performance is highly variable both across tokens and listeners, they use a smaller set when distinguishing between populations of naturally produced vowel tokens, so that variability is reduced. The effectiveness of the cues used in the latter case is limited more by internal noise than by the variability of the cues themselves.
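Discrimination performance in token experiments of this kind is commonly summarized with the signal-detection measure d', computed from hit and false-alarm rates. The sketch below shows that standard computation; the rates are hypothetical and this is not the authors' analysis code.

```python
# Standard d' computation from hit and false-alarm rates (illustrative values).
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for distinguishing two vowel tokens.
print(d_prime(hit_rate=0.85, false_alarm_rate=0.30))  # ~1.56
```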
Ear and Hearing | 2014
Nai Yuan Nicholas Chang; Meghan M. Hiss; Mark C. Sanders; Osarenoma U. Olomu; Paul R. MacNeilage; Rosalie M. Uchanski; Timothy E. Hullar
Objectives: Quantification of the perceptual thresholds to vestibular stimuli may offer valuable complementary information to that provided by measures of the vestibulo-ocular reflex (VOR). Perceptual thresholds could be particularly important in evaluating some subjects, such as the elderly, who might have a greater potential of central as well as peripheral vestibular dysfunction. The authors hypothesized that perceptual detection and discrimination thresholds would worsen with aging, and that there would be a poor relation between thresholds and traditional measures of the angular VOR represented by gain and phase on rotational chair testing. Design: The authors compared the detection and discrimination thresholds of 19 younger and 16 older adults in response to earth-vertical, 0.5 Hz rotations. Perceptual results of the older subjects were then compared with the gain and phase of their VOR in response to earth-vertical rotations over the frequency range from 0.025 to 0.5 Hz. Results: Detection thresholds were found to be 0.69 ± 0.29 degree/sec (mean ± standard deviation) for the younger participants and 0.81 ± 0.42 degree/sec for older participants. Discrimination thresholds in younger and older adults were 4.83 ± 1.80 degree/sec and 4.33 ± 1.57 degree/sec, respectively. There was no difference in either measure between age groups. Perceptual thresholds were independent of the gain and phase of the VOR. Conclusions: These results indicate that there is no inevitable loss of vestibular perception with aging. Elevated thresholds among the elderly are therefore suggestive of pathology rather than normal consequences of aging. Furthermore, perceptual thresholds offer additional insight, beyond that supplied by the VOR alone, into vestibular function.
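Perceptual detection thresholds like these are typically estimated by fitting a psychometric function to the proportion of detected rotations as a function of peak velocity and reading off the velocity at a criterion performance level. The sketch below fits a cumulative-Gaussian psychometric function to hypothetical data; it is a generic illustration, not the study's stimuli or procedure.

```python
# Generic psychometric-function threshold estimate (hypothetical data; not the study's).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(velocity, threshold, spread):
    """Cumulative-Gaussian probability of detecting a rotation of given peak velocity."""
    return norm.cdf(velocity, loc=threshold, scale=spread)

velocities = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.5])           # deg/s, hypothetical
p_detected = np.array([0.10, 0.30, 0.55, 0.75, 0.90, 0.98])      # hypothetical proportions

params, _ = curve_fit(psychometric, velocities, p_detected, p0=[0.7, 0.3])
print(f"estimated detection threshold ~ {params[0]:.2f} deg/s")
```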
Otology & Neurotology | 2016
Laura K. Holden; Jill B. Firszt; Ruth M. Reeder; Rosalie M. Uchanski; Timothy A. Holden
Objective: To identify primary biographic and audiologic factors contributing to cochlear implant (CI) performance variability in quiet and noise by controlling electrode array type and electrode position within the cochlea. Background: Although CI outcomes have improved over time, considerable outcome variability still exists. Biographic, audiologic, and device-related factors have been shown to influence performance. Examining CI recipients with consistent array type and electrode position may allow focused investigation into outcome variability resulting from biographic and audiologic factors. Methods: Thirty-nine adults (40 ears) implanted for at least 6 months with a perimodiolar electrode array known (via computed tomography [CT] imaging) to be in scala tympani participated. Test materials, administered CI only, included monosyllabic words, sentences in quiet and noise, and spectral ripple discrimination. Results: In quiet, scores were high with mean word and sentence scores of 76 and 87%, respectively; however, sentence scores decreased by an average of 35 percentage points when noise was added. A principal components (PC) analysis of biographic and audiologic factors found three distinct factors, PC1 Age, PC2 Duration, and PC3 Pre-op Hearing. PC1 Age was the only factor that correlated, albeit modestly, with speech recognition in quiet and noise. Spectral ripple discrimination strongly correlated with speech measures. Conclusion: For these recipients with consistent electrode position, PC1 Age was related to speech recognition performance. Consistent electrode position may have contributed to high speech understanding in quiet. Inter-subject variability in noise may have been influenced by auditory/cognitive processing, known to decline with age, and mechanisms that underlie spectral resolution ability.
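The principal-components step described here (reducing correlated biographic and audiologic variables to a few factors, then correlating a factor with speech recognition) can be sketched generically as below. The variables, distributions, and data are synthetic placeholders, not the study's measurements or results.

```python
# Generic PCA-then-correlate sketch on synthetic data (not the study's variables or results).
import numpy as np

rng = np.random.default_rng(2)
n = 40                                     # ears, matching the study's sample size; data synthetic
age = rng.normal(60, 12, n)                # hypothetical age-related variable
duration = rng.normal(20, 8, n)            # hypothetical duration-of-deafness variable
preop_hearing = rng.normal(90, 10, n)      # hypothetical preoperative hearing variable
X = np.column_stack([age, duration, preop_hearing])

# PCA via SVD of the standardized variables.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xz, full_matrices=False)
pc_scores = Xz @ Vt.T                      # component scores, columns ordered by variance explained

speech_score = rng.normal(75, 15, n)       # hypothetical word-recognition scores
r = np.corrcoef(pc_scores[:, 0], speech_score)[0, 1]
print(f"correlation of first component with speech scores: r = {r:.2f}")
```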
Laryngoscope | 2012
Nai Yuan N Chang; Rosalie M. Uchanski; Timothy E. Hullar
Integration of balance‐related cues from the vestibular and other sensory systems requires that they be perceived simultaneously despite arriving asynchronously at the central nervous system. Failure to perform temporal integration of multiple sensory signals represents a novel mechanism to explain symptoms in patients with imbalance. This study tested the ability of normal observers to compensate for sensory asynchronies between vestibular and auditory inputs.
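One common way to quantify perceived simultaneity across modalities is to fit a Gaussian to the proportion of "simultaneous" responses as a function of stimulus-onset asynchrony, reading off the point of subjective simultaneity and the width of the temporal binding window. The sketch below does that on hypothetical vestibular-auditory data; it is an assumed, generic procedure for illustration, not necessarily the task or analysis used in this study.

```python
# Generic simultaneity-judgment fit on hypothetical data (not the study's method or results).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amplitude, pss, width):
    """Proportion of 'simultaneous' responses vs. stimulus-onset asynchrony (ms)."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

soas = np.array([-300, -200, -100, 0, 100, 200, 300])                   # auditory lead (-) to lag (+), ms
p_simultaneous = np.array([0.15, 0.40, 0.80, 0.95, 0.85, 0.45, 0.20])    # hypothetical proportions

params, _ = curve_fit(gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 150.0])
print(f"point of subjective simultaneity ~ {params[1]:.0f} ms, window width ~ {params[2]:.0f} ms")
```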