Publication


Featured research published by Ruth Y. Litovsky.


Journal of the Acoustical Society of America | 2004

Cochlear implant speech recognition with speech maskers

Ginger S. Stickney; Fan-Gang Zeng; Ruth Y. Litovsky; Peter F. Assmann

Speech recognition performance was measured in normal-hearing and cochlear-implant listeners with maskers consisting of either steady-state speech-spectrum-shaped noise or a competing sentence. Target sentences from a male talker were presented in the presence of one of three competing talkers (same male, different male, or female) or speech-spectrum-shaped noise generated from this talker at several target-to-masker ratios. For the normal-hearing listeners, target-masker combinations were processed through a noise-excited vocoder designed to simulate a cochlear implant. With unprocessed stimuli, a normal-hearing control group maintained high levels of intelligibility down to target-to-masker ratios as low as 0 dB and showed a release from masking, producing better performance with single-talker maskers than with steady-state noise. In contrast, no masking release was observed in either implant or normal-hearing subjects listening through an implant simulation. The performance of the simulation and implant groups did not improve when the single-talker masker was a different talker compared to the same talker as the target speech, as was found in the normal-hearing control. These results are interpreted as evidence for a significant role of informational masking and modulation interference in cochlear implant speech recognition with fluctuating maskers. This informational masking may originate from increased target-masker similarity when spectral resolution is reduced.
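The implant simulation named above is a noise-excited vocoder. As a rough illustration of the general technique (not the authors' exact processor), a minimal sketch might look like the following; the channel count, filter order, and envelope-extraction method are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=4, f_lo=100.0, f_hi=6000.0):
    """Minimal noise-excited vocoder: split the input into log-spaced
    frequency bands, extract each band's temporal envelope, and use it
    to modulate band-limited noise. Fine spectral detail within each
    band is discarded, coarsely mimicking cochlear-implant processing."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)                         # analysis band
        env = np.abs(hilbert(band))                    # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(x)))  # band-limited noise
        out += env * carrier
    return out
```

Reducing `n_channels` reduces spectral resolution, which is the manipulation the abstract links to increased target-masker similarity and informational masking.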


Ear and Hearing | 2006

Simultaneous Bilateral Cochlear Implantation in Adults: A Multicenter Clinical Study

Ruth Y. Litovsky; Aaron J. Parkinson; Jennifer Arcaroli; Carol A. Sammeth

Objective: To determine the efficacy of “simultaneous” bilateral cochlear implantation (both implants placed during a single surgical procedure) by comparing bilateral and unilateral implant use in a large number of adult subjects tested at multiple sites. Design: Prospective study of 37 adults with postlinguistic onset of bilateral, severe to profound sensorineural hearing loss. Performance with the bilateral cochlear implants, using the same speech processor type and speech processing strategy, was compared with performance using the left implant alone and the right implant alone. Speech understanding in quiet (CNCs and HINT sentences) and in noise (BKB-SIN Test) was evaluated at several postactivation time intervals, with speech presented at 0° azimuth and noise at either 0°, 90° right, or 90° left in the horizontal plane. APHAB questionnaire data were collected after each subject underwent a 3-wk “bilateral deprivation” period, during which they wore only the speech processor that produced the best score during unilateral testing, and also after a period of listening again with the bilateral implants. Results: By 6 mo postactivation, a significant advantage for speech understanding in quiet was found in the bilateral listening mode compared with either unilateral listening mode. For speech understanding in noise, the largest and most robust bilateral benefit occurred when the subject was able to take advantage of the head shadow effect; i.e., results were significantly better for bilateral listening compared with the unilateral condition when the ear opposite the side of the noise was added to create the bilateral condition. This bilateral benefit was seen on at least one of the two unilateral-ear comparisons for nearly all (32/34) subjects. Bilateral benefit was also found for a few subjects in spatial configurations that evaluated binaural redundancy and binaural squelch effects. A subgroup of subjects with asymmetrical unilateral implant performance was, overall, similar in performance to subjects with symmetrical hearing. The questionnaire data indicated that bilateral users perceive their own performance to be better with bilateral cochlear implants than when using a single device. Conclusions: Findings with a large patient group are in agreement with previous reports on smaller groups, showing that, overall, bilateral implantation offers the majority of patients advantages when listening in simulated adverse conditions.


International Journal of Audiology | 2006

Benefits of bilateral cochlear implants and/or hearing aids in children

Ruth Y. Litovsky; Patti M. Johnstone; Shelly Godar

This study evaluated functional benefits from bilateral stimulation in 20 children aged 4 to 14 years: 10 used two cochlear implants (CIs) and 10 used one CI and one hearing aid (HA). Localization acuity was measured with the minimum audible angle (MAA). Speech intelligibility was measured in quiet and in the presence of two-talker competing speech using the CRISP forced-choice test. Results show that both groups perform similarly when speech reception thresholds are evaluated. However, there appears to be a benefit (improved MAA and speech reception thresholds) from wearing two devices compared with a single device, and this benefit is significantly greater in the group with two CIs than in the bimodal group. Individual variability also suggests that some children perform similarly to normal-hearing children, while others clearly do not. Future advances in binaural fitting strategies and improved speech processing schemes that maximize binaural sensitivity will no doubt contribute to increasing the binaurally driven advantages in persons with bilateral CIs.


Journal of the Acoustical Society of America | 1999

Speech intelligibility and localization in a multi-source environment

Monica L. Hawley; Ruth Y. Litovsky; H. Steven Colburn

Natural environments typically contain sound sources other than the source of interest that may interfere with the ability of listeners to extract information about the primary source. Studies of speech intelligibility and localization by normal-hearing listeners in the presence of competing speech are reported on in this work. One, two or three competing sentences [IEEE Trans. Audio Electroacoust. 17(3), 225-246 (1969)] were presented from various locations in the horizontal plane in several spatial configurations relative to a target sentence. Target and competing sentences were spoken by the same male talker and at the same level. All experiments were conducted both in an actual sound field and in a virtual sound field. In the virtual sound field, both binaural and monaural conditions were tested. In the speech intelligibility experiment, there were significant improvements in performance when the target and competing sentences were spatially separated. Performance was similar in the actual sound-field and virtual sound-field binaural listening conditions for speech intelligibility. Although most of these improvements are evident monaurally when using the better ear, binaural listening was necessary for large improvements in some situations. In the localization experiment, target source identification was measured in a seven-alternative absolute identification paradigm with the same competing sentence configurations as for the speech study. Performance in the localization experiment was significantly better in the actual sound-field than in the virtual sound-field binaural listening conditions. Under binaural conditions, localization performance was very good, even in the presence of three competing sentences. Under monaural conditions, performance was much worse. For the localization experiment, there was no significant effect of the number or configuration of the competing sentences tested. 
For these experiments, the performance in the speech intelligibility experiment was not limited by localization ability.


Ear and Hearing | 2006

Bilateral cochlear implants in children: Localization acuity measured with minimum audible angle

Ruth Y. Litovsky; Patti M. Johnstone; Shelly Godar; Smita Agrawal; Aaron J. Parkinson; Robert W. Peters; Jennifer Lake

Objective: To evaluate sound localization acuity in a group of children who received bilateral (BI) cochlear implants in sequential procedures and to determine the extent to which BI auditory experience affects sound localization acuity. In addition, to investigate the extent to which a hearing aid in the nonimplanted ear can also provide benefits on this task. Design: Two groups of children participated, 13 with BI cochlear implants (cochlear implant + cochlear implant), ranging in age from 3 to 16 yrs, and six with a hearing aid in the nonimplanted ear (cochlear implant + hearing aid), ages 4 to 14 yrs. Testing was conducted in large sound-treated booths with loudspeakers positioned on a horizontal arc with a radius of 1.5 m. Stimuli were spondaic words recorded with a male voice. Stimulus levels typically averaged 60 dB SPL and were randomly roved between 56 and 64 dB SPL (±4 dB rove); in a few instances, levels were held fixed (60 dB SPL). Testing was conducted by using a “listening game” platform via computerized interactive software, and the ability of each child to discriminate sounds presented to the right or left was measured for loudspeakers subtending various angular separations. Minimum audible angle thresholds were measured in the BI (cochlear implant + cochlear implant or cochlear implant + hearing aid) listening mode and under monaural conditions. Results: Approximately 70% (9/13) of children in the cochlear implant + cochlear implant group discriminated left/right for source separations of ≤20°, and, of those, 77% (7/9) performed better when listening bilaterally than with either cochlear implant alone. Several children were also able to perform the task when using a single cochlear implant, under some conditions. Minimum audible angle thresholds were better in the first cochlear implant than the second cochlear implant listening mode for nearly all (8/9) subjects. 
Repeated testing of a few individual subjects over a 2-yr period suggests that robust improvements in performance occurred with increased auditory experience. Children who wore hearing aids in the nonimplanted ear were at times also able to perform the task. Average group performance was worse than that of the children with BI cochlear implants when both ears were activated (cochlear implant + hearing aid versus cochlear implant + cochlear implant) but not significantly different when listening with a single cochlear implant. Conclusions: Children with sequential BI cochlear implants represent a unique population of individuals who have undergone variable amounts of auditory deprivation in each ear. Our findings suggest that many but not all of these children perform better on measures of localization acuity with two cochlear implants compared with one and are better at the task than children using the cochlear implant + hearing aid. These results must be interpreted with caution, because benefits on other tasks as well as the long-term benefits of BI cochlear implants are yet to be fully understood. The factors that might contribute to such benefits must be carefully evaluated in large populations of children using a variety of measures.


Ear and Hearing | 2009

Spatial Hearing and Speech Intelligibility in Bilateral Cochlear Implant Users

Ruth Y. Litovsky; Aaron J. Parkinson; Jennifer Arcaroli

Objective: The abilities to localize sounds and segregate speech from interfering sounds in a complex auditory environment were studied in a group of adults who use bilateral cochlear implants. The first aim of the study was to investigate the change in speech intelligibility under bilateral and unilateral listening modes as a function of bilateral experience during the first 6 mo of activation. The second aim was to look at whether localization and speech intelligibility in the presence of interfering speech are correlated and if the relationship is specific to the bilateral listening mode. The third aim was to examine whether sound lateralization (right versus left) emerges before sound localization within a hemifield. Design: Participants were 17 native English speaking adults with postlingual deafness. All subjects received the Nucleus 24 Contour implant in both ears, either during the same surgery or during two separate surgeries that were no more than 1 mo apart. Both devices for each subject were activated at the same time, regardless of surgical approach. Speech intelligibility was measured at 3 and 6 mo after activation. Target speech was presented at 0° in front. Testing was conducted in quiet and in the presence of four-talker babble. The babble was located on the right, on the left, or in front (colocated with the target). Sound localization abilities were measured at the 3 mo interval. All testing was conducted under three listening modes: left ear alone, right ear alone, or bilateral. Results: On the speech-in-babble task, benefit of listening with two ears compared with one was greater when going from 3 to 6 mo of experience. This was evident when the target speech and interfering speech were spatially separated, but not when they were presented from the same location. At 3 mo postactivation of bilateral hearing, 82% of subjects demonstrated bilateral benefit when right/left discrimination was evaluated. 
In contrast, 47% of subjects showed a bilateral benefit when sound localization was evaluated, suggesting that directional hearing might emerge in a two-step process beginning with discrimination and converging on more fine-grained localization. The bilateral speech intelligibility scores were positively correlated with sound localization abilities, so that listeners who were better able to hear speech in babble were generally better able to identify source locations. Conclusions: During the early stages of bilateral hearing through cochlear implants in postlingually deafened adults, there is an early emergence of spatial hearing skills. Although nearly all subjects can discriminate source locations to the right versus left, less than half are able to perform the more difficult task of identifying source locations in a multispeaker array. Benefits for speech intelligibility with one versus two implants improve with time, in particular when spatial cues are used to segregate speech and competing noise. Localization and speech-in-noise abilities in this group of patients are somewhat correlated.


Otology & Neurotology | 2007

Importance of age and postimplantation experience on speech perception measures in children with sequential bilateral cochlear implants.

B. Robert Peters; Ruth Y. Litovsky; Aaron J. Parkinson; Jennifer Lake

Objectives: Clinical trials in which children received bilateral cochlear implants in sequential operations were conducted to analyze the extent to which bilateral implantation offers benefits on a number of measures. The present investigation was particularly focused on measuring the effects of age at implantation and experience after activation of the second implant on speech perception performance. Study Design: Thirty children aged 3 to 13 years were recipients of 2 cochlear implants, received in sequential operations, a minimum of 6 months apart. All children received their first implant before 5 years of age and had acquired speech perception capabilities with the first device. They were divided into 3 age groups on the basis of age at time of second ear implantation: Group I, 3 to 5 years; Group II, 5.1 to 8 years; and Group III, 8.1 to 13 years. Speech perception measures in quiet included the Multisyllabic Lexical Neighborhood Test (MLNT) for Group I, the Lexical Neighborhood Test (LNT) for Groups II and III, and the Hearing In Noise Test for Children (HINT-C) sentences in quiet for Group III. Speech perception in noise was assessed using the Children's Realistic Intelligibility and Speech Perception (CRISP) test. Testing was performed preoperatively and again postactivation of the second implant at 3, 6, and 12 months (CRISP at 3 and 9 mo) in both the unilateral and bilateral conditions in a repeated-measures study design. Two-way repeated-measures analysis of variance was used to analyze statistical significance among device configurations and performance over time. Setting: US Multicenter. Results: Results for speech perception in quiet show that children implanted sequentially acquire open-set speech perception in the second ear relatively quickly (within 6 mo).
However, children younger than 8 years do so more rapidly and to a higher level of speech perception ability at 12 months than older children (mean second ear MLNT/LNT scores at 12 months: Group I, 83.9%; range, 71-96%; Group II, 59.5%; range, 40-88%; Group III, 32%; range, 12-56%). The second-ear mean HINT-C score for Group III children remained far less than that of the first ear even after 12 months of device use (44 versus 89%; t, 6.48; p < 0.001; critical value, 0.025). Speech intelligibility for spondees in noise was significantly better under bilateral conditions than with either ear alone when all children were analyzed as a single group and for Group III children. At the 9-month test interval, performance in the bilateral configuration was significantly better for all noise conditions (13.2% better for noise at first cochlear implant, 6.8% better for the noise front and noise at second cochlear implant conditions, t = 2.32, p = 0.024, critical level = 0.05 for noise front; t = 3.75, p < 0.0001, critical level = 0.05 for noise at first implant; t = 2.73, p = 0.008, critical level = 0.05 for noise at second implant side). The bilateral benefit in noise increased with time from 3 to 9 months after activation of the second implant. This bilateral advantage is greatest when noise is directed toward the first implanted ear, indicating that the head shadow effect is the most effective binaural mechanism. The bilateral condition produced small improvements in speech perception in quiet and for individual Group I and Group II patient results in noise that, in view of the relatively small number of subjects tested, do not reach statistical significance. Conclusion: Sequential bilateral cochlear implantation in children of diverse ages has the potential to improve speech perception abilities in the second implanted ear and to provide access to the use of binaural mechanisms such as the head shadow effect. 
The improvement unfolds over time and continues to grow during the 6 to 12 months after activation of the second implant. Younger children in this study achieved higher open-set speech perception scores in the second ear, but older children still demonstrate bilateral benefit in noise. Determining the long-term impact and cost-effectiveness that results from such potential capabilities in bilaterally implanted children requires additional study with larger groups of subjects and more prolonged monitoring.


Journal of Experimental Psychology: Human Perception and Performance | 1991

Object Representation Guides Infants' Reaching in the Dark

Rachel K. Clifton; Philippe Rochat; Ruth Y. Litovsky; Eve E. Perris

Infants were presented with two sounding objects of different sizes in light and dark, with sound cueing the object's identity. Reaching behavior was assessed to determine if object size influenced preparation for grasping the object. In both light and dark, infants aligned their hands differently when contacting the large object than the small object: they reached with both hands extended for the large object and with one hand more extended for the small object. Infants contacted the large object more frequently on the bottom and sides rather than the top, where the sound source was located. Reaching in the dark by 6 1/2-month-olds is not merely directed toward a sound source but rather shows preparation in relation to the object's size. These findings were interpreted as evidence that mental representation of previously seen objects can guide subsequent motor action by 6 1/2-month-old infants.


Journal of the Acoustical Society of America | 2005

Speech intelligibility and spatial release from masking in young children

Ruth Y. Litovsky

Children between the ages of 4 and 7 and adults were tested in free field on speech intelligibility using a four-alternative forced-choice paradigm with spondees. Target speech was presented from front (0°); speech or modulated speech-shaped-noise competitors were either in front or on the right (90°). Speech reception thresholds were measured adaptively using a three-down/one-up algorithm. The primary difference between children and adults was seen in elevated thresholds in children in quiet and in all masked conditions. For both age groups, masking was greater with the speech-noise versus speech competitor and with two versus one competitor(s). Masking was also greater when the competitors were located in front compared with the right. The amount of masking did not differ across the two age groups. Spatial release from masking was similar in the two age groups, except in the one-speech condition, when it was greater in children than adults. These findings suggest that, similar to adults, young children are able to utilize spatial and/or head shadow cues to segregate sounds in noisy environments. The potential utility of the measures used here for studying hearing-impaired children is also discussed.
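The three-down/one-up adaptive rule named above is a standard Levitt staircase, which converges near the 79.4%-correct point of the psychometric function. A minimal sketch of the generic procedure follows; the function names, step size, and stopping rule are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

def staircase_3down_1up(psychometric, start_level, step_db=4.0,
                        n_reversals=8, rng=None):
    """Generic three-down/one-up adaptive track: the level drops after
    three consecutive correct responses and rises after any incorrect
    response, so the track converges near the 79.4%-correct point.
    `psychometric(level)` returns the probability of a correct response
    at a given level (a stand-in for a real listener). The threshold is
    estimated as the mean of the reversal levels."""
    rng = rng or np.random.default_rng(0)
    level = float(start_level)
    run = 0            # consecutive-correct counter
    last_dir = 0       # +1 = last move up, -1 = last move down
    reversals = []
    while len(reversals) < n_reversals:
        correct = rng.random() < psychometric(level)
        if correct:
            run += 1
            if run == 3:                     # three in a row: make it harder
                run = 0
                if last_dir == +1:
                    reversals.append(level)  # direction changed: a reversal
                last_dir = -1
                level -= step_db
        else:                                # any miss: make it easier
            run = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level += step_db
    return float(np.mean(reversals))
```

In an actual experiment the level would drive stimulus presentation to the listener; here a simulated psychometric function stands in for the child's or adult's responses.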


Journal of the Acoustical Society of America | 2004

The role of head-induced interaural time and level differences in the speech reception threshold for multiple interfering sound sources

John Francis Culling; Monica L. Hawley; Ruth Y. Litovsky

Three experiments investigated the roles of interaural time differences (ITDs) and interaural level differences (ILDs) in spatial unmasking in multi-source environments. In experiment 1, speech reception thresholds (SRTs) were measured in virtual-acoustic simulations of an anechoic environment with three interfering sound sources of either speech or noise. The target source lay directly ahead, while the three interfering sources were (1) all at the target's location (0°, 0°, 0°), (2) at locations distributed across both hemifields (-30°, 60°, 90°), (3) at locations in the same hemifield (30°, 60°, 90°), or (4) co-located in one hemifield (90°, 90°, 90°). Sounds were convolved with head-related impulse responses (HRIRs) that were manipulated to remove individual binaural cues. Three conditions used HRIRs with (1) both ILDs and ITDs, (2) only ILDs, and (3) only ITDs. The ITD-only condition produced the same pattern of results across spatial configurations as the combined cues, but with smaller differences between spatial configurations. The ILD-only condition yielded similar SRTs for the (-30°, 60°, 90°) and (0°, 0°, 0°) configurations, as expected for best-ear listening. In experiment 2, pure-tone BMLDs were measured at third-octave frequencies against the ITD-only, speech-shaped noise interferers of experiment 1. These BMLDs were 4-8 dB at low frequencies for all spatial configurations. In experiment 3, SRTs were measured for speech in diotic, speech-shaped noise. Noises were filtered to reduce the spectrum level at each frequency according to the BMLDs measured in experiment 2. SRTs were as low as or lower than those of the corresponding ITD-only conditions from experiment 1.
Thus, an explanation of speech understanding in complex listening environments based on the combination of best-ear listening and binaural unmasking (without involving sound-localization) cannot be excluded.
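The virtual sound field conditions described above rest on HRIR convolution. A minimal sketch of how one target plus several interferers might be rendered binaurally is shown below; the function names are illustrative, and the study's measured HRIRs and cue-removal manipulations are not reproduced:

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono source at a virtual location by convolving it with
    the left- and right-ear head-related impulse responses (HRIRs)."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)])

def mix_scene(target, target_hrirs, interferers, interferer_hrirs):
    """Sum one spatialized target with spatialized interferers, as in a
    virtual presentation of a target sentence plus competing talkers.
    All signals, and all HRIRs, are assumed to share common lengths."""
    scene = spatialize(target, *target_hrirs)
    for sig, (hl, hr) in zip(interferers, interferer_hrirs):
        scene = scene + spatialize(sig, hl, hr)
    return scene  # shape: (2, len(signal) + len(hrir) - 1)
```

Swapping in HRIR sets with the ITD or ILD zeroed out would then isolate the contribution of each binaural cue, which is the manipulation at the heart of experiment 1.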

Collaboration


Dive into Ruth Y. Litovsky's collaboration.

Top Co-Authors

Alan Kan (University of Wisconsin-Madison)

Shelly Godar (University of Wisconsin-Madison)

Heath G. Jones (University of Colorado Denver)

Sara Misurelli (University of Wisconsin-Madison)

Tanvi Thakkar (University of Wisconsin-Madison)

Gary L. Jones (University of Wisconsin-Madison)

Matthew Winn (University of Washington)