Publication


Featured research published by Elin Roverud.


Journal of the Acoustical Society of America | 2010

The time course of cochlear gain reduction measured using a more efficient psychophysical technique

Elin Roverud; Elizabeth A. Strickland

In a previous study it was shown that an on-frequency precursor intended to activate the medial olivocochlear reflex (MOCR) at the signal frequency reduces the gain estimated from growth-of-masking (GOM) functions. This is called the temporal effect (TE). In Expt. 1 a shorter method of measuring this change in gain is established. GOM functions were measured with an on- and off-frequency precursor presented before the masker and signal, and used to estimate Input/Output functions. The change in gain estimated in this way was very similar to that estimated from comparing two points measured with a single fixed masker level on the lower legs of the GOM functions. In Expt. 2, the TE was measured as a function of precursor duration and signal delay. For short precursor durations and short delays the TE increased (buildup) or remained constant as delay increased, then decreased. The TE also increased with precursor duration for the shortest delay. The results were fitted with a model based on the time course of the MOCR. The model fitted the data well, and predicted the buildup. This buildup is not consistent with exponential decay predicted by neural adaptation or persistence of excitation.


Journal of the Acoustical Society of America | 2016

Determining the energetic and informational components of speech-on-speech masking

Gerald Kidd; Christine R. Mason; Jayaganesh Swaminathan; Elin Roverud; Kameron K. Clayton; Virginia Best

Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from masking, included different-sex target and masker talkers, time-reversal of the masker speech, and spatial separation of the maskers from the target. Significant release from masking was found for all comparison conditions. To determine whether these reductions in masking could be attributed to differences in energetic masking, ideal time-frequency segregation (ITFS) processing was applied so that the time-frequency units where the masker energy dominated the target energy were removed. The remaining target-dominated “glimpses” were reassembled as the stimulus. Speech reception thresholds measured using these resynthesized ITFS-processed stimuli were the same for the reference and comparison conditions supporting the conclusion that the amount of energetic masking across conditions was the same. These results indicated that the large release from masking found under all comparison conditions was due primarily to a reduction in informational masking. Furthermore, the large individual differences observed generally were correlated across the three masking release conditions.
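The ideal time-frequency segregation (ITFS) processing described above can be sketched in a few lines. The version below is a minimal illustration, not the study's implementation: it frames the target and masker signals, builds a binary mask keeping only time-frequency units where the target dominates the masker, applies that mask to the mixture, and resynthesizes the retained "glimpses" by overlap-add. Frame length, hop size, the local criterion, and the function name are all illustrative assumptions.

```python
import numpy as np

def itfs_glimpses(target, masker, frame_len=256, hop=128, lc_db=0.0):
    """Ideal time-frequency segregation (ITFS) sketch.

    Keeps only time-frequency units where the target's level exceeds the
    masker's by the local criterion lc_db, then resynthesizes the retained
    target-dominated "glimpses" by weighted overlap-add.
    """
    win = np.hanning(frame_len)
    n = min(len(target), len(masker))
    out = np.zeros(n)
    norm = np.zeros(n)
    for start in range(0, n - frame_len + 1, hop):
        t = target[start:start + frame_len] * win
        m = masker[start:start + frame_len] * win
        T, M = np.fft.rfft(t), np.fft.rfft(m)
        # Binary mask: 1 where the target dominates the masker locally.
        snr_db = 20 * np.log10((np.abs(T) + 1e-12) / (np.abs(M) + 1e-12))
        mask = (snr_db > lc_db).astype(float)
        # Apply the mask to the mixture spectrum and invert the frame.
        frame = np.fft.irfft((T + M) * mask, frame_len)
        out[start:start + frame_len] += frame * win
        norm[start:start + frame_len] += win ** 2
    return out / np.maximum(norm, 1e-8)
```

Because the mask is computed from the separately known target and masker signals, this is "ideal" segregation: it upper-bounds what a listener could achieve by glimpsing, which is what lets the study equate energetic masking across conditions.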


The Journal of Neuroscience | 2016

Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening.

Jayaganesh Swaminathan; Christine R. Mason; Timothy Streeter; Virginia Best; Elin Roverud; Gerald Kidd

While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called “cocktail party” problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intelligibility, an effect known as spatial release from masking (SRM). This study assessed the contributions of two stimulus properties that vary with separation of sound sources, binaural envelope (ENV) and temporal fine structure (TFS), to SRM in normal-hearing (NH) human listeners. Target speech was presented from the front and speech maskers were either colocated with or symmetrically separated from the target in azimuth. The target and maskers were presented either as natural speech or as “noise-vocoded” speech in which the intelligibility was conveyed only by the speech ENVs from several frequency bands; the speech TFS within each band was replaced with noise carriers. The experiments were designed to preserve the spatial cues in the speech ENVs while retaining/eliminating them from the TFS. This was achieved by using the same/different noise carriers in the two ears. A phenomenological auditory-nerve model was used to verify that the interaural correlations in TFS differed across conditions, whereas the ENVs retained a high degree of correlation, as intended. Overall, the results from this study revealed that binaural TFS cues, especially for frequency regions below 1500 Hz, are critical for achieving SRM in NH listeners. Potential implications for studying SRM in hearing-impaired listeners are discussed. SIGNIFICANCE STATEMENT Acoustic signals received by the auditory system pass first through an array of physiologically based band-pass filters. 
Conceptually, at the output of each filter, there are two principal forms of temporal information: slowly varying fluctuations in the envelope (ENV) and rapidly varying fluctuations in the temporal fine structure (TFS). The importance of these two types of information in everyday listening (e.g., conversing in a noisy social situation; the “cocktail-party” problem) has not been established. This study assessed the contributions of binaural ENV and TFS cues for understanding speech in multiple-talker situations. Results suggest that, whereas the ENV cues are important for speech intelligibility, binaural TFS cues are critical for perceptually segregating the different talkers and thus for solving the cocktail party problem.
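The noise-vocoding manipulation described above can be sketched as follows. This is a generic noise vocoder, not the study's exact stimulus-generation code: each band's slowly varying envelope (ENV) is extracted and imposed on a band-limited noise carrier, discarding the band's temporal fine structure (TFS). The band edges, filter orders, and envelope cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, edges=(100, 500, 1500, 4000), env_cut=50.0, rng=None):
    """Noise-vocoding sketch: keep each band's envelope (ENV), replace its
    temporal fine structure (TFS) with a band-limited noise carrier."""
    rng = np.random.default_rng(rng)
    y = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, x)
        env = np.abs(hilbert(band))             # Hilbert envelope of the band
        be, ae = butter(2, env_cut, btype="low", fs=fs)
        env = filtfilt(be, ae, env)             # smooth the envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(x)))  # noise carrier
        y += env * carrier                      # ENV preserved, TFS replaced
    return y
```

In a binaural experiment of the kind described, using the same seeded noise carriers at the two ears preserves interaural correlation in the TFS, while independent carriers per ear eliminate it; the speech ENVs stay highly correlated across ears either way.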


Journal of the Acoustical Society of America | 2014

Accounting for nonmonotonic precursor duration effects with gain reduction in the temporal window model.

Elin Roverud; Elizabeth A. Strickland

The mechanisms of forward masking are not clearly understood. The temporal window model (TWM) proposes that masking occurs via a neural mechanism that integrates within a temporal window. The medial olivocochlear reflex (MOCR), a sound-evoked reflex that reduces cochlear amplifier gain, may also contribute to forward masking if the preceding sound reduces gain for the signal. Psychophysical evidence of gain reduction can be observed using a growth of masking (GOM) paradigm with an off-frequency forward masker and a precursor. The basilar membrane input/output (I/O) function is estimated from the GOM function, and the I/O function gain is reduced by the precursor. In this study, the effect of precursor duration on this gain reduction effect was examined for on- and off-frequency precursors. With on-frequency precursors, thresholds increased with increasing precursor duration, then decreased (rolled over) for longer durations. Thresholds with off-frequency precursors continued to increase with increasing precursor duration. These results are not consistent with solely neural masking, but may reflect gain reduction that selectively affects on-frequency stimuli. The TWM was modified to include history-dependent gain reduction to simulate the MOCR, called the temporal window model-gain reduction (TWM-GR). The TWM-GR predicted rollover and the differences with on- and off-frequency precursors whereas the TWM did not.


Trends in Hearing | 2017

The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task

Virginia Best; Elin Roverud; Timothy Streeter; Christine R. Mason; Gerald Kidd

The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question–answer pairs that were embedded in a mixture of competing conversations. The participant’s task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a “dynamic” condition in which the target stimulus moved between three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate.


Trends in Hearing | 2016

A Flexible Question-and-Answer Task for Measuring Speech Understanding

Virginia Best; Timothy Streeter; Elin Roverud; Christine R. Mason; Gerald Kidd

This report introduces a new speech task based on simple questions and answers. The task differs from a traditional sentence recall task in that it involves an element of comprehension and can be implemented in an ongoing fashion. It also contains two target items (the question and the answer) that may be associated with different voices and locations to create dynamic listening scenarios. A set of 227 questions was created, covering six broad categories (days of the week, months of the year, numbers, colors, opposites, and sizes). All questions and their one-word answers were spoken by 11 female and 11 male talkers. In this study, listeners were presented with question-answer pairs and asked to indicate whether the answer was true or false. Responses were given as simple button or key presses, which are quick to make and easy to score. Two preliminary experiments are presented that illustrate different ways of implementing the basic task. In the first experiment, question-answer pairs were presented in speech-shaped noise, and performance was compared across subjects, question categories, and time, to examine the different sources of variability. In the second experiment, sequences of question-answer pairs were presented amidst competing conversations in an ongoing, spatially dynamic listening scenario. Overall, the question-and-answer task appears to be feasible and could be implemented flexibly in a number of different ways.


Journal of the Acoustical Society of America | 2017

Examination of a hybrid beamformer that preserves auditory spatial cues

Virginia Best; Elin Roverud; Christine R. Mason; Gerald Kidd Jr.

A hearing-aid strategy that combines a beamforming microphone array in the high frequencies with natural binaural signals in the low frequencies was examined. This strategy attempts to balance the benefits of beamforming (improved signal-to-noise ratio) with the benefits of binaural listening (spatial awareness and location-based segregation). The crossover frequency was varied from 200 to 1200 Hz, and performance was compared to full-spectrum binaural and beamformer conditions. Speech intelligibility in the presence of noise or competing speech was measured in listeners with and without hearing loss. Results showed that the optimal crossover frequency depended on the listener and the nature of the interference.
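The hybrid strategy described above amounts to crossing over between two processed signals in frequency. The sketch below is a minimal single-ear illustration under assumed inputs (a natural ear signal and a beamformer output), not the study's processing chain; the crossover frequency and filter order are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def hybrid_output(binaural_ear, beamformer, fs, crossover_hz=800.0, order=4):
    """Hybrid-beamformer sketch: natural binaural signal below the
    crossover frequency, beamformer output above it (one ear shown)."""
    bl, al = butter(order, crossover_hz, btype="low", fs=fs)
    bh, ah = butter(order, crossover_hz, btype="high", fs=fs)
    low = filtfilt(bl, al, binaural_ear)   # keeps low-frequency binaural cues
    high = filtfilt(bh, ah, beamformer)    # keeps beamformer SNR benefit
    return low + high
```

Varying `crossover_hz` trades spatial awareness (more natural binaural bandwidth) against signal-to-noise ratio (more beamformer bandwidth), which is the trade-off the study measured across listeners.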


Trends in Hearing | 2016

Informational Masking in Normal-Hearing and Hearing-Impaired Listeners Measured in a Nonspeech Pattern Identification Task

Elin Roverud; Virginia Best; Christine R. Mason; Jayaganesh Swaminathan; Gerald Kidd

Individuals with sensorineural hearing loss (SNHL) often experience more difficulty with listening in multisource environments than do normal-hearing (NH) listeners. While the peripheral effects of sensorineural hearing loss certainly contribute to this difficulty, differences in central processing of auditory information may also contribute. To explore this issue, it is important to account for peripheral differences between NH and these hearing-impaired (HI) listeners so that central effects in multisource listening can be examined. In the present study, NH and HI listeners performed a tonal pattern identification task at two distant center frequencies (CFs), 850 and 3500 Hz. In an attempt to control for differences in the peripheral representations of the stimuli, the patterns were presented at the same sensation level (15 dB SL), and the frequency deviation of the tones comprising the patterns was adjusted to obtain equal quiet pattern identification performance across all listeners at both CFs. Tonal sequences were then presented at both CFs simultaneously (informational masking conditions), and listeners were asked either to selectively attend to a source (CF) or to divide attention between CFs and identify the pattern at a CF designated after each trial. There were large differences between groups in the frequency deviations necessary to perform the pattern identification task. After compensating for these differences, there were small differences between NH and HI listeners in the informational masking conditions. HI listeners showed slightly greater performance asymmetry between the low and high CFs than did NH listeners, possibly due to central differences in frequency weighting between groups.


Journal of the Acoustical Society of America | 2016

Evaluating performance of hearing-impaired listeners with a visually-guided hearing aid in an audio-visual word congruence task

Elin Roverud; Virginia Best; Christine R. Mason; Timothy Streeter; Gerald Kidd

Hearing-impaired (HI) individuals typically experience greater difficulty listening selectively to a target talker in speech mixtures than do normal-hearing (NH) listeners. To assist HI listeners in these situations, the benefit of a visually guided hearing aid (VGHA)—highly directional amplification created by acoustic beamforming steered by eye gaze—is being evaluated. In past work with the VGHA, performance of NH listeners was assessed on an audio-visual word congruence task [e.g., Roverud et al., ARO 2015]. The present study extends this work to HI listeners. Three spoken words are presented simultaneously under headphones at each of three source locations separated in azimuth. A single word is printed on a monitor at an angle corresponding to one of the locations, indicating the auditory target word location. After each stimulus, listeners indicate whether the auditory target matches the printed word with a YES-NO response, ignoring the two masker words. Listeners visually track the printed word, whi...


Trends in Hearing | 2018

A “Buildup” of Speech Intelligibility in Listeners With Normal Hearing and Hearing Loss

Virginia Best; Jayaganesh Swaminathan; Norbert Kopčo; Elin Roverud; Barbara G. Shinn-Cunningham

The perception of simple auditory mixtures is known to evolve over time. For instance, a common example of this is the “buildup” of stream segregation that is observed for sequences of tones alternating in pitch. Yet very little is known about how the perception of more complicated auditory scenes, such as multitalker mixtures, changes over time. Previous data are consistent with the idea that the ability to segregate a target talker from competing sounds improves rapidly when stable cues are available, which leads to improvements in speech intelligibility. This study examined the time course of this buildup in listeners with normal and impaired hearing. Five simultaneous sequences of digits, varying in length from three to six digits, were presented from five locations in the horizontal plane. A synchronized visual cue at one location indicated which sequence was the target on each trial. We observed a buildup in digit identification performance, driven primarily by reductions in confusions between the target and the maskers, that occurred over the course of three to four digits. Performance tended to be poorer in listeners with hearing loss; however, there was only weak evidence that the buildup was diminished or slowed in this group.

Collaboration


Dive into Elin Roverud's collaborations.

Top Co-Authors


Judy R. Dubno

Medical University of South Carolina


Jayne B. Ahlstrom

Medical University of South Carolina
