

Publications


Featured research published by Antje Ihlefeld.


Journal of the Acoustical Society of America | 2011

Effect of source spectrum on sound localization in an everyday reverberant room

Antje Ihlefeld; Barbara G. Shinn-Cunningham

Two experiments explored how frequency content impacts sound localization for sounds containing reverberant energy. Virtual sound sources from thirteen lateral angles and four distances were simulated in the frontal horizontal plane using binaural room impulse responses measured in an everyday office. Experiment 1 compared localization judgments for one-octave-wide noise centered at either 750 Hz (low) or 6000 Hz (high). For both band-limited noises, perceived lateral angle varied monotonically with source angle. For frontal sources, perceived locations were similar for low- and high-frequency noise; however, for lateral sources, localization was less accurate for low-frequency noise than for high-frequency noise. With increasing source distance, judgments of both noises became more biased toward the median plane, an effect that was greater for low-frequency noise than for high-frequency noise. In Experiment 2, simultaneous presentation of the low- and high-frequency noises yielded performance that was less accurate than that for high-frequency noise alone, but equal to or better than that for low-frequency noise alone. Results suggest that listeners perceptually weight low-frequency information heavily, even in reverberant conditions where high-frequency stimuli are localized more accurately. These findings show that listeners do not always optimally adjust how localization cues are integrated across frequency in reverberant settings.
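For readers who want to reproduce this kind of virtual-acoustics simulation, here is a minimal sketch of the general recipe; the fake impulse responses, band edges, and durations below are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch (not the study's code): simulate a virtual source by
# convolving a band-limited noise token with a binaural room impulse
# response (BRIR). Real measured BRIRs would replace the fake ones here.
import numpy as np
from scipy.signal import butter, sosfiltfilt, fftconvolve

fs = 44100  # sampling rate in Hz (assumed)

def octave_noise(fc, dur=0.5):
    """One-octave-wide Gaussian noise centered at fc, as in Experiment 1."""
    lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)   # one-octave band edges
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, np.random.randn(int(dur * fs)))

# Stand-ins for measured left/right BRIRs: exponentially decaying noise.
t = np.arange(int(0.3 * fs)) / fs
brir_left = np.random.randn(t.size) * np.exp(-t / 0.05)
brir_right = np.random.randn(t.size) * np.exp(-t / 0.05)

src = octave_noise(750.0)               # "low" condition; use 6000.0 for "high"
left_ear = fftconvolve(src, brir_left)  # signal reaching the left ear
right_ear = fftconvolve(src, brir_right)  # signal reaching the right ear
binaural = np.stack([left_ear, right_ear], axis=1)
```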


PLOS ONE | 2012

Interaural Level Differences Do Not Suffice for Restoring Spatial Release from Masking in Simulated Cochlear Implant Listening

Antje Ihlefeld; Ruth Y. Litovsky

Spatial release from masking refers to a benefit in speech understanding that occurs when a target talker and a masking talker are spatially separated: speech intelligibility for the target is then typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal-hearing listeners, perhaps because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two distinctive features. First, speech envelopes were low-pass filtered at 50 Hz and imposed on noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were therefore insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least in situations with few viable alternative segregation cues.
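The two-feature vocoding scheme lends itself to a compact sketch; the filterbank edges, filter orders, and Hilbert-envelope extraction below are illustrative assumptions rather than the study's exact processing.

```python
# Minimal sketch of the vocoder idea: per band, extract a <= 50-Hz
# envelope and impose it on a band-limited noise carrier. The channel
# edges and filter orders are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000

def bp(lo, hi):
    return butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")

env_lp = butter(4, 50.0, btype="lowpass", fs=fs, output="sos")  # 50-Hz envelope filter

def vocode(speech, band_edges):
    out = np.zeros_like(speech)
    for lo, hi in band_edges:
        band = sosfiltfilt(bp(lo, hi), speech)
        env = sosfiltfilt(env_lp, np.abs(hilbert(band)))  # slow envelope only
        env = np.clip(env, 0.0, None)                     # envelopes cannot go negative
        carrier = sosfiltfilt(bp(lo, hi), np.random.randn(speech.size))
        out += env * carrier                              # envelope-modulated noise
    return out

edges = [(100, 400), (400, 1000), (1000, 2200), (2200, 4000)]  # illustrative channels
speech = np.random.randn(fs)  # stand-in for a recorded utterance
vocoded = vocode(speech, edges)
```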


Frontiers in Systems Neuroscience | 2014

Across-frequency combination of interaural time difference in bilateral cochlear implant listeners

Antje Ihlefeld; Alan Kan; Ruth Y. Litovsky

The current study examined how cochlear implant (CI) listeners combine temporally interleaved envelope-ITD information across two sites of stimulation. When two cochlear sites jointly transmit ITD information, one possibility is that CI listeners can extract the most reliable ITD cues available, so that ITD sensitivity would be sustained or enhanced compared to single-site stimulation. Alternatively, mutual interference across multiple sites of ITD stimulation could worsen dual-site performance compared to listening to the better of the two electrode pairs. Two experiments used direct stimulation to examine whether CI users can integrate ITDs across two pairs of electrodes. Experiment 1 tested ITD discrimination at two stimulation sites using 100-Hz sinusoidally modulated 1000-pps-carrier pulse trains. Experiment 2 used the same stimuli ramped with 100-ms windows, as a control condition with minimized onset cues. For all stimuli, performance improved monotonically with increasing modulation depth. Results show that when CI listeners were stimulated with electrode pairs at two cochlear sites, sensitivity to ITDs was similar to that seen when only the electrode pair with better sensitivity was activated. No listener showed a performance decrement caused by the worse electrode pair. This pattern could be achieved either by listening to the better electrode pair or by truly integrating the information across cochlear sites.
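A minimal sketch of this stimulus construction may help; the sampling rate, pulse shape, and 400-us ITD below are illustrative assumptions.

```python
# Minimal sketch: a 1000-pps pulse train whose amplitudes follow a
# 100-Hz sinusoidal envelope, with an envelope ITD imposed by delaying
# one ear's modulator. Sampling rate, ITD, and depth are illustrative.
import numpy as np

fs = 100000                     # high rate so single samples stand in for pulses
dur, pps, fmod = 0.3, 1000, 100.0
mod_depth = 1.0                 # modulation depth (0..1); varied in the study
itd = 400e-6                    # 400-us envelope ITD (assumed value)

n = int(dur * fs)
train = np.zeros(n)
train[::fs // pps] = 1.0        # 1000 pulses per second

t = np.arange(n) / fs
def sam_train(delay):
    env = 1.0 + mod_depth * np.sin(2 * np.pi * fmod * (t - delay))
    return train * env / 2.0    # sinusoidally modulated pulse amplitudes

left = sam_train(0.0)
right = sam_train(itd)          # right-ear envelope lags by the ITD
```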


Journal of the Acoustical Society of America | 2004

Effect of source location and listener location on ILD cues in a reverberant room

Antje Ihlefeld; Barbara G. Shinn-Cunningham

Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD alone is not a reliable cue for determining source laterality in a reverberant room.
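The short-term ILD analysis itself is easy to sketch in code; the 20-ms window and the Butterworth approximation to third-octave filters are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the analysis: short-term ILDs per third-octave band,
# summarized by their mean and variance. The 20-ms window and Butterworth
# approximation to third-octave filters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
win = int(0.020 * fs)  # 20-ms analysis window (assumed)

def third_octave(fc):
    lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)  # third-octave band edges
    return butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")

def short_term_ild(left, right, fc, eps=1e-12):
    l = sosfiltfilt(third_octave(fc), left)
    r = sosfiltfilt(third_octave(fc), right)
    n = (len(l) // win) * win
    e_l = (l[:n].reshape(-1, win) ** 2).sum(axis=1)  # left-ear energy per window
    e_r = (r[:n].reshape(-1, win) ** 2).sum(axis=1)  # right-ear energy per window
    return 10 * np.log10((e_l + eps) / (e_r + eps))  # ILD in dB per window

left = np.random.randn(fs)        # stand-ins for the simulated ear signals
right = 0.5 * np.random.randn(fs)
ilds = short_term_ild(left, right, fc=2000.0)
print(ilds.mean(), ilds.var())    # mean and variance of the short-term ILD
```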


Trends in Hearing | 2016

Developmental Conductive Hearing Loss Reduces Modulation Masking Release

Antje Ihlefeld; Yi Wen Chen; Dan H. Sanes

Hearing-impaired individuals experience difficulties in detecting or understanding speech, especially when background sounds fall within the same frequency range. However, normal-hearing (NH) human listeners experience less difficulty detecting a target tone in background noise when the envelope of that noise is temporally gated (modulated) than when the envelope is flat across time (unmodulated). This perceptual benefit is called modulation masking release (MMR). When flanking masker energy is added well outside the frequency band of the target and comodulated with the original modulated masker, detection thresholds improve further (MMR+). In contrast, if the flanking masker is antimodulated with the original masker, thresholds worsen (MMR−). These interactions across disparate frequency ranges are thought to require central nervous system (CNS) processing. Therefore, we explored the effect of developmental conductive hearing loss (CHL) in gerbils on MMR characteristics, as a test of putative CNS mechanisms. The detection thresholds of NH gerbils were lower in modulated noise than in unmodulated noise. The addition of a comodulated flanker further improved performance, whereas an antimodulated flanker worsened performance. However, for CHL-reared gerbils, all three forms of masking release were reduced compared with NH animals. These results suggest that developmental CHL impairs both within- and across-frequency processing and provide behavioral evidence that CNS mechanisms are affected by a peripheral hearing impairment.
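The masking-release conditions reduce to a compact stimulus recipe; the band edges, levels, and 50% duty cycle in the sketch below are illustrative assumptions, with only the gating logic taken from the description above.

```python
# Minimal stimulus sketch for the masking-release conditions: a 1-kHz
# target in on-frequency noise that is flat (unmodulated) or 10-Hz gated
# (modulated), plus a remote flanking band that shares the gating (MMR+)
# or receives the opposite gating (MMR-). All band edges and the 50%
# duty cycle are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, dur, fmod = 44100, 1.0, 10.0
t = np.arange(int(dur * fs)) / fs

def band_noise(lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, np.random.randn(t.size))

gate = (np.floor(2 * fmod * t) % 2).astype(float)  # 10-Hz rectangular gating

tone = np.sin(2 * np.pi * 1000 * t)                # 1-kHz target tone
masker = band_noise(800, 1250)                     # on-frequency masker band
flank = band_noise(3000, 4000)                     # remote flanking band

unmodulated = tone + masker
modulated = tone + masker * gate                   # MMR: gated on-frequency masker
comod = modulated + flank * gate                   # MMR+: flanker shares the envelope
antimod = modulated + flank * (1.0 - gate)         # MMR-: flanker has opposite envelope
```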


Archive | 2007

Neural and Behavioral Sensitivities to Azimuth Degrade with Distance in Reverberant Environments

Sasha Devore; Antje Ihlefeld; Barbara G. Shinn-Cunningham; Bertrand Delgutte

Reverberation poses a challenge to sound localization in rooms. In an anechoic space, the only energy reaching a listener’s ears arrives directly from the sound source. In reverberant environments, however, acoustic reflections interfere with the direct sound and distort the ongoing directional cues, leading to fluctuations in interaural time and level differences (ITD and ILD) over the course of the stimulus (Shinn-Cunningham et al. 2005). These effects become more severe as the distance from sound source to listener increases, which causes the ratio of direct to reverberant energy (D/R) to decrease (Hartmann et al. 2005; Shinn-Cunningham et al. 2005). Few neurophysiological and psychophysical studies have systematically examined sensitivity to sound source azimuth as a function of D/R (Rakerd and Hartmann 2005). Here we report the results of two closely integrated studies aimed at characterizing the influence of acoustic reflections like those present in typical classrooms on both the directional sensitivity of auditory neurons and the localization performance of human listeners. We used low-frequency stimuli to emphasize ITDs, which are the most important binaural cue for sounds containing low-frequency energy (MacPherson and Middlebrooks 2002; Wightman and Kistler 1992). We find that reverberation reduces the directional sensitivity of low-frequency, ITD-sensitive neurons in the cat inferior colliculus (IC), and that this degradation becomes more severe with decreasing D/R (increasing distance). We show parallel degradations in human sensitivity to the azimuth of low-frequency noise.
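Because the chapter's central variable is D/R, a minimal sketch of how D/R can be estimated from an impulse response may be useful; the 2.5-ms split point is a common convention assumed here, not the chapter's definition.

```python
# Minimal sketch: estimate the direct-to-reverberant energy ratio (D/R)
# from a room impulse response by splitting the energy shortly after the
# direct-path arrival. The 2.5-ms split is a common convention assumed
# here, not the chapter's definition.
import numpy as np

def direct_to_reverberant_db(ir, fs, direct_ms=2.5):
    onset = int(np.argmax(np.abs(ir)))           # direct-path arrival sample
    split = onset + int(direct_ms * 1e-3 * fs)
    direct = np.sum(ir[:split] ** 2)             # energy up to the split point
    reverb = np.sum(ir[split:] ** 2)             # remaining reverberant energy
    return 10 * np.log10(direct / reverb)

fs = 44100
t = np.arange(int(0.4 * fs)) / fs
ir = np.random.randn(t.size) * np.exp(-t / 0.06)  # synthetic decaying reverberant tail
ir[0] = 5.0                                       # strong direct-path peak
print(direct_to_reverberant_db(ir, fs))           # D/R falls as distance grows
```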


Journal of the Acoustical Society of America | 2009

The intelligibility of pointillistic speech

Gerald Kidd; Timothy Streeter; Antje Ihlefeld; Ross K. Maddox; Christine R. Mason

A form of processed speech is described that is highly discriminable in a closed-set identification format. The processing renders speech into a set of sinusoidal pulses played synchronously across frequency. Results from several experiments are described in which the number and width of the frequency analysis channels and the tone-pulse duration were varied. In one condition, various proportions of the tones were randomly removed. The processed speech was remarkably resilient to these manipulations. This type of speech may be useful for examining multitalker listening situations in which a high degree of stimulus control is required.
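As one might re-implement the pointillistic processing, a rough sketch follows; the channel centers, pulse duration, and removal probability are illustrative assumptions, not the published parameters.

```python
# Rough sketch of pointillistic processing: per channel, sample the band
# envelope on a coarse grid and resynthesize brief tone pulses at the
# channel center frequency, synchronized across channels. Channel centers,
# pulse duration, and keep_prob are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000
plen = int(0.025 * fs)                    # 25-ms tone pulses (assumed)

def pointillize(speech, centers, keep_prob=1.0):
    out = np.zeros(len(speech))
    nframes = len(speech) // plen
    t = np.arange(plen) / fs
    ramp = np.hanning(plen)               # smooth pulse onsets and offsets
    for fc in centers:
        lo, hi = fc * 2 ** (-0.25), fc * 2 ** 0.25   # half-octave analysis band
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        for k in range(nframes):
            if np.random.rand() > keep_prob:
                continue                  # randomly remove a proportion of tones
            seg = band[k * plen:(k + 1) * plen]
            amp = np.sqrt(np.mean(seg ** 2))          # band level in this frame
            out[k * plen:(k + 1) * plen] += amp * ramp * np.sin(2 * np.pi * fc * t)
    return out

centers = [250, 500, 1000, 2000, 4000]    # illustrative channel center frequencies
speech = np.random.randn(fs)              # stand-in for a recorded utterance
pointillistic = pointillize(speech, centers, keep_prob=0.5)
```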


bioRxiv | 2018

Spatial Release from Masking: Evidence from Near Infrared Spectroscopy

Antje Ihlefeld; Min Zhang

Informational masking (IM) can greatly reduce speech intelligibility, but the neural mechanisms underlying IM are not understood. Binaural differences between target and masker can improve speech perception; in general, the improvement in masked speech intelligibility due to the provision of spatial cues is called spatial release from masking. Here, we focused on one aspect of spatial release from masking: the role of spatial attention. We hypothesized that, in a situation with an IM background sound, 1) attention to speech recruits lateral frontal cortex (LFCx), and 2) LFCx activity varies with the direction of spatial attention. Using functional near-infrared spectroscopy (fNIRS), we assessed LFCx activity bilaterally in normal-hearing listeners. In Experiment 1, two talkers were presented simultaneously. Listeners either attended to the target talker (speech task) or listened passively to an unintelligible, scrambled version of the acoustic mixture (control task). Target and masker differed in pitch and interaural time difference (ITD). Relative to the passive control, LFCx activity increased during attentive listening. Experiment 2 measured how LFCx activity varied with ITD by testing listeners on the speech task of Experiment 1, except that the talkers were either spatially separated by ITD or co-located. Results show that directing auditory attention activates LFCx bilaterally. Moreover, right LFCx is recruited more strongly in the spatially separated than in the co-located configurations. These findings suggest that LFCx function contributes to spatial release from masking in situations with IM.


bioRxiv | 2018

Human Sound Localization Depends on Sound Intensity: Implications for Sensory Coding

Antje Ihlefeld; Nima Alamatsaz; Robert Shapley

A fundamental question of human perception is how we perceive target locations in space. Through our eyes and skin, the activation patterns of sensory organs provide rich spatial cues, but for other sensory dimensions, including sound localization and visual depth perception, spatial locations must be computed by the brain. For instance, interaural time differences (ITDs) of the sounds reaching the two ears allow listeners to localize sound in the horizontal plane. Our experiments tested two prevalent theories of how ITDs affect human sound localization: 1) the labelled-line model, which encodes space through tuned representations of spatial location, versus 2) the hemispheric-difference model, which represents space through spike-rate differences relative to a perceptual anchor. Unlike the labelled-line model, the hemispheric-difference model predicts that with decreasing intensity, sound localization should collapse toward the midline reference, and this is what we observed behaviorally. These findings cast doubt on models of human sound localization that rely on a spatially tuned map. Moreover, analogous experimental results in vision indicate that perceived depth depends on the contrast of the target. Based on our findings, we propose that the brain uses a canonical computation of location across sensory modalities: perceived location is encoded through population spike rate relative to a baseline.
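A toy simulation makes the contrast between the models concrete; the sketch below implements a two-channel hemispheric-difference read-out with assumed sigmoidal tuning, as an illustration of the model class rather than the paper's fitted model.

```python
# Toy read-out contrasting the models: in a hemispheric-difference code,
# location is decoded from the rate difference of two broadly tuned
# channels relative to a midline anchor, so lowering overall rates
# (lower intensity) shrinks decoded locations toward midline. The
# sigmoidal tuning and gain values are illustrative assumptions.
import numpy as np

itds = np.linspace(-600e-6, 600e-6, 13)       # source ITDs in seconds

def hemisphere_rates(itd_s, gain):
    x = itd_s * 1e6 / 150.0                   # ITD in units of tuning slope
    left = gain / (1.0 + np.exp(-x))          # left-favoring channel
    right = gain / (1.0 + np.exp(x))          # right-favoring channel
    return left, right

for gain in (1.0, 0.5):                       # high vs. low sound intensity
    l, r = hemisphere_rates(itds, gain)
    decoded = l - r                           # rate difference re: midline anchor
    print(f"gain={gain}: decoded range {decoded.min():+.3f} to {decoded.max():+.3f}")
# Halving the gain halves the decoded range: locations collapse toward
# the midline reference, matching the behavioral result reported above.
```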


Journal of the Acoustical Society of America | 2018

The role of central processing in modulation masking release

Nima Alamatsaz; Antje Ihlefeld

When background sound is present, hearing-impaired (HI) individuals and cochlear-implant (CI) listeners typically are worse at hearing out a target sound than normal-hearing (NH) listeners. This perceptual deficit occurs both when the background consists of noise that fluctuates over time (“modulated”) and for stationary background noise (“unmodulated”). In addition, the difference in thresholds between tone detection in modulated and unmodulated noise, referred to as modulation masking release (MMR), is much reduced or absent in HI and CI listeners as compared to NH listeners. Both peripheral and central processing mechanisms contribute to MMR. We previously showed that central MMR is reduced in human CI listeners, and that sound deprivation reduces central MMR in Mongolian gerbils. Here, we began to explore the neurobiological basis of central MMR. NH gerbils were trained to hear out target tones (1 kHz) in modulated (10-Hz rectangularly gated) versus unmodulated band-limited background noise, and were chronically implanted with recording electrodes in core auditory cortex. Neural discharge was analyzed as a function of the broadband energy ratio between target and background sound to determine how different types of background sound affect neural information transmission in awake, behaving gerbils. Preliminary results will be discussed in the context of how hearing loss may affect central MMR.

Collaboration


Dive into Antje Ihlefeld's collaborations.

Top Co-Authors

Robert P. Carlyon (Cognition and Brain Sciences Unit)
Ruth Y. Litovsky (University of Wisconsin-Madison)
Nima Alamatsaz (New Jersey Institute of Technology)
Alan Kan (University of Wisconsin-Madison)
Bertrand Delgutte (Massachusetts Eye and Ear Infirmary)
Tyler H. Churchill (University of Wisconsin-Madison)
Min Zhang (New Jersey Institute of Technology)