Mounya Elhilali
Johns Hopkins University
Publications
Featured research published by Mounya Elhilali.
Current Opinion in Neurobiology | 2007
Jonathan B. Fritz; Mounya Elhilali; Stephen V. David; Shihab A. Shamma
Some fifty years after the first physiological studies of auditory attention, the field is now ripening, with exciting recent insights into the psychophysics, psychology, and neural basis of auditory attention. Current research seeks to unravel the complex interactions of pre-attentive and attentive processing of the acoustic scene, the role of auditory attention in mediating receptive-field plasticity in both auditory spatial and auditory feature processing, the contrasts and parallels between auditory and visual attention pathways and mechanisms, the interplay of bottom-up and top-down attentional mechanisms, the influential role of attention, goals, and expectations in shaping auditory processing, and the orchestration of diverse attentional effects at multiple levels from the cochlea to the cortex.
Trends in Neurosciences | 2011
Shihab A. Shamma; Mounya Elhilali; Christophe Micheyl
Humans and other animals can attend to one of multiple sounds and follow it selectively over time. The neural underpinnings of this perceptual feat remain mysterious. Some studies have concluded that sounds are heard as separate streams when they activate well-separated populations of central auditory neurons, and that this process is largely pre-attentive. Here, we argue instead that stream formation depends primarily on temporal coherence between responses that encode various features of a sound source. Furthermore, we postulate that only when attention is directed towards a particular feature (e.g. pitch) do all other temporally coherent features of that source (e.g. timbre and location) become bound together as a stream that is segregated from the incoherent features of other sources.
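The temporal-coherence idea above can be illustrated numerically: feature channels whose response envelopes are highly correlated over time are candidates for binding into a single stream. The sketch below is a minimal numpy illustration of that principle only; the function name and normalization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def temporal_coherence(responses):
    """Pairwise temporal coherence of feature-channel responses.

    responses: (n_channels, n_samples) array of response envelopes.
    Returns the (n_channels, n_channels) correlation matrix; channels
    with high mutual coherence would bind into one perceptual stream.
    """
    # z-score each channel, then average the products over time
    z = responses - responses.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True) + 1e-12
    return z @ z.T / responses.shape[1]
```

For example, two channels sharing a 4 Hz envelope yield coherence near 1, while a channel with a 7 Hz envelope is nearly incoherent with both.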
The Journal of Neuroscience | 2005
Jonathan B. Fritz; Mounya Elhilali; Shihab A. Shamma
Auditory experience leads to myriad changes in processing in the central auditory system. We recently described task-related plasticity characterized by rapid modulation of spectro-temporal receptive fields (STRFs) in ferret primary auditory cortex (A1) during tone detection. We conjectured that each acoustic task may have its own “signature” STRF changes, dependent on the salient cues that the animal must attend to perform the task. To discover whether other acoustic tasks could elicit changes in STRF shape, we recorded from A1 in ferrets also trained on a frequency discrimination task. Overall, we found a distinct pattern of STRF change, characterized by an expected selective enhancement at target tone frequency but also by an equally selective depression at reference tone frequency. When single-tone detection and frequency discrimination tasks were performed sequentially, neurons responded differentially to identical tones, reflecting distinct predictive values of stimuli in the two behavioral contexts. All results were observed in multiunit as well as single-unit recordings. Our findings provide additional evidence for the presence of adaptive neuronal responses in A1 that can swiftly change to reflect both sensory content and the changing behavioral meaning of incoming acoustic stimuli.
Speech Communication | 2003
Mounya Elhilali; Taishih Chi; Shihab A. Shamma
We present a biologically motivated method for assessing the intelligibility of speech recorded or transmitted under various types of distortions. The method employs an auditory model to analyze the effects of noise, reverberations, and other distortions on the joint spectro-temporal modulations present in speech, and on the ability of a channel to transmit these modulations. The effects are summarized by a spectro-temporal modulation index (STMI). The index is validated by comparing its predictions to those of the classical STI and to error rates reported by human subjects listening to speech contaminated with combined noise and reverberation. We further demonstrate that the STMI can handle difficult and nonlinear distortions such as phase-jitter and shifts, to which the STI is not sensitive.
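The core computation behind a modulation-based index can be sketched in a few lines: measure the joint spectro-temporal modulation content of clean and degraded speech, and score how much of it survives. This is a deliberately crude stand-in for the STMI (the actual index uses a full auditory/cortical model, not a raw 2-D FFT); all names and the distance measure here are illustrative assumptions.

```python
import numpy as np

def modulation_spectrum(spectrogram):
    """Joint spectro-temporal modulation content: magnitude of the
    2-D Fourier transform of a (freq x time) spectrogram."""
    return np.abs(np.fft.fft2(spectrogram))

def modulation_index(clean, degraded):
    """Toy intelligibility index in the spirit of the STMI.

    1.0 = spectro-temporal modulations fully preserved,
    0.0 = modulations fully destroyed by the distortion.
    """
    T = modulation_spectrum(clean)
    N = modulation_spectrum(degraded)
    return max(0.0, 1.0 - np.linalg.norm(T - N) / np.linalg.norm(T))
```

Because the comparison happens in the modulation domain, a distortion that scrambles modulations without changing long-term spectra (e.g. phase jitter) still lowers the index, which is the property the abstract highlights over the classical STI.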
Hearing Research | 2005
Jonathan B. Fritz; Mounya Elhilali; Shihab A. Shamma
Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). In the temporal tasks, however, the STRF changed along the temporal dimension, through a sharpening of temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at the network and single-unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.
The Journal of Neuroscience | 2004
Mounya Elhilali; Jonathan B. Fritz; David J. Klein; Jonathan Z. Simon; Shihab A. Shamma
Although single units in primary auditory cortex (A1) exhibit accurate timing in their phasic response to the onset of sound (precision of a few milliseconds), paradoxically, they are unable to sustain synchronized responses to repeated stimuli at rates much beyond 20 Hz. To explore the relationship between these two aspects of cortical response, we designed a broadband stimulus with a slowly modulated spectrotemporal envelope riding on top of a rapidly modulated waveform (or fine structure). Using this stimulus, we quantified the ability of cortical cells to encode independently and simultaneously the stimulus envelope and fine structure. Specifically, by reverse-correlating unit responses with these two stimulus dimensions, we measured the spectrotemporal response fields (STRFs) associated with the processing of the envelope, the fine structure, and the complete stimulus. A1 cells respond well to the slow spectrotemporal envelopes and produce a wide variety of STRFs. In over 70% of cases, A1 units also track the fine-structure modulations precisely, throughout the stimulus, and for frequencies up to several hundred Hertz. Such a dual response, however, is contingent on the cell being driven by both fast and slow modulations, in that the response to the slowly modulated envelope gates the expression of the fine structure. We also demonstrate that a simplified model of synaptic depression and facilitation, a cortical network of thalamic excitation and cortical inhibition, or a combination of the two can account for major trends in the observed findings. Finally, we discuss the potential functional significance and perceptual relevance of these coexistent, complementary dynamic response modes.
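Reverse correlation, the estimation technique named in this abstract, reduces in its simplest form to spike-triggered averaging: average the stimulus patch that preceded each spike. The sketch below shows that bare-bones version on a synthetic (frequency x time) stimulus; it omits the envelope/fine-structure separation used in the actual study, and all names are illustrative.

```python
import numpy as np

def strf_reverse_correlation(stimulus, spike_times, n_lags):
    """Spike-triggered average estimate of a spectrotemporal
    receptive field.

    stimulus:    (n_freq, n_time) spectrogram-like array
    spike_times: iterable of time indices at which spikes occurred
    n_lags:      how many time bins before each spike to average
    Returns an (n_freq, n_lags) STRF estimate; column n_lags-1 is
    the bin immediately preceding the spike.
    """
    n_freq, _ = stimulus.shape
    strf = np.zeros((n_freq, n_lags))
    count = 0
    for t in spike_times:
        if t >= n_lags:  # need a full pre-spike window
            strf += stimulus[:, t - n_lags:t]
            count += 1
    return strf / max(count, 1)
```

Driving the estimator with a simulated neuron that fires when one frequency channel was strong one bin earlier recovers a peak at exactly that channel and lag.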
Neuron | 2009
Serin Atiani; Mounya Elhilali; Stephen V. David; Jonathan B. Fritz; Shihab A. Shamma
Attention is essential for navigating complex acoustic scenes, when the listener seeks to extract a foreground source while suppressing background acoustic clutter. This study explored the neural correlates of this perceptual ability by measuring rapid changes of spectrotemporal receptive fields (STRFs) in primary auditory cortex during detection of a target tone embedded in noise. Compared with responses in the passive state, STRF gain decreased during task performance in most cells. By contrast, STRF shape changes were excitatory and specific, and were strongest in cells with best frequencies near the target tone. The net effect of these adaptations was to accentuate the representation of the target tone relative to the noise by enhancing responses of near-target cells to the tone during high-signal-to-noise ratio (SNR) tasks while suppressing responses of far-from-target cells to the masking noise in low-SNR tasks. These adaptive STRF changes were largest in high-performance sessions, confirming a close correlation with behavior.
PLOS Biology | 2009
Mounya Elhilali; Juanjuan Xiang; Shihab A. Shamma; Jonathan Z. Simon
Bottom-up (stimulus-driven) and top-down (attentional) processes interact when a complex acoustic scene is parsed. Both modulate the neural representation of the target in a manner strongly correlated with behavioral performance.
Journal of the Acoustical Society of America | 2008
Mounya Elhilali; Shihab A. Shamma
Sound systems and speech technologies can benefit greatly from a deeper understanding of how the auditory system, and particularly the auditory cortex, is able to parse complex acoustic scenes into meaningful auditory objects and streams under adverse conditions. In the current work, a biologically plausible model of this process is presented, where the role of cortical mechanisms in organizing complex auditory scenes is explored. The model consists of two stages: (i) a feature analysis stage that maps the acoustic input into a multidimensional cortical representation and (ii) an integrative stage that recursively builds up expectations of how streams evolve over time and reconciles its predictions with the incoming sensory input by sorting it into different clusters. This approach yields a robust computational scheme for speaker separation under conditions of speech or music interference. The model can also emulate the archetypal streaming percepts of tonal stimuli that have long been tested in human subjects. The implications of this model are discussed with respect to the physiological correlates of streaming in the cortex as well as the role of attention and other top-down influences in guiding sound organization.
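The integrative stage described above, which builds expectations of how streams evolve and reconciles them with incoming input, can be caricatured as online clustering: each incoming feature frame is assigned to the stream whose running prediction it best matches, and that prediction is then nudged toward the frame. This is only a toy stand-in for the model's recursive inference; the function, learning rate, and distance measure are assumptions for illustration.

```python
import numpy as np

def stream_frames(frames, n_streams=2, lr=0.1):
    """Assign feature frames to streams by prediction matching.

    frames: (n_time, n_features) array of cortical-like feature frames.
    Each stream keeps a running prediction (its mean feature vector);
    a frame joins the closest stream, which then updates its prediction.
    Returns one stream label per frame.
    """
    # seed each stream's prediction with one of the first frames
    preds = [frames[i].astype(float) for i in range(n_streams)]
    labels = []
    for f in frames:
        k = int(np.argmin([np.linalg.norm(f - p) for p in preds]))
        preds[k] += lr * (f - preds[k])  # reconcile prediction with input
        labels.append(k)
    return labels
```

Fed an alternating sequence of two well-separated feature vectors (the flavor of the classic two-tone streaming stimuli mentioned in the abstract), the sorter cleanly splits them into two streams.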
The Journal of Neuroscience | 2007
Mounya Elhilali; Jonathan B. Fritz; Taishih Chi; Shihab A. Shamma
To form a reliable, consistent, and accurate representation of the acoustic scene, a reasonable conjecture is that cortical neurons maintain stable receptive fields after an early period of developmental plasticity. However, recent studies suggest that cortical neurons can be modified throughout adulthood and may change their response properties quite rapidly to reflect changing behavioral salience of certain sensory features. Because claims of adaptive receptive field plasticity could be confounded by intrinsic, labile properties of receptive fields themselves, we sought to gauge spontaneous changes in the responses of auditory cortical neurons. In the present study, we examined changes in a series of spectrotemporal receptive fields (STRFs) gathered from single neurons in successive recordings obtained over time scales of 30–120 min in primary auditory cortex (A1) in the quiescent, awake ferret. We used a global analysis of STRF shape based on a large database of A1 receptive fields. By clustering this STRF space in a data-driven manner, STRF sequences could be classified as stable or labile. We found that >73% of A1 neurons exhibited stable receptive field attributes over these time scales. In addition, we found that the extent of intrinsic variation in STRFs during the quiescent state was insignificant compared with behaviorally induced STRF changes observed during performance of spectral auditory tasks. Our results confirm that task-related changes induced by attentional focus on specific acoustic features were indeed confined to behaviorally salient acoustic cues and could be convincingly attributed to learning-induced plasticity when compared with “spontaneous” receptive field variability.
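The stable-versus-labile classification of STRF sequences can be illustrated with a far simpler criterion than the paper's data-driven clustering of STRF space: call a neuron's sequence stable when every pair of successive STRF snapshots remains highly correlated. The threshold and function name below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def strf_stability(strf_sequence, threshold=0.8):
    """Classify a sequence of STRF snapshots from one neuron.

    strf_sequence: list of (n_freq, n_lags) arrays recorded over time.
    Returns 'stable' if every successive pair of snapshots correlates
    above `threshold`, else 'labile'.
    """
    flat = [s.ravel() for s in strf_sequence]
    cors = [np.corrcoef(a, b)[0, 1] for a, b in zip(flat, flat[1:])]
    return "stable" if min(cors) >= threshold else "labile"
```

A sequence of near-identical STRFs (small measurement noise) comes out stable, while independently drawn STRFs come out labile.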