Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shihab Shamma is active.

Publication


Featured research published by Shihab Shamma.


Journal of the Acoustical Society of America | 2013

Temporal coherence versus harmonicity in auditory stream formation

Christophe Micheyl; Heather A. Kreft; Shihab Shamma; Andrew J. Oxenham

This study sought to investigate the influence of temporal incoherence and inharmonicity on concurrent stream segregation, using performance-based measures. Subjects discriminated frequency shifts in a temporally regular sequence of target pure tones, embedded in a constant or randomly varying multi-tone background. Depending on the condition tested, the target tones were either temporally coherent or incoherent with, and either harmonically or inharmonically related to, the background tones. The results provide further evidence that temporal incoherence facilitates stream segregation and they suggest that deviations from harmonicity can cause similar facilitation effects, even when the targets and the maskers are temporally coherent.
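
As a rough illustration of the stimulus logic described in this abstract, the sketch below builds a regular target-tone sequence embedded in a multi-tone background that is either temporally coherent or incoherent with the targets, and either harmonically or inharmonically related to them. All parameter values (sample rate, burst duration, frequencies) are illustrative assumptions, not those used in the study.

```python
import numpy as np

FS = 16000        # sample rate (Hz), assumed
BURST = 0.1       # tone-burst duration (s), assumed
PERIOD = 0.25     # target repetition period (s), assumed

def tone(freq, dur=BURST, fs=FS):
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def target_in_background(n_bursts=10, f_target=1000.0, coherent=True, harmonic=True, seed=0):
    """Target tone sequence plus a multi-tone background (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    mix = np.zeros(int(n_bursts * PERIOD * FS))
    ratios = [2, 3, 4] if harmonic else [2.31, 3.17, 4.73]   # background/target frequency ratios
    for k in range(n_bursts):
        onset = int(k * PERIOD * FS)
        mix[onset:onset + int(BURST * FS)] += tone(f_target)           # target burst
        for r in ratios:
            jitter = 0 if coherent else int(rng.uniform(0, PERIOD - BURST) * FS)
            o = onset + jitter                                          # shared or jittered onset
            mix[o:o + int(BURST * FS)] += 0.5 * tone(r * f_target)      # background tone
    return mix / np.max(np.abs(mix))
```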


eLife | 2017

Detecting changes in dynamic and complex acoustic environments

Yves Boubenec; Jennifer Lawlor; Urszula Górska; Shihab Shamma; Bernhard Englitz

Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. Here we address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found at a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual-timescale statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change detection in complex acoustic environments. DOI: http://dx.doi.org/10.7554/eLife.24910.001
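
The dual-timescale idea in this abstract can be illustrated with a toy detector that compares statistics estimated over a long baseline window against a short recent window and reports a change when the discrepancy crosses a threshold. This is only a minimal sketch, not the authors' model; the window lengths and threshold are assumed values.

```python
import numpy as np

def detect_change(feature, long_win=200, short_win=20, threshold=4.0):
    """feature: 1-D array of a frame-level texture statistic (e.g., band energy)."""
    for t in range(long_win + short_win, feature.size):
        baseline = feature[t - long_win - short_win:t - short_win]   # long-timescale estimate
        recent = feature[t - short_win:t]                            # short-timescale estimate
        se = baseline.std() / np.sqrt(short_win) + 1e-9
        z = abs(recent.mean() - baseline.mean()) / se                # accumulated evidence
        if z > threshold:                                            # decision threshold crossed
            return t
    return None

# toy usage: a texture statistic whose mean steps up halfway through
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])
print(detect_change(x))   # reports a frame index shortly after 500
```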


Philosophical Transactions of the Royal Society B | 2017

An auditory illusion reveals the role of streaming in the temporal misallocation of perceptual objects.

Anahita H. Mehta; Nori Jacoby; Ifat Yasin; Andrew J. Oxenham; Shihab Shamma

This study investigates the neural correlates and processes underlying the ambiguous percept produced by a stimulus similar to Deutsch's ‘octave illusion’, in which each ear is presented with a sequence of alternating pure tones of low and high frequencies. The same sequence is presented to each ear, but in opposite phase, such that the left and right ears receive a high–low–high … and a low–high–low … pattern, respectively. Listeners generally report hearing the illusion of an alternating pattern of low and high tones, with all the low tones lateralized to one side and all the high tones lateralized to the other side. The current explanation of the illusion is that it reflects an illusory feature conjunction of pitch and perceived location. Using psychophysics and electroencephalogram measures, we test this and an alternative hypothesis involving synchronous and sequential stream segregation, and investigate potential neural correlates of the illusion. We find that the illusion of alternating tones arises from the synchronous tone pairs across ears rather than from sequential tones in one ear, suggesting that the illusion involves a misattribution of time across perceptual streams, rather than a misattribution of location within a stream. The results provide new insights into the mechanisms of binaural streaming and synchronous sound segregation. This article is part of the themed issue ‘Auditory and visual scene analysis’.
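
For concreteness, a minimal sketch of the dichotic stimulus described above: both ears receive the same alternation of a low and a high pure tone, but in opposite phase, so one ear hears high-low-high-... while the other hears low-high-low-.... The frequencies and timing used here are illustrative assumptions.

```python
import numpy as np

FS = 44100
DUR = 0.25   # duration of each tone (s), assumed

def pure_tone(freq, dur=DUR, fs=FS):
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def octave_illusion_sequence(n_tones=8, f_low=400.0, f_high=800.0):
    """Returns a (2, n_samples) stereo array: row 0 = left ear, row 1 = right ear."""
    left, right = [], []
    for i in range(n_tones):
        if i % 2 == 0:
            left.append(pure_tone(f_high)); right.append(pure_tone(f_low))
        else:
            left.append(pure_tone(f_low)); right.append(pure_tone(f_high))
    return np.stack([np.concatenate(left), np.concatenate(right)])
```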


International Ultrasonics Symposium | 2016

Functional Ultrasound Imaging of the thalamo-cortical auditory tract in awake ferrets using ultrafast Doppler imaging

Charlie Demene; Célian Bimbard; Marc Gesnik; Susanne Radtke-Schuller; Shihab Shamma; Yves Boubenec; Mickael Tanter

Large-scale functional imaging techniques are part of a fast-growing field of neuroscience aiming at understanding whole-brain activity. The recently introduced Functional Ultrasound Imaging (fUS), based on ultrafast Doppler, is a new, very sensitive method for monitoring changes in slow blood flow with high spatial (~100 μm) and temporal (down to the cardiac time scale) resolution over a typical imaged section 15 mm wide and 20 mm deep (at 15 MHz, typical for animal studies), which makes it an unequalled modality in the landscape of functional imaging. It has opened a large field of applications in neuroimaging, from epilepsy to spatial representation. Here we present its use to study the functional organization of auditory cortex and thalamic nuclei in the awake ferret.


IEEE Transactions on Signal Processing | 2016

Rigid Motion Model for Audio Source Separation

Guy Wolf; Stéphane Mallat; Shihab Shamma

We introduce a single-channel blind source separation algorithm for audio mixtures. It uses a strategy that is similar to rigid object segregation in videos. A velocity field is defined over the wavelet time-frequency plane. It captures the time evolution of amplitude modulations and harmonic frequencies. Several audio sources are segregated by separating their velocity fields under a harmonic rigidity assumption. Signals are then reconstructed from wavelet coefficients in different harmonic templates. The resulting monaural blind source separation is demonstrated on mixtures of speech, singing voice, music, and noise audio signals.
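
The two ingredients named in the abstract, a wavelet-style time-frequency representation and a velocity field describing how spectral energy drifts over time, can be illustrated crudely as below: a Morlet filterbank scalogram and a gradient-based drift estimate. This is only a hedged sketch of the kinds of quantities involved, not the authors' algorithm.

```python
import numpy as np

def morlet_scalogram(x, fs, freqs, cycles=7.0):
    """Magnitude scalogram from a bank of Morlet wavelets, one per frequency."""
    S = np.empty((len(freqs), x.size))
    for i, f in enumerate(freqs):
        sigma = cycles / (2 * np.pi * f)                      # Gaussian envelope width (s)
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        S[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return S

def velocity_field(S, eps=1e-6):
    """Crude drift estimate per time-frequency point: v = -(dS/dt) / (dS/df)."""
    dS_dt = np.gradient(S, axis=1)
    dS_df = np.gradient(S, axis=0)
    denom = np.where(np.abs(dS_df) < eps, eps, dS_df)
    return -dS_dt / denom

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
chirp = np.sin(2 * np.pi * (300 * t + 200 * t ** 2))          # tone whose frequency rises over time
S = morlet_scalogram(chirp, fs, freqs=np.linspace(200, 1200, 60))
V = velocity_field(S)                                          # drift expressed in bins per frame
```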


International Workshop on Machine Learning for Signal Processing | 2014

Audio source separation with time-frequency velocities

Guy Wolf; Stéphane Mallat; Shihab Shamma

Separating complex audio sources from a single measurement channel, with no training data, is highly challenging. We introduce a new approach, which relies on the time dynamics of rigid audio models, based on harmonic templates. The velocity vectors of such models are defined and computed in a time-frequency scalogram calculated with a wavelet transform. Similarly to rigid object segmentation in videos, multiple audio sources are discriminated by approximating their velocity vectors with low-dimensional models. The different audio sources are segmented by optimizing a harmonic template selection, which provides piecewise constant velocity approximations. Numerical experiments give examples of blind source separation from single channel audio signals.
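
As a self-contained toy companion to the previous sketch, the snippet below illustrates the idea of approximating time-frequency velocities with a low-dimensional, piecewise-constant model: velocities drawn from two sources are clustered into two constant levels. The paper's harmonic-template optimization is considerably more structured than this; the data and clustering step are purely illustrative.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain 1-D k-means: approximate `values` with k constant levels."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# toy data: components of two sources drifting at different rates (Hz per frame, assumed)
rng = np.random.default_rng(3)
velocities = np.concatenate([rng.normal(+40.0, 5.0, 300),    # source A components
                             rng.normal(-10.0, 5.0, 300)])   # source B components
labels, centers = kmeans_1d(velocities, k=2)
print(np.round(centers, 1))   # two piecewise-constant velocity levels, one per source
```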


bioRxiv | 2017

Task Engagement Enhances Population Encoding of Stimulus Meaning in Primary Auditory Cortex

Sophie Bagur; Martin Averseng; Diego Elgueda; Stephen V. David; Jonathan B. Fritz; Pingbo Yin; Shihab Shamma; Yves Boubenec; Srdjan Ostojic

The main functions of primary sensory cortical areas are classically considered to be the extraction and representation of stimulus features. In contrast, higher cortical sensory association areas are thought to be responsible for combining these sensory representations with internal motivations and learnt associations. These regions generate appropriate neural responses that are maintained until a motor command is executed. Within this framework, responses of the primary sensory areas during task performance are expected to carry less information about the behavioral meaning of the stimulus than higher sensory, association, motor and frontal cortices. Here we demonstrate instead that the neuronal population responses in the early primary auditory cortex (A1) display many aspects of responses generally associated with higher-level areas. A1 activity was recorded in awake ferrets while they were either passively listening or actively discriminating two periodic click trains of different rates in a Go/No-Go paradigm. By applying population-level dimensionality reduction techniques, we found that task engagement induced a shift in the nature of the encoding from a sensory-driven representation of the two stimuli to a behaviorally relevant representation of the two categories that specifically enhances the target stimulus. We demonstrate that this shift in encoding relies partly on a novel mechanism of change in spontaneous activity patterns upon engagement in the task. We show that this population-level representation of stimuli in A1 population activity bears strong similarities to responses in the frontal cortex, but appears earlier following stimulus presentation. Analysis of neural activity recorded in various Go/No-Go tasks, with different sounds and reinforcement paradigms, reveals that this striking population-level enhancement of target representation is a general property of task engagement. These findings indicate that primary sensory cortices play a highly flexible role in the processing of incoming stimuli and implement a crucial change in the structure of population activity in order to extract task-relevant information during behavior.
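
A minimal, hypothetical sketch of this style of population-level analysis: project trial-by-neuron responses onto their top principal components and compare how well two stimulus categories separate in passive versus engaged conditions. The simulated data and the separation measure below are illustrative assumptions, not the study's methods.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Rows are trials, columns are neurons; project onto the top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def class_separation(Z, labels):
    """Distance between class means relative to within-class spread."""
    a, b = Z[labels == 0], Z[labels == 1]
    return np.linalg.norm(a.mean(0) - b.mean(0)) / (0.5 * (a.std() + b.std()) + 1e-9)

rng = np.random.default_rng(0)
n_trials, n_neurons = 100, 50
labels = np.repeat([0, 1], n_trials // 2)
noise = rng.normal(0, 1, (n_trials, n_neurons))
signal = rng.normal(1, 0.1, n_neurons)                       # fixed response pattern per category
passive = noise + 0.3 * labels[:, None] * signal             # weak category signal (assumed)
engaged = noise + 1.0 * labels[:, None] * signal             # enhanced category signal (assumed)
for name, X in [("passive", passive), ("engaged", engaged)]:
    print(name, round(class_separation(pca_project(X), labels), 2))
```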


Archive | 2010

Rate Versus Temporal Code? A Spatio-Temporal Coherence Model of the Cortical Basis of Streaming

Mounya Elhilali; Ling Ma; Christophe Micheyl; Andrew J. Oxenham; Shihab Shamma

A better understanding of auditory scene analysis requires uncovering the brain processes that govern the segregation of sound patterns into perceptual streams. Existing models of auditory streaming emphasize tonotopic or “spatial” separation of neural responses as the primary determinant of stream segregation. While partially true, this theory is far from complete. It overlooks the involvement of and interaction between both “sequential” and “simultaneous” grouping mechanisms in the process of scene analysis.
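
The core computation behind a temporal-coherence account can be illustrated with a toy example: channels whose response envelopes fluctuate together over time (high pairwise correlation) are grouped into one stream, while anticorrelated channels are segregated. The two-channel alternating-tone example and the grouping threshold below are assumptions for illustration only, not the model itself.

```python
import numpy as np

def coherence_matrix(envelopes):
    """envelopes: (n_channels, n_frames) channel response envelopes; pairwise correlation."""
    return np.corrcoef(envelopes)

# alternating A and B tone bursts: channels driven by A are anticorrelated with the
# channel driven by B, so coherence-based grouping puts them in different streams
frames = np.arange(200)
chan_A = (frames % 20 < 10).astype(float)                    # responds during A bursts
chan_A2 = chan_A + 0.05 * np.random.default_rng(0).normal(size=frames.size)
chan_B = 1.0 - chan_A                                        # responds during B bursts
C = coherence_matrix(np.stack([chan_A, chan_A2, chan_B]))
same_stream = C > 0.5                                        # assumed grouping threshold
print(np.round(C, 2))
print(same_stream)
```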


bioRxiv | 2018

Neural Responses in Dorsal Prefrontal Cortex Reflect Proactive Interference during an Auditory Reversal Task

Nikolas A. Francis; Susanne Radtke-Schuller; Jonathan B. Fritz; Shihab Shamma

Task-related plasticity in the brain is triggered by changes in the behavioral meaning of sounds. We investigated plasticity in ferret dorsolateral frontal cortex (dlFC) during an auditory reversal task to study the neural correlates of proactive interference, i.e., perseveration of previously learned behavioral meanings that are no longer task-appropriate. Although the animals learned the task, target recognition decreased after reversals, indicating proactive interference. Frontal cortex responsiveness was consistent with previous findings that dlFC encodes the behavioral meaning of sounds. However, the neural responses observed here were more complex. For example, target responses were strongly enhanced, while responses to non-target tones and noises were weakly enhanced and strongly suppressed, respectively. Moreover, dlFC responsiveness reflected the proactive interference observed in behavior: target responses decreased after reversals, most significantly during incorrect behavioral responses. These findings suggest that the weak representation of behavioral meaning in dlFC may be a neural correlate of proactive interference.

Significance Statement: Neural activity in prefrontal cortex (PFC) is believed to enable cognitive flexibility during sensory-guided behavior. Since PFC encodes the behavioral meaning of sensory events, we hypothesized that weak representation of behavioral meaning in PFC may limit cognitive flexibility. To test this hypothesis, we recorded neural activity in ferret PFC while ferrets performed an auditory reversal task in which the behavioral meanings of sounds were reversed during experiments. The reversal task enabled us to study PFC responses during proactive interference, i.e., perseveration of previously learned behavioral meanings that are no longer task-appropriate. We found that task performance errors increased after reversals while PFC representation of behavioral meaning diminished. Our findings suggest that proactive interference may occur when PFC forms weak sensory-cognitive associations.


bioRxiv | 2018

Laminar profile of task-related plasticity in ferret primary auditory cortex

Nikolas A. Francis; Diego Elgueda; Bernhard Englitz; Jonathan B. Fritz; Shihab Shamma

Rapid task-related plasticity is a neural correlate of selective attention in primary auditory cortex (A1). Top-down feedback from higher-order cortex may drive task-related plasticity in A1, characterized by enhanced neural representation of behaviorally meaningful sounds during auditory task performance. Since intracortical connectivity is greater within A1 layers 2/3 (L2/3) than in layers 4-6 (L4-6), we hypothesized that enhanced representation of behaviorally meaningful sounds might be greater in A1 L2/3 than L4-6. To test this hypothesis and study the laminar profile of task-related plasticity, we trained 2 ferrets to detect pure tones while we recorded laminar activity across a 1.8 mm depth in A1. In each experiment, we analyzed current-source densities (CSDs), high-gamma local field potentials (LFPs), and multi-unit spiking in response to identical acoustic stimuli during both passive listening and active task performance. We found that neural responses to auditory targets were enhanced during task performance, and target enhancement was greater in L2/3 than in L4-6. Spectrotemporal receptive fields (STRFs) computed from CSDs, high-gamma LFPs, and multi-unit spiking showed similar increases in auditory target selectivity, also greatest in L2/3. Our results suggest that activity within intracortical networks plays a key role in shaping the underlying neural mechanisms of selective attention.
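
For readers unfamiliar with STRFs, the sketch below shows one standard estimator, reverse correlation (a response-weighted average of the preceding stimulus spectrogram). It illustrates the kind of STRF computation referred to in the abstract, not the specific estimator used in the study; the toy spectrogram and response are assumptions.

```python
import numpy as np

def strf_reverse_correlation(spec, response, n_lags=20):
    """spec: (n_freq, n_frames) stimulus spectrogram; response: (n_frames,) firing rate.
    Returns an (n_freq, n_lags) STRF estimate (lag 0 = most recent frame)."""
    n_freq, n_frames = spec.shape
    strf = np.zeros((n_freq, n_lags))
    total = 0.0
    for t in range(n_lags, n_frames):
        if response[t] > 0:
            strf += response[t] * spec[:, t - n_lags:t][:, ::-1]
            total += response[t]
    return strf / max(total, 1e-9)

# toy demo: a unit driven by energy in frequency band 12 with a 5-frame latency
rng = np.random.default_rng(0)
spec = rng.normal(0, 1, (30, 2000))
response = np.concatenate([np.zeros(5), np.clip(spec[12, :-5], 0, None)])
strf = strf_reverse_correlation(spec, response)
print(np.unravel_index(np.argmax(strf), strf.shape))   # expect roughly (12, 4)
```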

Collaboration


Dive into Shihab Shamma's collaboration.

Top Co-Authors

Yves Boubenec (École Normale Supérieure)

Stéphane Mallat (École Normale Supérieure)