Publication


Featured research published by Chandramouli Chandrasekaran.


PLOS Computational Biology | 2009

The Natural Statistics of Audiovisual Speech

Chandramouli Chandrasekaran; Andrea Trubanova; Sébastien Stillittano; Alice Caplier; Asif A. Ghazanfar

Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time-varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2–7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech, which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
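
A minimal sketch of the kind of measurement described above, correlating a mouth-area time series with an acoustic envelope and checking where its temporal modulation falls, is given below. This is not the paper's analysis code; the synthetic signals, sampling rate, and band edges are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): correlate a mouth-area time
# series with an acoustic envelope and estimate its modulation spectrum.
# The synthetic signals and the 2-7 Hz band are assumptions for illustration.
import numpy as np
from scipy.signal import welch

fs = 100.0                      # assumed frame rate of the mouth-area signal (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of "speech"

# Toy signals: a shared 4 Hz rhythm plus independent noise.
rhythm = np.sin(2 * np.pi * 4 * t)
mouth_area = rhythm + 0.5 * np.random.randn(t.size)
envelope = np.roll(rhythm, int(0.15 * fs)) + 0.5 * np.random.randn(t.size)  # ~150 ms lag

# Correlation between mouth opening and acoustic envelope.
r = np.corrcoef(mouth_area, envelope)[0, 1]

# Modulation spectrum of the mouth-area signal; power should peak in 2-7 Hz.
freqs, power = welch(mouth_area, fs=fs, nperseg=512)
band = (freqs >= 2) & (freqs <= 7)
print(f"correlation r = {r:.2f}; fraction of power in 2-7 Hz = "
      f"{power[band].sum() / power.sum():.2f}")
```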


The Journal of Neuroscience | 2008

Interactions between the superior temporal sulcus and auditory cortex mediate dynamic face/voice integration in rhesus monkeys

Asif A. Ghazanfar; Chandramouli Chandrasekaran; Nikos K. Logothetis

The existence of multiple nodes in the cortical network that integrate faces and voices suggests that they may be interacting and influencing each other during communication. To test the hypothesis that multisensory responses in auditory cortex are influenced by visual inputs from the superior temporal sulcus (STS), an association area, we recorded local field potentials and single neurons from both structures concurrently in monkeys. The functional interactions between the auditory cortex and the STS, as measured by spectral analyses, increased in strength during presentations of dynamic faces and voices relative to either communication signal alone. These interactions were not solely modulations of response strength, because the phase relationships were significantly less variable in the multisensory condition as well. A similar analysis of functional interactions within the auditory cortex revealed no similar interactions as a function of stimulus condition, nor did a control condition in which the dynamic face was replaced with a dynamic disk mimicking mouth movements. Single neuron data revealed that these intercortical interactions were reflected in the spiking output of auditory cortex and that such spiking output was coordinated with oscillations in the STS. The vast majority of single neurons that were responsive to voices showed integrative responses when faces, but not control stimuli, were presented in conjunction. Our data suggest that the integration of faces and voices is mediated at least in part by neuronal cooperation between auditory cortex and the STS and that interactions between these structures are a fast and efficient way of dealing with the multisensory communication signals.
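
The functional interactions described above were quantified with spectral measures; a generic field-field coherence computation gives the flavor of such an analysis. The sketch below uses synthetic traces rather than the recorded STS and auditory-cortex LFPs, and the sampling rate and coupling frequency are assumptions.

```python
# Minimal coherence sketch (synthetic data, not the recorded STS/auditory-cortex LFPs).
import numpy as np
from scipy.signal import coherence

fs = 1000.0                               # assumed LFP sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
shared = np.sin(2 * np.pi * 12 * t)       # shared 12 Hz component couples the two "areas"
lfp_auditory = shared + np.random.randn(t.size)
lfp_sts = shared + np.random.randn(t.size)

freqs, coh = coherence(lfp_auditory, lfp_sts, fs=fs, nperseg=1024)
peak = freqs[np.argmax(coh)]
print(f"peak coherence {coh.max():.2f} near {peak:.1f} Hz")
```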


Current Biology | 2008

Integration of Bimodal Looming Signals through Neuronal Coherence in the Temporal Lobe

Joost X. Maier; Chandramouli Chandrasekaran; Asif A. Ghazanfar

The ability to integrate information across multiple sensory systems offers several behavioral advantages, from quicker reaction times and more accurate responses to better detection and more robust learning. At the neural level, multisensory integration requires large-scale interactions between different brain regions--the convergence of information from separate sensory modalities, represented by distinct neuronal populations. The interactions between these neuronal populations must be fast and flexible, so that behaviorally relevant signals belonging to the same object or event can be immediately integrated and integration of unrelated signals can be prevented. Looming signals are a particular class of signals that are behaviorally relevant for animals and that occur in both the auditory and visual domain. These signals indicate the rapid approach of objects and provide highly salient warning cues about impending impact. We show here that multisensory integration of auditory and visual looming signals may be mediated by functional interactions between auditory cortex and the superior temporal sulcus, two areas involved in integrating behaviorally relevant auditory-visual signals. Audiovisual looming signals elicited increased gamma-band coherence between these areas, relative to unimodal or receding-motion signals. This suggests that the neocortex uses fast, flexible intercortical interactions to mediate multisensory integration.


European Journal of Neuroscience | 2010

Dynamic, rhythmic facial expressions and the superior temporal sulcus of macaque monkeys: implications for the evolution of audiovisual speech.

Asif A. Ghazanfar; Chandramouli Chandrasekaran; Ryan J. Morrill

Audiovisual speech has a stereotypical rhythm that is between 2 and 7 Hz, and deviations from this frequency range in either modality reduce intelligibility. Understanding how audiovisual speech evolved requires investigating the origins of this rhythmic structure. One hypothesis is that the rhythm of speech evolved through the modification of some pre-existing cyclical jaw movements in a primate ancestor. We tested this hypothesis by investigating the temporal structure of lipsmacks and teeth-grinds of macaque monkeys and the neural responses to these facial gestures in the superior temporal sulcus (STS), a region implicated in the processing of audiovisual communication signals in both humans and monkeys. We found that both lipsmacks and teeth-grinds have consistent but distinct peak frequencies and that both fall well within the 2–7 Hz range of mouth movements associated with audiovisual speech. Single neurons and local field potentials of the STS of monkeys readily responded to such facial rhythms, but also responded just as robustly to yawns, a nonrhythmic but dynamic facial expression. All expressions elicited enhanced power in the delta (0–3 Hz), theta (3–8 Hz), alpha (8–14 Hz) and gamma (>60 Hz) frequency ranges, and suppressed power in the beta (20–40 Hz) range. Thus, STS is sensitive to, but not selective for, rhythmic facial gestures. Taken together, these data provide support for the idea that audiovisual speech evolved (at least in part) from the rhythmic facial gestures of an ancestral primate and that the STS was sensitive to and thus ‘prepared’ for the advent of rhythmic audiovisual communication.


The Journal of Neuroscience | 2010

The Influence of Natural Scene Dynamics on Auditory Cortical Activity

Chandramouli Chandrasekaran; Hjalmar K. Turesson; Charles H. Brown; Asif A. Ghazanfar

The efficient cortical encoding of natural scenes is essential for guiding adaptive behavior. Because natural scenes and network activity in cortical circuits share similar temporal scales, it is necessary to understand how the temporal structure of natural scenes influences network dynamics in cortical circuits and spiking output. We examined the relationship between the structure of natural acoustic scenes and its impact on network activity [as indexed by local field potentials (LFPs)] and spiking responses in macaque primary auditory cortex. Natural auditory scenes led to a change in the power of the LFP in the 2–9 and 16–30 Hz frequency ranges relative to the ongoing activity. In contrast, ongoing rhythmic activity in the 9–16 Hz range was essentially unaffected by the natural scene. Phase coherence analysis showed that scene-related changes in LFP power were at least partially attributable to the locking of the LFP and spiking activity to the temporal structure in the scene, with locking extending up to 25 Hz for some scenes and cortical sites. Consistent with distributed place and temporal coding schemes, a key predictor of phase locking and power changes was the overlap between the spectral selectivity of a cortical site and the spectral structure of the scene. Finally, during the processing of natural acoustic scenes, spikes were locked to LFP phase at frequencies up to 30 Hz. These results are consistent with the idea that the cortical representation of natural scenes emerges from an interaction between network activity and stimulus dynamics.
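
Spike-field locking of the kind reported above is often summarized with a phase-locking measure: band-pass the LFP, take its instantaneous phase, and average the unit phase vectors at spike times. The sketch below illustrates that generic computation on synthetic data; the frequency band and parameters are assumptions, not the paper's pipeline.

```python
# Sketch of spike-LFP phase locking on synthetic data (assumed band and parameters).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 5, 1 / fs)
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)

# Spikes preferentially emitted near the trough of the 8 Hz rhythm.
spike_prob = 0.02 * (1 - np.sin(2 * np.pi * 8 * t))
spikes = np.random.rand(t.size) < spike_prob

# Band-pass 5-12 Hz, then instantaneous phase via the Hilbert transform.
b, a = butter(3, [5 / (fs / 2), 12 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Phase-locking value: length of the mean unit phase vector at spike times.
plv = np.abs(np.mean(np.exp(1j * phase[spikes])))
print(f"{spikes.sum()} spikes, phase-locking value = {plv:.2f}")
```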


Proceedings of the National Academy of Sciences of the United States of America | 2013

Dynamic faces speed up the onset of auditory cortical spiking responses during vocal detection

Chandramouli Chandrasekaran; Luis Lemus; Asif A. Ghazanfar

Significance: We combine facial motion with voices to help us hear better, but the role that low-level sensory areas such as the auditory cortex may play in this process is unclear. We combined a vocalization detection task with auditory cortical physiology in monkeys to bridge this epistemic gap. Surprisingly, and contrary to previous assumptions and hypotheses, changes in firing rate had no clear relationship to the detection advantage that dynamic faces provided when listening for vocalizations. Instead, dynamic faces uniformly sped up the onset of spiking activity in the auditory cortex, and this faster onset partially explains the behavioral benefits of combining faces and voices.

How low-level sensory areas help mediate the detection and discrimination advantages of integrating faces and voices is the subject of intense debate. To gain insights, we investigated the role of the auditory cortex in face/voice integration in macaque monkeys performing a vocal-detection task. Behaviorally, subjects were slower to detect vocalizations as the signal-to-noise ratio decreased, but seeing mouth movements associated with vocalizations sped up detection. Paralleling this behavioral relationship, as the signal-to-noise ratio decreased, the onset of spiking responses was delayed and response magnitudes were decreased. However, when mouth motion accompanied the vocalization, these responses were uniformly faster. Conversely, and at odds with previous assumptions regarding the neural basis of face/voice integration, changes in the magnitude of neural responses were not related consistently to audiovisual behavior. Taken together, our data reveal that facilitation of spike latency is a means by which the auditory cortex partially mediates the reaction time benefits of combining faces and voices.
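
The central quantity in this study is the onset latency of spiking responses. A common generic way to estimate onset latency is to find the first post-stimulus time at which a smoothed, trial-averaged firing rate exceeds a baseline-derived threshold; the sketch below illustrates that idea on synthetic spike counts, with the trial count, smoothing window, and threshold chosen for illustration rather than taken from the paper.

```python
# Onset-latency sketch on a synthetic PSTH (trial count, threshold, and
# smoothing window are illustrative assumptions, not values from the paper).
import numpy as np

bin_ms = 1.0
t = np.arange(-200, 400, bin_ms)                 # time relative to voice onset (ms)
rate = np.full(t.size, 5.0)                       # 5 spikes/s baseline
rate[t >= 80] = 40.0                              # response starting ~80 ms after onset

n_trials = 100
counts = np.random.poisson(rate * bin_ms / 1000.0, (n_trials, t.size))
psth = counts.mean(axis=0) * 1000.0 / bin_ms      # trial-averaged rate, spikes/s

# Light smoothing, then threshold at baseline mean + 3 SD of the smoothed trace.
smoothed = np.convolve(psth, np.ones(10) / 10, mode="same")
baseline = smoothed[t < 0]
threshold = baseline.mean() + 3 * baseline.std()

above = (t >= 0) & (smoothed > threshold)
latency = t[above][0] if above.any() else None
print(f"estimated onset latency: {latency} ms")
```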


Neuron | 2007

Paving the Way Forward: Integrating the Senses through Phase-Resetting of Cortical Oscillations

Asif A. Ghazanfar; Chandramouli Chandrasekaran

Most, if not all, of the neocortex is multisensory, but the mechanisms by which different cortical areas - association versus sensory, for instance - integrate multisensory inputs are not known. The study by Lakatos et al. reveals that, in the primary auditory cortex, the phase of neural oscillations is reset by somatosensory inputs, and subsequent auditory inputs are enhanced or suppressed, depending on their timing relative to the oscillatory cycle.
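
The mechanism summarized here, in which one input resets the phase of an ongoing oscillation so that a later input is enhanced or suppressed depending on when it arrives within the cycle, can be captured in a toy calculation. The sketch below is a didactic caricature, not a model of the Lakatos et al. data; the oscillation frequency and the cosine gain function are assumptions.

```python
# Toy phase-reset sketch (didactic only; frequency and gain function are assumptions).
import numpy as np

f = 8.0                                       # ongoing oscillation frequency (Hz)

def response_gain(t_input, t_reset):
    """Gain applied to an input arriving t_input seconds after a phase reset.

    The reset forces the oscillation to phase zero at t_reset; inputs arriving
    near the oscillation's peak are enhanced, near the trough suppressed.
    """
    phase = 2 * np.pi * f * (t_input - t_reset)
    return 1.0 + 0.5 * np.cos(phase)          # gain oscillates between 0.5 and 1.5

t_reset = 0.0                                  # somatosensory input resets phase at t = 0
for dt in (0.0625, 0.125):                     # auditory inputs half and one cycle later
    print(f"input at +{dt * 1000:.1f} ms -> gain {response_gain(dt, t_reset):.2f}")
```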


Psihologija | 2010

Attentional Networks and Biological Motion

Chandramouli Chandrasekaran; Lucy Turner; Heinrich H. Bülthoff; Ian M. Thornton

Our ability to see meaningful actions when presented with point-light traces of human movement is commonly referred to as the perception of biological motion. While traditional explanations have emphasized the spontaneous and automatic nature of this ability, more recent findings suggest that attention may play a larger role than is typically assumed. In two studies we show that the speed and accuracy of responding to point-light stimuli are highly correlated with the ability to control selective attention. In our first experiment we measured thresholds for determining the walking direction of a masked point-light figure, and performance on a range of attention-related tasks in the same set of observers. Mask-density thresholds for the direction discrimination task varied quite considerably from observer to observer, and this variation was highly correlated with performance on both Stroop and flanker interference tasks. Other components of attention, such as orienting, alerting and visual search efficiency, showed no such relationship. In a second experiment, we examined the relationship between the ability to determine the orientation of unmasked point-light actions and Stroop interference, again finding a strong correlation. Our results are consistent with previous research suggesting that biological motion processing may require attention, and specifically implicate networks of attention related to executive control and selection.


Experimental Neurology | 2017

The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces.

Daniel J. O'Shea; Eric Trautmann; Chandramouli Chandrasekaran; Sergey D. Stavisky; Jonathan C. Kao; Maneesh Sahani; Stephen I. Ryu; Karl Deisseroth; Krishna V. Shenoy

A central goal of neuroscience is to understand how populations of neurons coordinate and cooperate in order to give rise to perception, cognition, and action. Nonhuman primates (NHPs) are an attractive model with which to understand these mechanisms in humans, primarily due to the strong homology of their brains and the cognitively sophisticated behaviors they can be trained to perform. Using electrode recordings, the activity of one to a few hundred individual neurons may be measured electrically, which has enabled many scientific findings and the development of brain-machine interfaces. Despite these successes, electrophysiology samples sparsely from neural populations and provides little information about the genetic identity and spatial micro-organization of recorded neurons. These limitations have spurred the development of all-optical methods for neural circuit interrogation. Fluorescent calcium signals serve as a reporter of neuronal responses, and when combined with post-mortem optical clearing techniques such as CLARITY, provide dense recordings of neuronal populations, spatially organized and annotated with genetic and anatomical information. Here, we advocate that this methodology, which has been of tremendous utility in smaller animal models, can and should be developed for use with NHPs. We review here several of the key opportunities and challenges for calcium-based optical imaging in NHPs. We focus on motor neuroscience and brain-machine interface design as representative domains of opportunity within the larger field of NHP neuroscience.
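
The fluorescent calcium signals discussed above are conventionally reported as dF/F, the fluorescence change relative to a baseline estimate. The short sketch below shows one common way to compute it, using a low percentile of the trace as the baseline; the synthetic trace, frame rate, and percentile are assumptions rather than recommendations from the article.

```python
# Minimal dF/F sketch on a synthetic fluorescence trace (baseline percentile is an assumption).
import numpy as np

fs = 30.0                                      # assumed imaging frame rate (Hz)
t = np.arange(0, 60, 1 / fs)
f0_true = 100.0
trace = f0_true + 2 * np.random.randn(t.size)  # baseline fluorescence + noise
trace[(t > 20) & (t < 22)] += 50.0             # a calcium transient

f0 = np.percentile(trace, 10)                  # baseline estimate: 10th percentile
dff = (trace - f0) / f0                        # dF/F
print(f"peak dF/F = {dff.max():.2f}")
```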


Current Opinion in Neurobiology | 2017

Computational principles and models of multisensory integration

Chandramouli Chandrasekaran

Combining information from multiple senses creates robust percepts, speeds up responses, enhances learning, and improves detection, discrimination, and recognition. In this review, I discuss computational models and principles that provide insight into how this process of multisensory integration occurs at the behavioral and neural level. My initial focus is on drift-diffusion and Bayesian models that can predict behavior in multisensory contexts. I then highlight how recent neurophysiological and perturbation experiments provide evidence for a distributed redundant network for multisensory integration. I also emphasize studies which show that task-relevant variables in multisensory contexts are distributed in heterogeneous neural populations. Finally, I describe dimensionality reduction methods and recurrent neural network models that may help decipher heterogeneous neural populations involved in multisensory integration.
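
Of the model classes surveyed here, drift-diffusion models make the simplest quantitative prediction: when evidence from two modalities combines into a larger drift rate, the accumulator reaches the bound sooner and responses speed up. The sketch below simulates that qualitative prediction; the drift rates, noise, bound, and non-decision time are arbitrary illustrative values, not parameters from any study cited in the review.

```python
# Drift-diffusion sketch: multisensory evidence (larger drift) reaches the bound
# sooner on average. All parameter values are illustrative assumptions.
import numpy as np

def simulate_rts(drift, bound=1.0, noise=1.0, dt=0.001, t_nd=0.3,
                 n_trials=1000, t_max=3.0, seed=0):
    """Mean response time (s) for a one-bound diffusion-to-bound process."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    paths = np.cumsum(steps, axis=1)
    crossed = paths >= bound
    # First crossing index per trial; trials that never cross are capped at t_max.
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_steps - 1)
    return np.mean(first * dt + t_nd)

drift_aud, drift_vis = 1.5, 1.0
print(f"auditory alone      : {simulate_rts(drift_aud):.3f} s")
print(f"visual alone        : {simulate_rts(drift_vis):.3f} s")
print(f"audiovisual (summed): {simulate_rts(drift_aud + drift_vis):.3f} s")
```

Bayesian cue-combination models make the complementary prediction about precision: the combined estimate weights each modality by its reliability, so the multisensory variance is never larger than that of the best unisensory estimate.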

Collaboration


Dive into Chandramouli Chandrasekaran's collaborations.

Top Co-Authors

Stephen I. Ryu

Palo Alto Medical Foundation


William T. Newsome

Howard Hughes Medical Institute
