Publication


Featured research published by Srikantan S. Nagarajan.


Science | 2006

High Gamma Power Is Phase-Locked to Theta Oscillations in Human Neocortex

Ryan T. Canolty; Erik Edwards; Sarang S. Dalal; Maryam Soltani; Srikantan S. Nagarajan; Heidi E. Kirsch; Mitchel S. Berger; Nicholas M. Barbaro; Robert T. Knight

We observed robust coupling between the high- and low-frequency bands of ongoing electrical activity in the human brain. In particular, the phase of the low-frequency theta (4 to 8 hertz) rhythm modulates power in the high gamma (80 to 150 hertz) band of the electrocorticogram, with stronger modulation occurring at higher theta amplitudes. Furthermore, different behavioral tasks evoke distinct patterns of theta/high gamma coupling across the cortex. The results indicate that transient coupling between low- and high-frequency brain rhythms coordinates activity in distributed cortical areas, providing a mechanism for effective communication during cognitive processing in humans.
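
As an illustration of the kind of coupling measure described here, the sketch below computes a mean-vector modulation index between theta phase and high-gamma amplitude using Hilbert transforms. It is a generic phase-amplitude coupling recipe, not the paper's exact pipeline; the band edges follow the abstract, while the sampling rate and synthetic test signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def theta_gamma_coupling(x, fs):
    """Mean-vector modulation index: |mean(A_gamma * exp(i * phi_theta))|."""
    phi_theta = np.angle(hilbert(bandpass(x, 4, 8, fs)))    # theta (4-8 Hz) phase
    a_gamma = np.abs(hilbert(bandpass(x, 80, 150, fs)))     # high gamma (80-150 Hz) envelope
    return np.abs(np.mean(a_gamma * np.exp(1j * phi_theta)))

# Synthetic check (assumed 1 kHz sampling): gamma amplitude rides on theta phase.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
x = theta + 0.2 * (1 + theta) * np.sin(2 * np.pi * 110 * t) + 0.1 * np.random.randn(t.size)
print(theta_gamma_coupling(x, fs))
```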


Science | 1996

Language Comprehension in Language-Learning Impaired Children Improved with Acoustically Modified Speech

Paula Tallal; Steve Miller; Gail Bedi; Gary Byma; Xiaoqin Wang; Srikantan S. Nagarajan; Christoph E. Schreiner; William M. Jenkins; Michael M. Merzenich

A speech processing algorithm was developed to create more salient versions of the rapidly changing elements in the acoustic waveform of speech that have been shown to be deficiently processed by language-learning impaired (LLI) children. LLI children received extensive daily training, over a 4-week period, with listening exercises in which all speech was translated into this synthetic form. They also received daily training with computer “games” designed to adaptively drive improvements in temporal processing thresholds. Significant improvements in speech discrimination and language comprehension abilities were demonstrated in two independent groups of LLI children.


NeuroImage | 2005

Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction

Kensuke Sekihara; Maneesh Sahani; Srikantan S. Nagarajan

This paper discusses the location bias and the spatial resolution in the reconstruction of a single dipole source by various spatial filtering techniques used for neuromagnetic imaging. We first analyze the location bias for several representative adaptive and non-adaptive spatial filters using their resolution kernels. This analysis theoretically validates previously reported empirical findings that standardized low-resolution electromagnetic tomography (sLORETA) has no location bias. We also find that the minimum-variance spatial filter does exhibit bias in the reconstructed location of a single source, but that this bias is eliminated by using the normalized lead field. We then focus on the comparison of sLORETA and the lead-field normalized minimum-variance spatial filter, and analyze the effect of noise on source location bias. We find that the signal-to-noise ratio (SNR) in the measurements determines whether the sLORETA reconstruction has source location bias, while the lead-field normalized minimum-variance spatial filter has no location bias even in the presence of noise. Finally, we compare the spatial resolution for sLORETA and the minimum-variance filter, and show that the minimum-variance filter attains much higher resolution than sLORETA does. The results of these analyses are validated by numerical experiments as well as by reconstructions based on two sets of evoked magnetic responses.
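
A minimal sketch of the kind of adaptive spatial filter analyzed here: unit-gain minimum-variance weights for a single source location, with an option to normalize the lead field first. The formula is the standard minimum-variance beamformer; the variable names and random test data are assumptions, not the authors' implementation.

```python
import numpy as np

def minimum_variance_weights(lead_field, cov, normalize_leadfield=False):
    """Minimum-variance spatial-filter weights w = C^-1 l / (l^T C^-1 l) for one
    source location with a fixed dipole orientation.

    lead_field : (n_sensors,) lead-field vector l
    cov        : (n_sensors, n_sensors) measurement covariance C
    normalize_leadfield : use l / ||l||, the lead-field normalized variant that
        the paper compares against sLORETA.
    """
    l = lead_field / np.linalg.norm(lead_field) if normalize_leadfield else lead_field
    c_inv_l = np.linalg.solve(cov, l)
    return c_inv_l / (l @ c_inv_l)

# Illustrative usage with placeholder data.
rng = np.random.default_rng(0)
n_sensors = 64
l = rng.standard_normal(n_sensors)
a = rng.standard_normal((n_sensors, n_sensors))
cov = a @ a.T + n_sensors * np.eye(n_sensors)
w = minimum_variance_weights(l, cov, normalize_leadfield=True)
# Source estimate for a measurement vector b: s_hat = w @ b
```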


Pediatric Research | 2011

Sensory Processing in Autism: A Review of Neurophysiologic Findings

Elysa J. Marco; Leighton B. Hinkley; Susanna S. Hill; Srikantan S. Nagarajan

Atypical sensory-based behaviors are a ubiquitous feature of autism spectrum disorders (ASDs). In this article, we review the neural underpinnings of sensory processing in autism by reviewing the literature on neurophysiological responses to auditory, tactile, and visual stimuli in autistic individuals. We review studies of unimodal sensory processing and multisensory integration that use a variety of neuroimaging techniques, including electroencephalography (EEG), magnetoencephalography (MEG), and functional MRI. We then explore the impact of covert and overt attention on sensory processing. With additional characterization, neurophysiologic profiles of sensory processing in ASD may serve as valuable biomarkers for diagnosis and monitoring of therapeutic interventions for autism and reveal potential strategies and target brain regions for therapeutic interventions.


Journal of Cognitive Neuroscience | 2002

Modulation of the Auditory Cortex during Speech: An MEG Study

John F. Houde; Srikantan S. Nagarajan; Kensuke Sekihara; Michael M. Merzenich

Several behavioral and brain imaging studies have demonstrated a significant interaction between speech perception and speech production. In this study, auditory cortical responses to speech were examined during self-production and feedback alteration. Magnetic field recordings were obtained from both hemispheres in subjects who spoke while hearing controlled acoustic versions of their speech feedback via earphones. These responses were compared to recordings made while subjects listened to a tape playback of their production. The amplitude of tape playback was adjusted to match the amplitude of self-produced speech. Recordings of evoked responses to both self-produced and tape-recorded speech were obtained free of movement-related artifacts. Responses to self-produced speech were weaker than were responses to tape-recorded speech. Responses to tones were also weaker during speech production, when compared with responses to tones recorded in the presence of speech from tape playback. However, responses evoked by gated noise stimuli did not differ for recordings made during self-produced speech versus recordings made during tape-recorded speech playback. These data suggest that during speech production, the auditory cortex (1) attenuates its sensitivity and (2) modulates its activity as a function of the expected acoustic feedback.


Proceedings of the National Academy of Sciences of the United States of America | 2001

Speech comprehension is correlated with temporal response patterns recorded from auditory cortex

Ehud Ahissar; Srikantan S. Nagarajan; Merav Ahissar; Athanassios Protopapas; Henry W. Mahncke; Michael M. Merzenich

Speech comprehension depends on the integrity of both the spectral content and temporal envelope of the speech signal. Although neural processing underlying spectral analysis has been intensively studied, less is known about the processing of temporal information. Most of the speech information conveyed by the temporal envelope is confined to frequencies below 16 Hz, frequencies that roughly match spontaneous and evoked modulation rates of primary auditory cortex neurons. To test the importance of cortical modulation rates for speech processing, we manipulated the frequency of the temporal envelope of speech sentences and tested the effect on both speech comprehension and cortical activity. Magnetoencephalographic signals from the auditory cortices of human subjects were recorded while they were performing a speech comprehension task. The test sentences used in this task were compressed in time. Speech comprehension was degraded when sentence stimuli were presented in more rapid (more compressed) forms. We found that the average comprehension level, at each compression, correlated with (i) the similarity between the frequencies of the temporal envelopes of the stimulus and the subjects' cortical activity (“stimulus-cortex frequency-matching”) and (ii) the phase-locking (PL) between the two temporal envelopes (“stimulus-cortex PL”). Of these two correlates, PL was significantly more indicative of single-trial success. Our results suggest that the match between the speech rate and the a priori modulation capacities of the auditory cortex is a prerequisite for comprehension. However, this is not sufficient: stimulus-cortex PL should be achieved during actual sentence presentation.
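
To make the phase-locking measure concrete, here is a minimal sketch of a phase-locking value between two temporal envelopes (stimulus and cortical response). It is a generic PLV computation assuming both envelopes are sampled on a common time base, not the authors' exact analysis.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(env_stimulus, env_cortex):
    """Phase-locking value between two temporal envelopes.

    Both inputs are real-valued envelopes on the same time base; the result is
    near 1 for a constant phase lag and approaches 0 for unrelated phases.
    """
    phi_s = np.angle(hilbert(env_stimulus))
    phi_c = np.angle(hilbert(env_cortex))
    return np.abs(np.mean(np.exp(1j * (phi_s - phi_c))))

# Example: a noisy, delayed copy of the stimulus envelope stays strongly locked.
fs = 200
t = np.arange(0, 5, 1 / fs)
env_stim = 1 + np.sin(2 * np.pi * 4 * t)
env_ctx = 1 + np.sin(2 * np.pi * 4 * t - 0.8) + 0.1 * np.random.randn(t.size)
print(phase_locking_value(env_stim, env_ctx))
```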


NeuroImage | 2011

Measuring functional connectivity using MEG: Methodology and comparison with fcMRI

Matthew J. Brookes; Joanne R. Hale; Johanna M. Zumer; Claire M. Stevenson; Gareth R. Barnes; Julia P. Owen; Peter G. Morris; Srikantan S. Nagarajan

Functional connectivity (FC) between brain regions is thought to be central to the way in which the brain processes information. Abnormal connectivity is thought to be implicated in a number of diseases. The ability to study FC is therefore a key goal for neuroimaging. Functional connectivity (fc) MRI has become a popular tool for making connectivity measurements, but the technique is limited by its indirect nature. A multimodal approach is therefore an attractive means to investigate the electrodynamic mechanisms underlying hemodynamic connectivity. In this paper, we investigate resting-state FC using fcMRI and magnetoencephalography (MEG). In fcMRI, we exploit the advantages afforded by ultra-high magnetic field. In MEG, we apply envelope correlation and coherence techniques to source-space projected MEG signals. We show that beamforming provides an excellent means to measure FC in source space using MEG data. However, care must be taken when interpreting these measurements since cross-talk between voxels in source space can potentially lead to spurious connectivity, and this must be taken into account in all studies of this type. We show good spatial agreement between FC measured independently using MEG and fcMRI; FC between sensorimotor cortices was observed using both modalities, with the best spatial agreement when MEG data are filtered into the β band. This finding helps to reduce the potential confounds associated with each modality alone: while it helps reduce the uncertainties in spatial patterns generated by MEG (brought about by the ill-posed inverse problem), the addition of an electrodynamic metric confirms the neural basis of the fcMRI measurements. Finally, we show that multiple MEG-based FC metrics offer the potential to move beyond what is possible using fcMRI, and investigate the nature of electrodynamic connectivity. Our results extend those from previous studies and add weight to the argument that neural oscillations are intimately related to functional connectivity and the BOLD response.
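
A minimal sketch of the envelope-correlation metric described above, applied to two source-space time courses: band-pass filter (beta band by default), take Hilbert amplitude envelopes, and correlate. The function and its defaults are illustrative assumptions, not the paper's full pipeline, which also handles beamformer projection and cross-talk between voxels.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope_correlation(x, y, fs, band=(13.0, 30.0)):
    """Correlation of Hilbert amplitude envelopes of two source-space time
    courses after band-pass filtering (beta band, 13-30 Hz, by default)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env_x = np.abs(hilbert(filtfilt(b, a, x)))
    env_y = np.abs(hilbert(filtfilt(b, a, y)))
    return np.corrcoef(env_x, env_y)[0, 1]
```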


Journal of Cognitive Neuroscience | 2001

Relations between the Neural Bases of Dynamic Auditory Processing and Phonological Processing: Evidence from fMRI

Russell A. Poldrack; Elise Temple; Athanassios Protopapas; Srikantan S. Nagarajan; Paula Tallal; Michael M. Merzenich; John D. E. Gabrieli

Functional magnetic resonance imaging (fMRI) was used to examine how the brain responds to temporal compression of speech and to determine whether the same regions are also involved in phonological processes associated with reading. Recorded speech was temporally compressed to varying degrees and presented in a sentence verification task. Regions involved in phonological processing were identified in a separate scan using a rhyming judgment task with pseudowords compared to a letter-case judgment task. The left inferior frontal and left superior temporal regions (Broca's and Wernicke's areas), along with the right inferior frontal cortex, demonstrated a convex response to speech compression; their activity increased as compression increased, but then decreased when speech became incomprehensible. Other regions exhibited linear increases in activity as compression increased, including the middle frontal gyri bilaterally. The auditory cortices exhibited compression-related decreases bilaterally, primarily reflecting a decrease in activity when speech became incomprehensible. Rhyme judgments engaged two left inferior frontal gyrus regions (pars triangularis and pars opercularis), of which only the pars triangularis region exhibited significant compression-related activity. These results directly demonstrate that a subset of the left inferior frontal regions involved in phonological processing is also sensitive to transient acoustic features within the range of comprehensible speech.


IEEE Journal of Selected Topics in Signal Processing | 2010

Iterative Reweighted l1 and l2 Methods for Finding Sparse Solutions

David P. Wipf; Srikantan S. Nagarajan

A variety of practical methods have recently been introduced for finding maximally sparse representations from overcomplete dictionaries, a central computational task in compressive sensing applications as well as numerous others. Many of the underlying algorithms rely on iterative reweighting schemes that produce more focal estimates as optimization progresses. Two such variants are iterative reweighted l1 and l2 minimization; however, some properties related to convergence and sparse estimation, as well as possible generalizations, are still not clearly understood or fully exploited. In this paper, we make the distinction between separable and non-separable iterative reweighting algorithms. The vast majority of existing methods are separable, meaning the weighting of a given coefficient at each iteration is only a function of that individual coefficient from the previous iteration (as opposed to dependency on all coefficients). We examine two such separable reweighting schemes: an l2 method from Chartrand and Yin (2008) and an l1 approach from Candes (2008), elaborating on convergence results and explicit connections between them. We then explore an interesting non-separable alternative that can be implemented via either l2 or l1 reweighting and maintains several desirable properties relevant to sparse recovery despite a highly non-convex underlying cost function. For example, in the context of canonical sparse estimation problems, we prove uniform superiority of this method over the minimum l1 solution in that, 1) it can never do worse when implemented with reweighted l1, and 2) for any dictionary and sparsity profile, there will always exist cases where it does better. These results challenge the prevailing reliance on strictly convex (and separable) penalty functions for finding sparse solutions. We then derive a new non-separable variant with similar properties that exhibits further performance improvements in empirical tests. Finally, we address natural extensions to group sparsity problems and non-negative sparse coding.
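
To make the separable reweighting idea concrete, the sketch below implements a generic iteratively reweighted least-squares (FOCUSS-style) update for the noiseless sparse-recovery problem. It is a standard textbook recipe under assumed parameter choices, not the non-separable method proposed in the paper.

```python
import numpy as np

def reweighted_l2(A, y, p=1.0, eps=1e-6, n_iter=50):
    """Separable iteratively reweighted l2 sparse recovery (FOCUSS-style).

    Each iteration solves a weighted minimum-norm problem subject to A x = y:
        x <- W A^T (A W A^T)^{-1} y,   W = diag(|x_prev|**(2 - p) + eps),
    which progressively concentrates energy on a few coefficients as p -> 0.
    """
    x = np.linalg.pinv(A) @ y                     # minimum l2-norm initialization
    for _ in range(n_iter):
        W = np.diag(np.abs(x) ** (2.0 - p) + eps)
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)
    return x

# Illustrative usage: recover a 3-sparse vector from 20 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.0, -2.0, 0.5]
x_hat = reweighted_l2(A, A @ x_true, p=0.5)
print(np.round(x_hat[[5, 17, 33]], 2))
```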


NeuroImage | 2009

A unified Bayesian framework for MEG/EEG source imaging

David P. Wipf; Srikantan S. Nagarajan

The ill-posed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicitly quantified using postulated prior distributions. However, the means by which these priors are chosen, as well as the estimation and inference procedures that are subsequently adopted to affect localization, have led to a daunting array of algorithms with seemingly very different properties and assumptions. From the vantage point of a simple Gaussian scale mixture model with flexible covariance components, this paper analyzes and extends several broad categories of Bayesian inference directly applicable to source localization including empirical Bayesian approaches, standard MAP estimation, and multiple variational Bayesian (VB) approximations. Theoretical properties related to convergence, global and local minima, and localization bias are analyzed and fast algorithms are derived that improve upon existing methods. This perspective leads to explicit connections between many established algorithms and suggests natural extensions for handling unknown dipole orientations, extended source configurations, correlated sources, temporal smoothness, and computational expediency. Specific imaging methods elucidated under this paradigm include the weighted minimum l2-norm, FOCUSS, minimum current estimation, VESTAL, sLORETA, restricted maximum likelihood, covariance component estimation, beamforming, variational Bayes, the Laplace approximation, and automatic relevance determination, as well as many others. Perhaps surprisingly, all of these methods can be formulated as particular cases of covariance component estimation using different concave regularization terms and optimization rules, making general theoretical analyses and algorithmic extensions/improvements particularly relevant.
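
As one concrete instance of the covariance-component view described above, the sketch below computes the posterior-mean source estimate under a fixed Gaussian source prior, which reduces to the weighted minimum l2-norm solution mentioned in the abstract. The symbols (L, Gamma, lam) and the test data are assumptions for illustration, not the paper's algorithms.

```python
import numpy as np

def gaussian_posterior_mean(L, y, Gamma, lam=1.0):
    """Posterior-mean source estimate for y = L x + noise, with x ~ N(0, Gamma)
    and i.i.d. sensor noise of variance lam:
        x_hat = Gamma L^T (L Gamma L^T + lam I)^{-1} y.
    Gamma = I gives the classical minimum l2-norm solution; other source
    covariances give the weighted variants surveyed in the paper."""
    n_sensors = L.shape[0]
    model_cov = L @ Gamma @ L.T + lam * np.eye(n_sensors)
    return Gamma @ L.T @ np.linalg.solve(model_cov, y)

# Illustrative usage with a random lead field and identity source covariance.
rng = np.random.default_rng(2)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))
x = np.zeros(n_sources)
x[[10, 120]] = [2.0, -1.5]
y = L @ x + 0.1 * rng.standard_normal(n_sensors)
x_hat = gaussian_posterior_mean(L, y, np.eye(n_sources), lam=0.01)
```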

Collaboration


Dive into Srikantan S. Nagarajan's collaborations.

Top Co-Authors

Kensuke Sekihara
Tokyo Metropolitan University

John F. Houde
University of California

Julia P. Owen
University of California

Susanne Honma
University of California