Publication


Featured research published by Rajeev D. S. Raizada.


Vision Research | 2000

Contrast-sensitive perceptual grouping and object-based attention in the laminar circuits of primary visual cortex.

Stephen Grossberg; Rajeev D. S. Raizada

Recent neurophysiological studies have shown that primary visual cortex, or V1, does more than passively process image features using the feedforward filters suggested by Hubel and Wiesel. It also uses horizontal interactions to group features preattentively into object representations, and feedback interactions to selectively attend to these groupings. All neocortical areas, including V1, are organized into layered circuits. We present a neural model showing how the layered circuits in areas V1 and V2 enable feedforward, horizontal, and feedback interactions to complete perceptual groupings over positions that do not receive contrastive visual inputs, even while attention can only modulate or prime positions that do not receive such inputs. Recent neurophysiological data about how grouping and attention occur and interact in V1 are simulated and explained, and testable predictions are made. These simulations show how attention can selectively propagate along an object grouping and protect it from competitive masking, and how contextual stimuli can enhance or suppress groupings in a contrast-sensitive manner.
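
The abstract's central contrast, that grouping can complete over positions lacking bottom-up input while attention can only modulate or prime, can be illustrated with a toy sketch. The code below is not the model's equations; the flank size, gain, and threshold are arbitrary illustrative choices.

```python
# Toy illustration (not the model's equations) of the two properties the
# abstract contrasts: a grouping cell can fire at a position with no bottom-up
# input if it receives collinear support from both sides ("bipole" completion),
# whereas attention alone cannot create activity and only modulates positions
# that are already driven.
import numpy as np

def grouping_layer(bottom_up, attention, flank=3, threshold=1.0):
    """1-D sketch: returns suprathreshold activity at each position."""
    n = len(bottom_up)
    out = np.zeros(n)
    for i in range(n):
        left = bottom_up[max(0, i - flank):i].sum()
        right = bottom_up[i + 1:i + 1 + flank].sum()
        bipole = (left > 0) and (right > 0)          # support on BOTH flanks
        drive = bottom_up[i] + (1.0 if bipole else 0.0)
        gain = 1.0 + 0.5 * attention[i]              # attention modulates only
        if drive > 0:                                 # attention alone: no drive
            out[i] = gain * drive
    return (out >= threshold) * out

bottom_up = np.array([1, 1, 0, 0, 1, 1, 0, 0, 0, 0], dtype=float)
attention = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=float)
print(grouping_layer(bottom_up, attention))
# Positions 2-3 are filled in by grouping despite receiving no input;
# positions 7-9 stay silent: attention cannot create activity on its own.
```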


ACM Special Interest Group on Data Communication | 2010

NeuroPhone: brain-mobile phone interface using a wireless EEG headset

Andrew T. Campbell; Tanzeem Choudhury; Shaohan Hu; Hong Lu; Matthew K. Mukerjee; Mashfiqui Rabbi; Rajeev D. S. Raizada

Neural signals are everywhere just like mobile phones. We propose to use neural signals to control mobile phones for hands-free, silent and effortless human-mobile interaction. Until recently, devices for detecting neural signals have been costly, bulky and fragile. We present the design, implementation and evaluation of the NeuroPhone system, which allows neural signals to drive mobile phone applications on the iPhone using cheap off-the-shelf wireless electroencephalography (EEG) headsets. We demonstrate a brain-controlled address book dialing app, which works on similar principles to P300-speller brain-computer interfaces: the phone flashes a sequence of photos of contacts from the address book and a P300 brain potential is elicited when the flashed photo matches the person whom the user wishes to dial. EEG signals from the headset are transmitted wirelessly to an iPhone, which natively runs a lightweight classifier to discriminate P300 signals from noise. When a person's contact photo triggers a P300, his/her phone number is automatically dialed. NeuroPhone breaks new ground as a brain-mobile phone interface for ubiquitous pervasive computing. We discuss the challenges in making our initial prototype more practical, robust, and reliable as part of our ongoing research.
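
As a rough illustration of the P300-speller principle the abstract describes (not the NeuroPhone classifier itself), one can score each flashed contact by its average EEG amplitude in a window around 300 ms after the flash and dial the contact with the strongest response; the sampling rate, window, and data shapes below are assumptions.

```python
# Minimal sketch (not the NeuroPhone implementation): score each flashed
# contact by the mean EEG amplitude in a window around 300 ms post-stimulus,
# where a P300 deflection is expected, and dial the highest-scoring contact.
import numpy as np

def pick_contact(epochs, labels, sfreq=128.0, window=(0.25, 0.45)):
    """epochs: array (n_flashes, n_samples) of single-channel EEG epochs,
    time-locked to each photo flash; labels: contact index for each flash."""
    start = int(window[0] * sfreq)
    stop = int(window[1] * sfreq)
    scores = {}
    for contact in np.unique(labels):
        sel = epochs[labels == contact]
        # Averaging across repeated flashes suppresses non-phase-locked noise.
        scores[contact] = sel.mean(axis=0)[start:stop].mean()
    return max(scores, key=scores.get)

# Example with synthetic data: contact 2 gets an added P300-like bump.
rng = np.random.default_rng(0)
sfreq, n_samples = 128.0, 77          # ~0.6 s epochs
labels = np.repeat(np.arange(4), 10)  # 4 contacts, 10 flashes each
epochs = rng.normal(0, 1.0, (40, n_samples))
t = np.arange(n_samples) / sfreq
epochs[labels == 2] += 2.0 * np.exp(-((t - 0.3) ** 2) / 0.005)
print(pick_contact(epochs, labels, sfreq))  # -> 2
```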


NeuroImage | 2008

Socioeconomic status predicts hemispheric specialisation of the left inferior frontal gyrus in young children.

Rajeev D. S. Raizada; Todd L. Richards; Andrew N. Meltzoff; Patricia K. Kuhl

Reading is a complex skill that is not mastered by all children. At the age of 5, on the cusp of prereading development, many factors combine to influence a child's future reading success, including neural and behavioural factors such as phonological awareness and the auditory processing of phonetic input, and environmental factors, such as socioeconomic status (SES). We investigated the interactions between these factors in 5-year-old children by administering a battery of standardised cognitive and linguistic tests, measuring SES with a standardised scale, and using fMRI to record neural activity during a behavioural task, rhyming, that is predictive of reading skills. Correlation tests were performed, and then corrected for multiple comparisons using the false discovery rate (FDR) procedure. It emerged that only one relationship linking neural with behavioural or environmental factors survived as significant after FDR correction: a correlation between SES and the degree of hemispheric specialisation in the left inferior frontal gyrus (IFG), a region which includes Broca's area. This neural-environmental link remained significant even after controlling for the children's scores on the standardised language and cognition tests. In order to investigate possible environmental influences on the left IFG further, grey and white matter volumes were calculated. Marginally significant correlations with SES were found, indicating that environmental effects may manifest themselves in the brain anatomically as well as functionally. Collectively, these findings suggest that the weaker language skills of low-SES children are related to reduced underlying neural specialisation, and that these neural problems go beyond what is revealed by behavioural tests alone.
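
A minimal sketch of the statistical procedure the abstract describes, correlating SES with several candidate measures and applying Benjamini-Hochberg FDR correction; the measure names and data are hypothetical placeholders, not the study's variables.

```python
# Minimal sketch (variable names hypothetical): correlate an environmental
# measure (SES) with several neural/behavioural measures across children,
# then control for multiple comparisons with the Benjamini-Hochberg FDR step.
import numpy as np
from scipy import stats

def fdr_significant(pvals, q=0.05):
    """Benjamini-Hochberg: return a boolean mask of p-values surviving FDR at level q."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    m = len(pvals)
    thresh = q * (np.arange(1, m + 1) / m)
    passed = ranked <= thresh
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        keep[order[: passed.nonzero()[0].max() + 1]] = True
    return keep

rng = np.random.default_rng(1)
ses = rng.normal(size=30)                     # one SES score per child
measures = {                                  # hypothetical measures
    "ifg_lateralisation": ses * 0.6 + rng.normal(size=30),
    "rhyme_accuracy": rng.normal(size=30),
    "auditory_score": rng.normal(size=30),
}
pvals = [stats.pearsonr(ses, v)[1] for v in measures.values()]
print(dict(zip(measures, fdr_significant(pvals))))
```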


Frontiers in Human Neuroscience | 2010

Effects of socioeconomic status on brain development, and how cognitive neuroscience may contribute to leveling the playing field

Rajeev D. S. Raizada; Mark M. Kishiyama

The study of socioeconomic status (SES) and the brain finds itself in a circumstance unusual for Cognitive Neuroscience: large numbers of questions with both practical and scientific importance exist, but they are currently under-researched and ripe for investigation. This review aims to highlight these questions, to outline their potential significance, and to suggest routes by which they might be approached. Although remarkably few neural studies have been carried out so far, there exists a large literature of previous behavioural work. This behavioural research provides an invaluable guide for future neuroimaging work, but also poses an important challenge for it: how can we ensure that the neural data contributes predictive or diagnostic power over and above what can be derived from behaviour alone? We discuss some of the open mechanistic questions which Cognitive Neuroscience may have the power to illuminate, spanning areas including language, numerical cognition, stress, memory, and social influences on learning. These questions have obvious practical and societal significance, but they also bear directly on a set of longstanding questions in basic science: what are the environmental and neural factors which affect the acquisition and retention of declarative and nondeclarative skills? Perhaps the best opportunity for practical and theoretical interests to converge is in the study of interventions. Many interventions aimed at improving the cognitive development of low SES children are currently underway, but almost all are operating without either input from, or study by, the Cognitive Neuroscience community. Given that longitudinal intervention studies are very hard to set up, but can, with proper designs, be ideal tests of causal mechanisms, this area promises exciting opportunities for future research.


Visual Cognition | 2001

Context-Sensitive Binding by the Laminar Circuits of V1 and V2: A Unified Model of Perceptual Grouping, Attention, and Orientation Contrast

Rajeev D. S. Raizada; Stephen Grossberg

A detailed neural model is presented of how the laminar circuits of visual cortical areas V1 and V2 implement context-sensitive binding processes such as perceptual grouping and attention. The model proposes how specific laminar circuits allow the responses of visual cortical neurons to be determined not only by the stimuli within their classical receptive fields, but also to be strongly influenced by stimuli in the extra-classical surround. This context-sensitive visual processing can greatly enhance the analysis of visual scenes, especially those containing targets that are low contrast, partially occluded, or crowded by distractors. We show how interactions of feedforward, feedback, and horizontal circuitry can implement several types of contextual processing simultaneously, using shared laminar circuits. In particular, we present computer simulations that suggest how top-down attention and preattentive perceptual grouping, two processes that are fundamental for visual binding, can interact, with attentional enhancement selectively propagating along groupings of both real and illusory contours, thereby showing how attention can selectively enhance object representations. These simulations also illustrate how attention may have a stronger facilitatory effect on low contrast than on high contrast stimuli, and how pop-out from orientation contrast may occur. The specific functional roles which the model proposes for the cortical layers allow several testable neurophysiological predictions to be made. The results presented here simulate only the boundary grouping system of adult cortical architecture. However, we also discuss how this model contributes to a larger neural theory of vision that suggests how intracortical and intercortical feedback help to stabilize development and learning within these cortical circuits. Although feedback plays a key role, fast feedforward processing is possible in response to unambiguous information. Model circuits are capable of synchronizing quickly, but context-sensitive persistence of previous events can influence how synchrony develops. Although these results focus on how the interblob cortical processing stream controls boundary grouping and attention, related modelling of the blob cortical processing stream suggests how visible surfaces are formed, and modelling of the motion stream suggests how transient responses to scenic changes can control long-range apparent motion and also attract spatial attention.


Neuron | 2007

Selective Amplification of Stimulus Differences during Categorical Processing of Speech

Rajeev D. S. Raizada; Russell A. Poldrack

The perceptual stimuli reaching the brain are constantly changing: some of these changes are treated as invariances and are suppressed, whereas others are selectively amplified, giving emphasis to the distinctions that matter most. The starkest form of such amplification is categorical perception. In speech, for example, a continuum of phonetic stimuli gets carved into perceptually distinct categories. We used fMRI to measure the degree to which this process of selective amplification takes place. The area showing the most categorical processing was the left supramarginal gyrus: stimuli from different phonetic categories, when presented together in a contrasting pair, were neurally amplified more than two-fold. Low-level auditory cortical areas, however, showed comparatively little amplification of changes that crossed category boundaries. Selective amplification serves to emphasize key stimulus differences, thereby shaping perceptual categories. The approach presented here provides a quantitative way to measure the degree to which such processing is taking place.
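
One hedged way to quantify the kind of "selective amplification" the abstract describes (not necessarily the paper's exact analysis) is to compare the size of the neural pattern difference for a between-category stimulus pair with that for a within-category pair spanning the same physical step; the pattern sizes below are synthetic.

```python
# Minimal sketch: express "selective amplification" as the ratio of the neural
# pattern difference across a category boundary to the difference within a
# category, for pairs separated by equal physical steps along the continuum.
import numpy as np

def amplification_index(pattern_a, pattern_b, pattern_c, pattern_d):
    """a,b = within-category pair; c,d = between-category pair (1-D voxel vectors)."""
    within = np.linalg.norm(pattern_a - pattern_b)
    between = np.linalg.norm(pattern_c - pattern_d)
    return between / within   # > 1 means the category boundary is amplified

rng = np.random.default_rng(2)
base = rng.normal(size=200)                               # 200-voxel pattern
within_pair = (base, base + rng.normal(0, 0.1, 200))
between_pair = (base, base + rng.normal(0, 0.1, 200) + 0.3)  # boundary shift
print(amplification_index(*within_pair, *between_pair))
```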


Journal of Cognitive Neuroscience | 2012

What makes different people's representations alike: Neural similarity space solves the problem of across-subject fMRI decoding

Rajeev D. S. Raizada; Andrew C. Connolly

A central goal in neuroscience is to interpret neural activation and, moreover, to do so in a way that captures universal principles by generalizing across individuals. Recent research in multivoxel pattern-based fMRI analysis has led to considerable success at decoding within individual subjects. However, the goal of being able to decode across subjects is still challenging: It has remained unclear what population-level regularities of neural representation there might be. Here, we present a novel and highly accurate solution to this problem, which decodes across subjects between eight different stimulus conditions. The key to finding this solution was questioning the seemingly obvious idea that neural decoding should work directly on neural activation patterns. On the contrary, to decode across subjects, it is beneficial to abstract away from subject-specific patterns of neural activity and, instead, to operate on the similarity relations between those patterns: Our new approach performs decoding purely within similarity space. These results demonstrate a hitherto unknown population-level regularity in neural representation and also reveal a striking convergence between our empirical findings in fMRI and discussions in the philosophy of mind addressing the problem of conceptual similarity across neural diversity.
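
A minimal sketch of decoding in similarity space, under assumed data shapes rather than the paper's pipeline: each condition is re-represented by its pattern correlations with all conditions within a subject, and a classifier is trained on those similarity vectors across subjects.

```python
# Minimal sketch of across-subject decoding in similarity space: raw voxel
# patterns are not comparable across subjects, but each condition's vector of
# correlations with all conditions (its row of the similarity matrix) is.
import numpy as np
from sklearn.linear_model import LogisticRegression

def to_similarity_space(patterns):
    """patterns: (n_conditions, n_voxels) for one subject ->
    (n_conditions, n_conditions) matrix of pairwise pattern correlations."""
    return np.corrcoef(patterns)

rng = np.random.default_rng(3)
n_cond, n_vox = 8, 500
# Simulate subjects sharing similarity structure but not voxel space:
shared = rng.normal(size=(n_cond, 20))
subjects = [shared @ rng.normal(size=(20, n_vox)) for _ in range(6)]

X = np.vstack([to_similarity_space(s) for s in subjects])   # rows: conditions
y = np.tile(np.arange(n_cond), len(subjects))

clf = LogisticRegression(max_iter=1000).fit(X[:-n_cond], y[:-n_cond])
print((clf.predict(X[-n_cond:]) == y[-n_cond:]).mean())     # held-out subject
```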


Cerebral Cortex | 2016

Predicting Neural Activity Patterns Associated with Sentences Using a Neurobiologically Motivated Model of Semantic Representation

Andrew J. Anderson; Jeffrey R. Binder; Leonardo Fernandino; Colin Humphries; Lisa L. Conant; Mario Aguilar; Xixi Wang; Donias Doko; Rajeev D. S. Raizada

We introduce an approach that predicts neural representations of word meanings contained in sentences and then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences.
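
A minimal sketch of the word-to-sentence prediction step the abstract outlines, with illustrative attribute dimensions and synthetic data: fit a regression (ridge here, as one plausible choice) from semantic attributes to voxels, synthesize word patterns, and average them to predict a sentence.

```python
# Minimal sketch under stated assumptions (attribute counts and data are
# illustrative): learn a linear map from word-level semantic attributes to
# voxel activation, synthesize patterns for each word of a new sentence, and
# average them to predict the sentence-level pattern.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
n_words, n_attr, n_vox = 100, 65, 300
word_attrs = rng.normal(size=(n_words, n_attr))       # semantic model features
true_map = rng.normal(size=(n_attr, n_vox))
word_patterns = word_attrs @ true_map + rng.normal(0, 0.5, (n_words, n_vox))

# Estimate attribute -> voxel weights by (regularised) multiple regression.
model = Ridge(alpha=1.0).fit(word_attrs, word_patterns)

def predict_sentence(attr_vectors):
    """Average the synthesized word patterns to predict a sentence pattern."""
    return model.predict(np.asarray(attr_vectors)).mean(axis=0)

new_sentence_attrs = rng.normal(size=(4, n_attr))      # a 4-word sentence
print(predict_sentence(new_sentence_attrs).shape)      # -> (300,)
```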


NeuroImage | 2016

Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities.

Andrew J. Anderson; Benjamin D. Zinszer; Rajeev D. S. Raizada

Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.
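
A minimal sketch of similarity-based encoding as the abstract describes it: the predicted pattern for a new stimulus is a weighted combination of the training stimuli's observed patterns, with weights taken from the stimulus model's similarities and no fitted parameters. The cosine similarity and weight normalisation below are assumptions, not necessarily the paper's choices.

```python
# Minimal sketch of representational similarity encoding: synthesize a voxel
# pattern for a new stimulus by weighting the training stimuli's observed
# patterns with the new stimulus's model similarities. No regression fitting.
import numpy as np

def similarity_encode(train_model_vecs, train_patterns, new_model_vec):
    """Predict a voxel pattern for one new stimulus from model similarities."""
    # Cosine similarity between the new stimulus and each training stimulus.
    sims = train_model_vecs @ new_model_vec
    sims /= (np.linalg.norm(train_model_vecs, axis=1)
             * np.linalg.norm(new_model_vec) + 1e-12)
    weights = sims / sims.sum()
    return weights @ train_patterns

rng = np.random.default_rng(5)
train_model_vecs = rng.random((20, 65))        # semantic vectors, 20 words
train_patterns = rng.normal(size=(20, 300))    # observed fMRI patterns
new_vec = rng.random(65)
print(similarity_encode(train_model_vecs, train_patterns, new_vec).shape)
```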


International Conference on Machine Learning | 2011

Population codes representing musical timbre for high-level fMRI categorization of music genres

Michael A. Casey; Jessica Thompson; Olivia Kang; Rajeev D. S. Raizada; Thalia Wheatley

We present experimental evidence in support of distributed neural codes for timbre that are implicated in discrimination of musical styles. We used functional magnetic resonance imaging (fMRI) in humans and multivariate pattern analysis (MVPA) to identify activation patterns that encode the perception of rich music audio stimuli from five different musical styles. We show that musical styles can be automatically classified from population codes in bilateral superior temporal sulcus (STS). To investigate the possible link between the acoustic features of the auditory stimuli and neural population codes in STS, we conducted a representational similarity analysis and a multivariate regression-retrieval task. We found that the similarity structure of timbral features of our stimuli resembled the similarity structure of the STS more than any other type of acoustic feature. We also found that a regression model trained on timbral features outperformed models trained on other types of audio features. Our results show that human brain responses to complex, natural music can be differentiated by timbral audio features, emphasizing the importance of timbre in auditory perception.
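
A minimal sketch of the representational similarity comparison the abstract mentions, using synthetic placeholder data: build one dissimilarity matrix from timbral audio features and one from STS activation patterns, then correlate their condensed upper triangles.

```python
# Minimal sketch of the RSA step: does the similarity structure of timbral
# features resemble the similarity structure of STS activation patterns?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_clips = 25
timbre_features = rng.normal(size=(n_clips, 40))     # e.g. MFCC-like features
sts_patterns = timbre_features @ rng.normal(size=(40, 500)) \
    + rng.normal(0, 5.0, (n_clips, 500))             # noisy neural patterns

feature_rdm = pdist(timbre_features, metric="correlation")
neural_rdm = pdist(sts_patterns, metric="correlation")
rho, p = spearmanr(feature_rdm, neural_rdm)
print(f"feature-vs-neural RDM correlation: rho={rho:.2f}, p={p:.3g}")
```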

Collaboration


Dive into Rajeev D. S. Raizada's collaborations.

Top Co-Authors

Benjamin Zinszer
Pennsylvania State University

Xixi Wang
University of Rochester

Feng Lin
University of Rochester