Publication


Featured research published by Rajesh P. N. Rao.


Nature Neuroscience | 1999

Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects

Rajesh P. N. Rao; Dana H. Ballard

We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the residual errors between the predictions and the actual lower-level activities. When exposed to natural images, a hierarchical network of model neurons implementing such a model developed simple-cell-like receptive fields. A subset of neurons responsible for carrying the residual errors showed end-stopping and other extra-classical receptive-field effects. These results suggest that, rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback, as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images.
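
The division of labor the abstract describes, feedback carrying predictions and feedforward connections carrying residual errors, can be made concrete with a small numerical sketch. The following is a minimal two-level toy version under simplifying assumptions (linear generative model, gradient updates, random data in place of natural images); it is not the authors' published implementation.

```python
import numpy as np

# Minimal two-level predictive coding sketch (an assumed simplification of
# the model in the abstract): a higher level holds a state vector r, the
# feedback prediction is U @ r, and the feedforward signal is the residual
# error. Both r and U descend the squared prediction error.
rng = np.random.default_rng(0)
n_input, n_state = 64, 16
U = rng.normal(scale=0.1, size=(n_input, n_state))   # feedback (generative) weights

def infer(x, U, steps=50, lr=0.05):
    """Settle the higher-level state r for one input patch x."""
    r = np.zeros(n_state)
    for _ in range(steps):
        error = x - U @ r          # feedforward residual (prediction error)
        r += lr * (U.T @ error)    # top-down state update driven by the error
    return r

def learn(patches, U, epochs=10, lr=0.01):
    """Hebbian-like weight update that reduces the residual error."""
    for _ in range(epochs):
        for x in patches:
            r = infer(x, U)
            error = x - U @ r
            U += lr * np.outer(error, r)   # strengthen weights that explain the error
    return U

patches = rng.normal(size=(100, n_input))  # stand-in for natural image patches
U = learn(patches, U)
print("mean residual:", np.mean([np.linalg.norm(x - U @ infer(x, U)) for x in patches[:10]]))
```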


The Journal of Neuroscience | 2007

Spectral changes in cortical surface potentials during motor movement

Kai J. Miller; Eric C. Leuthardt; Rajesh P. N. Rao; Nicholas R. Anderson; Daniel W. Moran; John W. Miller; Jeffrey G. Ojemann

In the first large study of its kind, we quantified changes in electrocorticographic signals associated with motor movement across 22 subjects with subdural electrode arrays placed for identification of seizure foci. Patients underwent a 5–7 d monitoring period with array placement, before seizure focus resection, and during this time they participated in the study. An interval-based motor-repetition task produced consistent and quantifiable spectral shifts that were mapped on a Talairach-standardized template cortex. Maps were created independently for a high-frequency band (HFB) (76–100 Hz) and a low-frequency band (LFB) (8–32 Hz) for several different movement modalities in each subject. The power in relevant electrodes consistently decreased in the LFB with movement, whereas the power in the HFB consistently increased. In addition, the HFB changes were more focal than the LFB changes. Sites of power changes corresponded to stereotactic locations in sensorimotor cortex and to the results of individual clinical electrical cortical mapping. Sensorimotor representation was found to be somatotopic, localized in stereotactic space to rolandic cortex, and typically followed the classic homunculus with limited extrarolandic representation.
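
The core measurement here is band power in two ranges, a low-frequency band (8–32 Hz) and a high-frequency band (76–100 Hz). A hedged sketch of that computation using SciPy's Welch spectral estimator follows; the sampling rate and the synthetic signal are placeholders, not the study's recording parameters.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                          # assumed sampling rate (Hz), a placeholder
rng = np.random.default_rng(1)
ecog = rng.normal(size=10 * fs)    # synthetic stand-in for one electrode's signal

f, psd = welch(ecog, fs=fs, nperseg=fs)   # power spectral density, 1 Hz resolution

def band_power(f, psd, lo, hi):
    """Sum the PSD over a frequency band."""
    mask = (f >= lo) & (f <= hi)
    return psd[mask].sum() * (f[1] - f[0])

lfb = band_power(f, psd, 8, 32)     # low-frequency band: decreases with movement
hfb = band_power(f, psd, 76, 100)   # high-frequency band: increases with movement
print(f"LFB power: {lfb:.4g}, HFB power: {hfb:.4g}")
```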


Journal of Neural Engineering | 2006

Towards adaptive classification for BCI

Pradeep Shenoy; Matthias Krauledat; Benjamin Blankertz; Rajesh P. N. Rao; Klaus-Robert Müller

Non-stationarities are ubiquitous in EEG signals. They are especially apparent in the use of EEG-based brain-computer interfaces (BCIs): (a) in the differences between the initial calibration measurement and the online operation of a BCI, or (b) as caused by changes in the subject's brain processes during an experiment (e.g. due to fatigue, change of task involvement, etc.). In this paper, we quantify for the first time such systematic evidence of statistical differences in data recorded during offline and online sessions. Furthermore, we propose novel techniques for investigating and visualizing data distributions, which are particularly useful for the analysis of (non-)stationarities. Our study shows that the brain signals used for control can change substantially from the offline calibration sessions to online control, and also within a single session. In addition to this general characterization of the signals, we propose several adaptive classification schemes and study their performance on data recorded during online experiments. An encouraging result of our study is that surprisingly simple adaptive methods in combination with an offline feature selection scheme can significantly increase BCI performance.
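
As an illustration of how "surprisingly simple adaptive methods" can look in practice, here is a hedged sketch of one common scheme, re-centering a linear classifier's bias as unlabeled online trials arrive. The class, the update rule, and the synthetic data are assumptions made for illustration; the paper's specific adaptation schemes may differ.

```python
import numpy as np

class AdaptiveLDA:
    """Two-class linear classifier whose bias tracks the online data.

    A minimal sketch of adaptive classification for non-stationary BCI
    data: w is fit offline and frozen; only the bias b is re-centred by
    an exponential moving average of the projected online samples.
    """

    def __init__(self, eta=0.05):
        self.eta = eta

    def fit(self, X, y):
        mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
        cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])
        self.w = np.linalg.solve(cov, mu1 - mu0)
        self.b = -0.5 * self.w @ (mu0 + mu1)
        return self

    def predict_adapt(self, x):
        score = self.w @ x + self.b
        # unsupervised bias update: pull the decision boundary toward
        # the running mean of the projected online data
        self.b -= self.eta * score
        return int(score > 0)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(1, 1, (100, 4))])
y = np.repeat([0, 1], 100)
clf = AdaptiveLDA().fit(X, y)
# simulate a session-to-session shift of the feature distribution
print(clf.predict_adapt(rng.normal(1.5, 1, 4) + 0.5))
```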


Journal of Neural Engineering | 2008

Control of a humanoid robot by a noninvasive brain-computer interface in humans

Christian J. Bell; Pradeep Shenoy; Rawichote Chalodhorn; Rajesh P. N. Rao

We describe a brain-computer interface for controlling a humanoid robot directly using brain signals obtained non-invasively from the scalp through electroencephalography (EEG). EEG has previously been used for tasks such as controlling a cursor and spelling a word, but it has been regarded as an unlikely candidate for more complex forms of control owing to its low signal-to-noise ratio. Here we show that by leveraging advances in robotics, an interface based on EEG can be used to command a partially autonomous humanoid robot to perform complex tasks such as walking to specific locations and picking up desired objects. Visual feedback from the robot's cameras allows the user to select arbitrary objects in the environment for pick-up and transport to chosen locations. Results from a study involving nine users indicate that a command for the robot can be selected from four possible choices in 5 s with 95% accuracy. Our results demonstrate that an EEG-based brain-computer interface can be used for sophisticated robotic interaction with the environment, involving not only navigation, as in previous applications, but also manipulation and transport of objects.
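
A hypothetical sketch of the selection step such an interface requires, choosing one of four commands by accumulating classifier evidence over repeated stimulus presentations. The command names, evidence model, and window length are invented for illustration and are not the paper's protocol.

```python
import numpy as np

COMMANDS = ["walk_left", "walk_right", "pick_up", "release"]  # hypothetical set

def select_command(evidence_stream, n_choices=4, window=10):
    """Accumulate per-command classifier scores and return the winner.

    A toy stand-in for EEG-based command selection: `evidence_stream`
    yields one score vector per stimulus presentation; after `window`
    presentations, the command with the highest summed evidence wins.
    """
    total = np.zeros(n_choices)
    for _, scores in zip(range(window), evidence_stream):
        total += scores
    return COMMANDS[int(np.argmax(total))]

rng = np.random.default_rng(3)
true_target = 2  # the user attends to "pick_up"

def stream():
    while True:
        s = rng.normal(0, 1, 4)
        s[true_target] += 0.8   # attended command evokes a stronger response
        yield s

print(select_command(stream()))
```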


Neural Computation | 1997

Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex

Rajesh P. N. Rao; Dana H. Ballard

The responses of visual cortical neurons during fixation tasks can be significantly modulated by stimuli from beyond the classical receptive field. Modulatory effects in neural responses have also been recently reported in a task where a monkey freely views a natural scene. In this article, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the minimum description length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict the current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction-learning scheme can be viewed as implementing a form of the expectation-maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field. Simulations of the model are provided that help explain the experimental observations regarding neural responses in both free viewing and fixating conditions.
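
The predict-then-correct structure attributed here to cortico-cortical loops is that of a Kalman filter. A minimal linear Kalman filter sketch follows to make the two signal flows explicit; it omits the MDL-derived learning rule and the hierarchy of the actual model, and all matrices are arbitrary illustrative choices.

```python
import numpy as np

def kalman_step(r, P, x, A, C, Q, R):
    """One predict-correct cycle: top-down prediction, bottom-up correction.

    r, P : current state estimate and its covariance
    x    : new bottom-up observation
    A, C : state transition and observation (generative) matrices
    Q, R : process and measurement noise covariances
    """
    # top-down prediction of the next state
    r_pred = A @ r
    P_pred = A @ P @ A.T + Q
    # bottom-up correction by the residual between input and prediction
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    r_new = r_pred + K @ (x - C @ r_pred)
    P_new = (np.eye(len(r)) - K @ C) @ P_pred
    return r_new, P_new

# tiny demo: a 2-D latent state observed through a 3-D measurement
rng = np.random.default_rng(4)
A, C = 0.9 * np.eye(2), rng.normal(size=(3, 2))
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(3)
r, P = np.zeros(2), np.eye(2)
s = np.zeros(2)
for _ in range(5):
    s = A @ s + rng.normal(scale=0.1, size=2)   # latent dynamics
    x = C @ s + rng.normal(scale=0.3, size=3)   # noisy observation
    r, P = kalman_step(r, P, x, A, C, Q, R)
print("state estimate:", r)
```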


Vision Research | 2002

Eye movements in iconic visual search

Rajesh P. N. Rao; Gregory J. Zelinsky; Mary Hayhoe; Dana H. Ballard

Visual cognition depends critically on the moment-to-moment orientation of gaze. To change the gaze to a new location in space, that location must be computed and used by the oculomotor system. One of the most common sources of information for this computation is the visual appearance of an object. A crucial question is: how is the appearance information contained in the photometric array converted into a target position? This paper proposes a model that accomplishes this calculation. The model uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion, with the target's largest-scale filter responses being compared first. Task-relevant target locations are represented as saliency maps, which are used to program eye movements. A central feature of the model is that it separates the targeting process, which changes gaze, from the decision process, which extracts information at or near the new gaze point to guide behavior. The model provides a detailed explanation for center-of-gravity saccades that have been observed in many previous experiments. In addition, the model's targeting performance has been compared with the eye movements of human subjects under identical conditions in natural visual search tasks. The results show good agreement both quantitatively (the search paths are strikingly similar) and qualitatively (the fixations of false targets are comparable).
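
The targeting computation, comparing the target's filter responses against the scene's responses to build a saliency map, can be sketched with a toy single-scale-pair filter bank. Gaussian-derivative filters at two scales stand in for the paper's multi-scale spatiochromatic ensemble, and the coarse-to-fine scheduling is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def feature_maps(img, sigmas=(1, 2)):
    """Stack of Gaussian-derivative responses (d/dx and d/dy per scale)."""
    maps = []
    for s in sigmas:
        maps.append(gaussian_filter(img, s, order=(0, 1)))  # d/dx
        maps.append(gaussian_filter(img, s, order=(1, 0)))  # d/dy
    return np.stack(maps, axis=-1)          # H x W x F

def saliency(scene, target_vec):
    """Saliency = similarity between the target's feature vector and each location."""
    F = feature_maps(scene)
    diff = F - target_vec                   # broadcast over H x W
    return -np.linalg.norm(diff, axis=-1)   # higher = more target-like

rng = np.random.default_rng(5)
scene = rng.normal(size=(64, 64))
ty, tx = 40, 20                              # true target location
target_vec = feature_maps(scene)[ty, tx]     # iconic target representation
sal = saliency(scene, target_vec)
print("saccade target:", np.unravel_index(np.argmax(sal), sal.shape))
```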


Proceedings of the National Academy of Sciences of the United States of America | 2010

Cortical activity during motor execution, motor imagery, and imagery-based online feedback

Kai J. Miller; Eberhard E. Fetz; Marcel den Nijs; Jeffrey G. Ojemann; Rajesh P. N. Rao

Imagery of motor movement plays an important role in learning of complex motor skills, from learning to serve in tennis to perfecting a pirouette in ballet. What and where are the neural substrates that underlie motor imagery-based learning? We measured electrocorticographic cortical surface potentials in eight human subjects during overt action and kinesthetic imagery of the same movement, focusing on power in “high frequency” (76–100 Hz) and “low frequency” (8–32 Hz) ranges. We quantitatively establish that the spatial distribution of local neuronal population activity during motor imagery mimics the spatial distribution of activity during actual motor movement. By comparing responses to electrocortical stimulation with imagery-induced cortical surface activity, we demonstrate the role of primary motor areas in movement imagery. The magnitude of imagery-induced cortical activity change was ∼25% of that associated with actual movement. However, when subjects learned to use this imagery to control a computer cursor in a simple feedback task, the imagery-induced activity change was significantly augmented, even exceeding that of overt movement.
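
The comparison underlying the "∼25%" figure amounts to relating per-electrode activation maps across conditions: spatial pattern similarity and relative magnitude. A sketch with synthetic numbers follows; the electrode count, effect size, and noise level are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)
n_electrodes = 48

# synthetic per-electrode high-frequency band power change (movement vs rest)
movement_map = rng.gamma(2.0, 1.0, n_electrodes) * (rng.random(n_electrodes) < 0.3)
# imagery: same spatial pattern, scaled down (placeholder ~25%), plus noise
imagery_map = 0.25 * movement_map + rng.normal(0, 0.05, n_electrodes)

# spatial similarity of the two activation maps
r = np.corrcoef(movement_map, imagery_map)[0, 1]
# magnitude of the imagery-induced change relative to overt movement
ratio = imagery_map.sum() / movement_map.sum()
print(f"spatial correlation: {r:.2f}, imagery/movement magnitude: {ratio:.2f}")
```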


Artificial Intelligence | 1995

An active vision architecture based on iconic representations

Rajesh P. N. Rao; Dana H. Ballard

Active vision systems have the capability of continuously interacting with the environment. The rapidly changing environment of such systems means that it is attractive to replace static representations with visual routines that compute information on demand. Such routines place a premium on image data structures that are easily computed and used. The purpose of this paper is to propose a general active vision architecture based on efficiently computable iconic representations. This architecture employs two primary visual routines, one for identifying the visual image near the fovea (object identification), and another for locating a stored prototype on the retina (object location). This design allows complex visual behaviors to be obtained by composing these two routines with different parameters. The iconic representations consist of high-dimensional feature vectors obtained from the responses of an ensemble of Gaussian derivative spatial filters at a number of orientations and scales. These representations are stored in two separate memories. One memory is indexed by image coordinates while the other is indexed by object coordinates. Object location matches a localized set of model features with image features at all possible retinal locations. Object identification matches a foveal set of image features with all possible model features. We present experimental results for a near real-time implementation of these routines on a pipeline image processor and suggest relatively simple strategies for tackling the problems of occlusions and scale variations. We also discuss two additional visual routines, one for top-down foveal targeting using log-polar sensors and another for looming detection, both of which are facilitated by the proposed architecture.
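
One way to picture the object-identification routine and its object-indexed memory: an iconic feature vector computed at the fovea is matched against stored prototype vectors. The sketch below uses a small Gaussian-derivative filter set and random patches as stand-ins for the paper's filter ensemble and stored object models.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iconic_vector(patch, sigmas=(1, 2, 4)):
    """High-dimensional iconic feature vector for one image patch:
    Gaussian-derivative responses at several scales, sampled at the centre.
    (A toy stand-in for the paper's oriented multi-scale filter ensemble.)"""
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    feats = []
    for s in sigmas:
        for order in ((0, 1), (1, 0), (0, 2), (2, 0)):
            feats.append(gaussian_filter(patch, s, order=order)[cy, cx])
    return np.array(feats)

# object-indexed memory: prototype vectors keyed by object identity
rng = np.random.default_rng(7)
memory = {name: iconic_vector(rng.normal(size=(32, 32)))
          for name in ("cup", "book", "phone")}

def identify(fovea_patch):
    """Object identification: nearest stored prototype to the foveal vector."""
    v = iconic_vector(fovea_patch)
    return min(memory, key=lambda k: np.linalg.norm(memory[k] - v))

probe = rng.normal(size=(32, 32))
print("identified as:", identify(probe))
```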


Frontiers in Computational Neuroscience | 2010

Decision Making Under Uncertainty: A Neural Model Based on Partially Observable Markov Decision Processes

Rajesh P. N. Rao

A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single “optimal” estimate of state but on the posterior distribution over states (the “belief” state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random dots motion discrimination task, model neurons representing belief exhibit responses similar to those of LIP neurons in primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random dots task. For tasks with a deadline, the model learns a decision making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize rewards.
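
The model's central quantity is the belief state, a posterior over hidden world states updated with each noisy observation, with an overt action triggered once the belief crosses a threshold. A sketch for a two-alternative random-dots-style trial follows; the evidence distribution, coherence, and threshold are illustrative assumptions, and the TD-learning component is omitted.

```python
import numpy as np

# Two hidden states (dots moving left or right); noisy momentary evidence.
rng = np.random.default_rng(8)
COHERENCE = 0.3                   # assumed signal strength, not from the paper

def likelihood(obs, state):
    """p(obs | state) for Gaussian momentary evidence centred at +/- coherence."""
    mean = COHERENCE if state == 1 else -COHERENCE
    return np.exp(-0.5 * (obs - mean) ** 2)

def run_trial(true_state=1, threshold=0.95, max_t=200):
    belief = np.array([0.5, 0.5])          # uniform prior over the two states
    for t in range(max_t):
        obs = rng.normal(COHERENCE if true_state else -COHERENCE, 1.0)
        # Bayesian belief update: prior (state fixed within a trial) times likelihood
        belief = belief * [likelihood(obs, 0), likelihood(obs, 1)]
        belief /= belief.sum()
        if belief.max() > threshold:       # commit to an overt action
            return int(np.argmax(belief)), t + 1
    return int(np.argmax(belief)), max_t   # deadline reached

choice, rt = run_trial()
print(f"choice: {choice}, decision time: {rt} samples")
```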


Neural Computation | 2004

Bayesian computation in recurrent neural circuits

Rajesh P. N. Rao

A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this article, we show that a network architecture commonly used to model the cerebral cortex can implement Bayesian inference for an arbitrary hidden Markov model. We illustrate the approach using an orientation discrimination task and a visual motion detection task. In the case of orientation discrimination, we show that the model network can infer the posterior distribution over orientations and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known random dots motion discrimination task, the model generates responses that mimic the activities of evidence-accumulating neurons in cortical areas LIP and FEF. The framework we introduce posits a new interpretation of cortical activities in terms of log posterior probabilities of stimuli occurring in the natural world.
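
The computation the network is claimed to implement is the forward recursion of hidden Markov model inference, with neural activity read as log posterior probabilities. Here is a minimal sketch of that recursion on an arbitrary three-state HMM invented for illustration; the proposed mapping onto network components is noted in the comments.

```python
import numpy as np

# Arbitrary 3-state hidden Markov model (illustrative, not from the paper)
T = np.array([[0.8, 0.1, 0.1],     # transition matrix p(s_t | s_{t-1})
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
E = np.array([[0.7, 0.2, 0.1],     # emission matrix p(obs | s)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

def forward_posteriors(observations, prior=None):
    """Forward recursion: posterior over hidden states after each observation.
    A recurrent network can realize the same computation, with recurrent
    weights playing the role of T and feedforward input the role of E."""
    p = np.ones(T.shape[0]) / T.shape[0] if prior is None else prior
    posteriors = []
    for obs in observations:
        p = E[:, obs] * (T.T @ p)   # predict via dynamics, weight by evidence
        p /= p.sum()                # normalize to a posterior distribution
        posteriors.append(p.copy())
    return np.array(posteriors)

post = forward_posteriors([0, 0, 1, 2, 2])
print("log posteriors (the model's proposed neural code):")
print(np.log(post).round(2))
```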

Collaboration


Dive into Rajesh P. N. Rao's collaboration.

Top Co-Authors

Dana H. Ballard (University of Texas at Austin)

Aaron P. Shon (University of Washington)

Pradeep Shenoy (University of Washington)

Reinhold Scherer (Graz University of Technology)

Terrence J. Sejnowski (Salk Institute for Biological Studies)