Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pavan Ramkumar is active.

Publication


Featured research published by Pavan Ramkumar.


Nature Communications | 2016

Chunking as the result of an efficiency computation trade-off

Pavan Ramkumar; Daniel E. Acuna; Max Berniker; Scott T. Grafton; Robert S. Turner; Konrad P. Körding

How to move efficiently is an optimal control problem, whose computational complexity grows exponentially with the horizon of the planned trajectory. Breaking a compound movement into a series of chunks, each planned over a shorter horizon, can thus reduce the overall computational complexity and associated costs while limiting the achievable efficiency. This trade-off suggests a cost-effective learning strategy: to learn new movements, we should start with many short chunks (to limit the cost of computation). As practice reduces the impediments to more complex computation, the chunking structure should evolve to allow progressively more efficient movements (to maximize efficiency). Here we show that monkeys learning a reaching sequence over an extended period of time adopt this strategy by performing movements that can be described as locally optimal trajectories. Chunking can thus be understood as a cost-effective strategy for producing and learning efficient movements.
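The opening trade-off can be made concrete with a toy cost model (a sketch with assumed numbers, not taken from the paper): if planning over horizon h costs on the order of b**h for some branching factor b, then splitting a horizon-12 movement into chunks sharply reduces total computation, at the price of only locally optimal movements.

```python
# Toy illustration: planning cost grows exponentially with horizon, so
# n chunks of horizon h/n cost far less than one plan of horizon h.
# The cost model and numbers are illustrative assumptions only.
def planning_cost(horizon, n_chunks, b=2.0):
    chunk_horizon = horizon / n_chunks
    return n_chunks * b ** chunk_horizon

for n in (1, 2, 3, 4, 6):
    print(f"{n} chunk(s): cost {planning_cost(12, n):g}")
# costs fall from 4096 (one chunk) to 24 (six chunks)
```

As practice makes longer-horizon computation affordable, merging chunks (smaller n) buys back the efficiency lost at chunk boundaries, which is the learning strategy the paper describes.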


PLOS ONE | 2016

Premotor and Motor Cortices Encode Reward.

Pavan Ramkumar; Brian M. Dekleva; Sam Cooler; Lee E. Miller; Konrad P. Körding

Rewards associated with actions are critical for motivation and learning about the consequences of one’s actions on the world. The motor cortices are involved in planning and executing movements, but it is unclear whether they encode reward over and above limb kinematics and dynamics. Here, we report a categorical reward signal in dorsal premotor (PMd) and primary motor (M1) neurons that corresponds to an increase in firing rates when a trial was not rewarded regardless of whether or not a reward was expected. We show that this signal is unrelated to error magnitude, reward prediction error, or other task confounds such as reward consumption, return reach plan, or kinematic differences across rewarded and unrewarded trials. The availability of reward information in motor cortex is crucial for theories of reward-based learning and motivational influences on actions.


Journal of Neurophysiology | 2016

Feature-based attention and spatial selection in frontal eye fields during natural scene search

Pavan Ramkumar; Patrick N. Lawlor; Joshua I. Glaser; Daniel K. Wood; Adam N. Phillips; Mark A. Segraves; Konrad P. Körding

When we search for visual objects, the features of those objects bias our attention across the visual landscape (feature-based attention). The brain uses these top-down cues to select eye movement targets (spatial selection). The frontal eye field (FEF) is a prefrontal brain region implicated in selecting eye movements and is thought to reflect feature-based attention and spatial selection. Here, we study how FEF facilitates attention and selection in complex natural scenes. We ask whether FEF neurons facilitate feature-based attention by representing search-relevant visual features or whether they are primarily involved in selecting eye movement targets in space. We show that search-relevant visual features are weakly predictive of gaze in natural scenes and additionally have no significant influence on FEF activity. Instead, FEF activity appears to primarily correlate with the direction of the upcoming eye movement. Our result demonstrates a concrete need for better models of natural scene search and suggests that FEF activity during natural scene search is explained primarily by spatial selection.


eLife | 2016

Uncertainty leads to persistent effects on reach representations in dorsal premotor cortex

Brian M. Dekleva; Pavan Ramkumar; Paul A Wanda; Konrad P. Körding; Lee E. Miller

Every movement we make represents one of many possible actions. In reaching tasks with multiple targets, dorsal premotor cortex (PMd) appears to represent all possible actions simultaneously. However, in many situations we are not presented with explicit choices. Instead, we must estimate the best action based on noisy information and execute it while still uncertain of our choice. Here we asked how both primary motor cortex (M1) and PMd represented reach direction during a task in which a monkey made reaches based on noisy, uncertain target information. We found that with increased uncertainty, neurons in PMd actually enhanced their representation of unlikely movements throughout both planning and execution. The magnitude of this effect was highly variable across sessions, and was correlated with a measure of the monkeys’ behavioral uncertainty. These effects were not present in M1. Our findings suggest that PMd represents and maintains a full distribution of potentially correct actions. DOI: http://dx.doi.org/10.7554/eLife.14316.001


Journal of Neurophysiology | 2016

Role of expected reward in frontal eye field during natural scene search.

Joshua I. Glaser; Daniel K. Wood; Patrick N. Lawlor; Pavan Ramkumar; Konrad P. Körding; Mark A. Segraves

When a saccade is expected to result in a reward, both neural activity in oculomotor areas and the saccade itself (e.g., its vigor and latency) are altered (compared with when no reward is expected). As such, it is unclear whether the correlations of neural activity with reward indicate a representation of reward beyond a movement representation; the modulated neural activity may simply represent the differences in motor output due to expected reward. Here, to distinguish between these possibilities, we trained monkeys to perform a natural scene search task while we recorded from the frontal eye field (FEF). Indeed, when reward was expected (i.e., saccades to the target), FEF neurons showed enhanced responses. Moreover, when monkeys accidentally made eye movements to the target, firing rates were lower than when they purposively moved to the target. Thus, neurons were modulated by expected reward rather than simply the presence of the target. We then fit a model that simultaneously included components related to expected reward and saccade parameters. While expected reward led to shorter latency and higher velocity saccades, these behavioral changes could not fully explain the increased FEF firing rates. Thus, FEF neurons appear to encode motivational factors such as reward expectation, above and beyond the kinematic and behavioral consequences of imminent reward.


bioRxiv | 2017

Modern machine learning far outperforms GLMs at predicting spikes

Ari S. Benjamin; Hugo L. Fernandes; Tucker Tomlinson; Pavan Ramkumar; Chris VerSteeg; Lee E. Miller; Konrad P. Körding

Neuroscience has long focused on finding encoding models that effectively ask “what predicts neural spiking?” and generalized linear models (GLMs) are a typical approach. Modern machine learning techniques have the potential to perform better. Here we directly compared GLMs to three leading methods: feedforward neural networks, gradient boosted trees, and stacked ensembles that combine the predictions of several methods. We predicted spike counts in macaque motor (M1) and somatosensory (S1) cortices from reaching kinematics, and in rat hippocampal cells from open field location and orientation. In general, the modern methods produced far better spike predictions and were less sensitive to the preprocessing of features. XGBoost and the ensemble were the best-performing methods and worked well even on neural data with very low spike rates. This overall performance suggests that tuning curves built with GLMs are at times inaccurate and can be easily improved upon. Our publicly shared code uses standard packages and can be quickly applied to other datasets. Encoding models built with machine learning techniques more accurately predict spikes and can offer meaningful benchmarks for simpler models.
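The logic of the comparison can be illustrated with a minimal numpy sketch on synthetic data (not the paper's code, models, or datasets): a Poisson GLM on raw velocity misses a nonlinearity that a richer feature set (standing in here for what flexible learners such as XGBoost discover automatically) recovers, as measured by a pseudo-R2 of the kind used to score spike predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "neuron": spike counts driven by 2-D hand velocity through
# a nonlinearity (|v_y|) that a GLM on raw velocity cannot express.
n = 2000
vel = rng.normal(size=(n, 2))
rate = np.exp(0.3 + 0.8 * vel[:, 0] + 0.8 * np.abs(vel[:, 1]))
spikes = rng.poisson(rate)

def fit_poisson_glm(X, y, n_iter=30):
    """Fit a Poisson GLM (exponential link) by Newton's method."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        mu = np.exp(np.clip(Xb @ w, -20, 20))
        w += np.linalg.solve(Xb.T @ (Xb * mu[:, None]), Xb.T @ (y - mu))
    return w

def predict(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return np.exp(np.clip(Xb @ w, -20, 20))

def pseudo_r2(y, mu):
    """McFadden-style pseudo-R^2 relative to a mean-rate (null) model."""
    eps = 1e-12
    ll = lambda m: np.sum(y * np.log(m + eps) - m)
    ll_null, ll_sat = ll(np.full(len(y), y.mean())), ll(y + eps)
    return 1.0 - (ll_sat - ll(mu)) / (ll_sat - ll_null)

raw = vel                                         # GLM on raw kinematics
flex = np.column_stack([vel, np.abs(vel[:, 1])])  # stand-in for learned features
r2_raw = pseudo_r2(spikes, predict(raw, fit_poisson_glm(raw, spikes)))
r2_flex = pseudo_r2(spikes, predict(flex, fit_poisson_glm(flex, spikes)))
```

A gradient boosted regressor fit to the raw features would achieve a comparable gain without the hand-crafted |v_y| term being supplied; discovering such nonlinearities automatically is the advantage the abstract reports.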


Journal of Vision | 2015

Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search

Pavan Ramkumar; Hugo L. Fernandes; Konrad P. Körding; Mark A. Segraves

Like humans, monkeys make saccades nearly three times a second. To understand the factors guiding this frequent decision, computational models of vision attempt to predict fixation locations using bottom-up visual features and top-down goals. How do the relative influences of these factors evolve over multiple time scales? Here we analyzed visual features at fixations using a retinal transform that provides realistic visual acuity by suitably degrading visual information in the periphery. In a task in which monkeys searched for a Gabor target in natural scenes, we characterized the relative importance of bottom-up and task-relevant influences by decoding fixated from nonfixated image patches based on visual features. At fast time scales, we found that search strategies can vary over the course of a single trial, with locations of higher saliency, target-similarity, edge–energy, and orientedness looked at later on in the trial. At slow time scales, we found that search strategies can be refined over several weeks of practice, and the influence of target orientation was significant only in the latter of two search tasks. Critically, these results were not observed without applying the retinal transform. Our results suggest that saccade-guidance strategies become apparent only when models take into account degraded visual representation in the periphery.


Nature Communications | 2018

Population coding of conditional probability distributions in dorsal premotor cortex

Joshua I. Glaser; Matthew G. Perich; Pavan Ramkumar; Lee E. Miller; Konrad P. Körding

Our bodies and the environment constrain our movements. For example, when our arm is fully outstretched, we cannot extend it further. More generally, the distribution of possible movements is conditioned on the state of our bodies in the environment, which is constantly changing. However, little is known about how the brain represents such distributions, and uses them in movement planning. Here, we record from dorsal premotor cortex (PMd) and primary motor cortex (M1) while monkeys reach to randomly placed targets. The hand’s position within the workspace creates probability distributions of possible upcoming targets, which affect movement trajectories and latencies. PMd, but not M1, neurons have increased activity when the monkey’s hand position makes it likely the upcoming movement will be in the neurons’ preferred directions. Across the population, PMd activity represents probability distributions of individual upcoming reaches, which depend on rapidly changing information about the body’s state in the environment.

Movements are continually constrained by the current body position and its relation to the surroundings. Here the authors report that the population activity of monkey dorsal premotor cortex neurons dynamically represents the probability distribution of possible reach directions.


Frontiers in Computational Neuroscience | 2018

Modern Machine Learning as a Benchmark for Fitting Neural Responses

Ari S. Benjamin; Hugo L. Fernandes; Tucker Tomlinson; Pavan Ramkumar; Chris VerSteeg; Raeed H. Chowdhury; Lee E. Miller; Konrad P. Körding

Neuroscience has long focused on finding encoding models that effectively ask “what predicts neural spiking?” and generalized linear models (GLMs) are a typical approach. It is often unknown how much of explainable neural activity is captured, or missed, when fitting a model. Here we compared the predictive performance of simple models to three leading machine learning methods: feedforward neural networks, gradient boosted trees (using XGBoost), and stacked ensembles that combine the predictions of several methods. We predicted spike counts in macaque motor (M1) and somatosensory (S1) cortices from standard representations of reaching kinematics, and in rat hippocampal cells from open field location and orientation. Of these methods, XGBoost and the ensemble consistently produced more accurate spike rate predictions and were less sensitive to the preprocessing of features. These methods can thus be applied quickly to detect if feature sets relate to neural activity in a manner not captured by simpler methods. Encoding models built with a machine learning approach accurately predict spike rates and can offer meaningful benchmarks for simpler models.


Journal of Vision | 2015

A rapid whole-brain neural portrait of scene category inference

Pavan Ramkumar; Bruce C. Hansen; Sebastian Pannasch; Lester C. Loschky

Perceiving the world around us is a process of active inference from incoming visual information. One opportunity to study brain processes underlying perceptual inference is when perception deviates from reality. Here, we focus on errors in rapid scene categorization. How do humans accurately categorize natural scenes after extremely brief presentations (< 20 ms)? To elucidate the role of brain areas involved in visual encoding and perceptual inference, we measured cortical activity using whole-scalp magnetoencephalography (MEG). Scenes were flashed for 33 ms and subjects responded with one of six scene categories. We localized single-trial sensor-level data to the cortical surface reconstructed from individual MRIs. Next, we computed categorization confusion matrices (CMs) using support vector machines based on (1) cortical activity, (2) spatial envelope image features, and (3) behavioral responses. We then used these CMs to examine the functions of different cortical areas. Behavioral categorization confusions can result from either visual representation errors or perceptual inference errors. Thus, if confusions in neural decoders (neural CMs) are driven by errors in image feature-based decoders (image-feature CMs), then any associated cortical activity is attributable to the visual representations. Conversely, if confusions in the perceived category (behavioral CMs) can explain errors in neural CMs, then any associated cortical activation could be attributed to errors in perceptual inference. Using multiple linear regression at each cortical vertex and each millisecond time bin to explain neural CMs as a function of image-feature CMs and behavioral CMs, we found that neural CMs within early visual cortices were explained primarily by image-feature CMs from 90-110 ms, whereas neural CMs within regions such as PRC, PHC, RSC, and OFC were explained primarily by behavioral CMs during 120-200 ms. 
Our results suggest that medial temporal areas and OFC actively infer visual percepts rather than passively representing categorical information. Meeting abstract presented at VSS 2015.
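The regression step can be sketched as follows (a minimal numpy sketch using hypothetical random confusion matrices, not the study's data; the actual analysis runs this at every cortical vertex and millisecond time bin):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 6  # six scene categories

# Hypothetical stand-ins for the three 6x6 confusion matrices (CMs):
# here the "neural" CM is built mostly from the behavioral CM, so the
# regression should attribute it mainly to perceptual inference.
cm_feature = rng.random((k, k))
cm_behavior = rng.random((k, k))
cm_neural = 0.2 * cm_feature + 0.8 * cm_behavior + 0.05 * rng.random((k, k))

# Explain the neural CM as a linear function of the other two CMs:
# vectorize each matrix and solve ordinary least squares with intercept.
X = np.column_stack([np.ones(k * k), cm_feature.ravel(), cm_behavior.ravel()])
beta, *_ = np.linalg.lstsq(X, cm_neural.ravel(), rcond=None)
# beta[1], beta[2]: weights on the image-feature and behavioral CMs.
```

The relative size of the two fitted weights is what distinguishes cortical regions carrying visual representations (image-feature CM dominates) from those carrying perceptual inferences (behavioral CM dominates).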

Collaboration


Dive into Pavan Ramkumar's collaborations.

Top Co-Authors
Hugo L. Fernandes

Rehabilitation Institute of Chicago


Daniel K. Wood

University of Western Ontario


Ari S. Benjamin

University of Pennsylvania
