
Publication


Featured research published by Joseph T. Francis.


PLOS ONE | 2013

Use of Frontal Lobe Hemodynamics as Reinforcement Signals to an Adaptive Controller

Marcello M. DiStasio; Joseph T. Francis

Decision-making ability in the frontal lobe (among other brain structures) relies on the assignment of value to states of the animal and its environment. Higher-valued states can then be pursued and lower (or negative) valued states avoided. The same principle forms the basis for computational reinforcement learning controllers, which have been fruitfully applied both as models of value estimation in the brain and as artificial controllers in their own right. This work shows how state desirability signals decoded from frontal lobe hemodynamics, as measured with near-infrared spectroscopy (NIRS), can be applied as reinforcers to an adaptable artificial learning agent in order to guide its acquisition of skills. A set of experiments carried out on an alert macaque demonstrates that both oxy- and deoxyhemoglobin concentrations in the frontal lobe show differences in response to primarily and secondarily desirable (versus undesirable) stimuli. This difference allows a NIRS signal classifier to serve successfully as a reinforcer for an adaptive controller performing a virtual tool-retrieval task. The agent's adaptability allows its performance to exceed the limits of the NIRS classifier's decoding accuracy. We also show that decoding state desirabilities is more accurate when using relative concentrations of both oxyhemoglobin and deoxyhemoglobin rather than either species alone.
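
The core loop here, an adaptive agent reinforced by an imperfect physiological critic, is easy to sketch. The Python toy below is illustrative only: the task, the action set, and the 80% critic accuracy are invented, not taken from the paper. It shows why averaging noisy reinforcers over trials lets the agent's performance exceed the critic's single-trial decoding accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the agent chooses one of 4 movement directions per
# trial; the (unknown) correct direction is 2. Instead of an explicit
# reward, a noisy "NIRS critic" labels each outcome as desirable or not.
N_ACTIONS, CORRECT, CRITIC_ACC = 4, 2, 0.8  # 80% decoding accuracy (assumed)

def nirs_critic(action):
    """Stand-in for the hemodynamics classifier: returns +1/-1, flipped
    with probability 1 - CRITIC_ACC to mimic imperfect decoding."""
    true_label = 1.0 if action == CORRECT else -1.0
    return true_label if rng.random() < CRITIC_ACC else -true_label

q = np.zeros(N_ACTIONS)          # action-value estimates
alpha, eps = 0.1, 0.1            # learning rate, exploration rate
for trial in range(2000):
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q))
    r = nirs_critic(a)           # noisy reinforcement from the critic
    q[a] += alpha * (r - q[a])   # incremental value update

# Averaging over many noisy labels lets the agent settle on the correct
# action far more reliably than any single critic judgment.
print("learned preference:", np.argmax(q), q.round(2))
```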


Frontiers in Neuroscience | 2016

Restoring Behavior via Inverse Neurocontroller in a Lesioned Cortical Spiking Model Driving a Virtual Arm

Salvador Dura-Bernal; Kan Li; Samuel A. Neymotin; Joseph T. Francis; Jose C. Principe; William W. Lytton

Neural stimulation can be used as a tool to elicit natural sensations or behaviors by modulating neural activity. This can potentially be used to mitigate the damage of brain lesions or neural disorders. However, in order to obtain the optimal stimulation sequences, it is necessary to develop neural control methods, for example by constructing an inverse model of the target system. For real brains, this can be very challenging, and often unfeasible, as it requires repeatedly stimulating the neural system to obtain enough probing data, and depends on an unwarranted assumption of stationarity. By contrast, detailed brain simulations may provide an alternative testbed for understanding the interactions between ongoing neural activity and external stimulation. Unlike real brains, the artificial system can be probed extensively and precisely, and detailed output information is readily available. Here we employed a spiking network model of sensorimotor cortex trained to drive a realistic virtual musculoskeletal arm to reach a target. The network was then perturbed, in order to simulate a lesion, by either silencing neurons or removing synaptic connections. All lesions led to significant behavioral impairments during the reaching task. The remaining cells were then systematically probed with a set of single- and multiple-cell stimulations, and results were used to build an inverse model of the neural system. The inverse model was constructed using a kernel adaptive filtering method, and was used to predict the neural stimulation pattern required to recover the pre-lesion neural activity. Applying the derived neurostimulation to the lesioned network improved the reaching behavior performance. This work proposes a novel neurocontrol method, and provides theoretical groundwork on the use of biomimetic brain models to develop and evaluate neurocontrollers that restore the function of damaged brain regions and the corresponding motor behaviors.
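
A minimal sketch of the kernel-adaptive-filtering idea, with a toy nonlinear mapping standing in for the spiking network model. The kernel-LMS update below is schematic; the dimensions, kernel width, and learning rate are invented for illustration, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward system standing in for the spiking network: stimulation
# vector u -> neural response y (nonlinear, fixed). All names hypothetical.
W = rng.normal(size=(5, 3))
forward = lambda u: np.tanh(W @ u)

# Probing data: random stimulations and the responses they evoke.
U = rng.normal(size=(500, 3))
Y = np.array([forward(u) for u in U])

# Kernel LMS inverse model: learn the mapping response -> stimulation.
def gauss(x, c, s=1.0):
    return np.exp(-np.sum((x - c) ** 2, axis=-1) / (2 * s**2))

centers, coeffs, eta = [], [], 0.2
for y, u in zip(Y, U):
    pred = sum(a * gauss(y, c) for c, a in zip(centers, coeffs)) \
           if centers else np.zeros(3)
    err = u - pred                                  # prediction error
    centers.append(y); coeffs.append(eta * err)     # grow the expansion

# Use the inverse model: given a target (pre-lesion) response, predict
# the stimulation that should reproduce it, then verify via the system.
y_target = forward(np.array([0.5, -0.2, 0.1]))
u_hat = sum(a * gauss(y_target, c) for c, a in zip(centers, coeffs))
print("recovered response error:",
      np.linalg.norm(forward(u_hat) - y_target).round(3))
```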


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Optimizing microstimulation using a reinforcement learning framework

Austin J. Brockmeier; John S. Choi; Marcello M. DiStasio; Joseph T. Francis; Jose C. Principe

The ability to provide sensory feedback is desired to enhance the functionality of neuroprosthetics. Somatosensory feedback provides closed-loop control to the motor system, which is lacking in feedforward neuroprosthetics. In the case of existing somatosensory function, the natural response can serve as a template for the desired response elicited by electrical microstimulation. In the case of no initial training data, microstimulation parameters that produce responses close to the template must be selected in an online manner. We propose using reinforcement learning as a framework to balance the exploration of the parameter space and the continued selection of promising parameters for further stimulation. This approach avoids an explicit model of the neural response from stimulation. We explore a preliminary architecture — treating the task as a k-armed bandit — using offline data recorded for natural touch and thalamic microstimulation, and we examine the method's efficiency in exploring the parameter space while concentrating on promising parameter forms. The best-matching stimulation parameters, from k = 68 different forms, are consistently selected by the reinforcement learning algorithm after 334 realizations.
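
The k-armed bandit framing can be illustrated in a few lines. The sketch below uses UCB1 as one plausible exploration rule (the paper does not specify this particular algorithm), and the similarity scores are synthetic stand-ins for the match between evoked responses and the natural-touch template:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each of k stimulation parameter "forms" yields a noisy similarity score
# between its evoked response and the natural-touch template; the true
# mean scores are unknown to the agent and invented here for the demo.
K = 68
true_sim = rng.uniform(0.2, 0.9, size=K)

counts = np.zeros(K)
means = np.zeros(K)
for t in range(1, 1001):
    # UCB1 balances exploring untried parameters with exploiting good ones.
    untried = np.where(counts == 0)[0]
    if untried.size:
        a = untried[0]
    else:
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = true_sim[a] + rng.normal(0, 0.05)        # noisy template match
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]       # running mean update

print("best arm found:", np.argmax(means), "true best:", np.argmax(true_sim))
```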


Journal of Neural Engineering | 2016

Eliciting naturalistic cortical responses with a sensory prosthesis via optimized microstimulation

John S. Choi; Austin J. Brockmeier; David B McNiel; Lee M. von Kraus; Jose C. Principe; Joseph T. Francis

OBJECTIVE Lost sensations, such as touch, could one day be restored by electrical stimulation along the sensory neural pathways. Such stimulation, when informed by electronic sensors, could provide naturalistic cutaneous and proprioceptive feedback to the user. Perceptually, microstimulation of somatosensory brain regions produces localized, modality-specific sensations, and several spatiotemporal parameters have been studied for their discernibility. However, systematic methods for encoding a wide array of naturally occurring stimuli into biomimetic percepts via multi-channel microstimulation are lacking. More specifically, generating spatiotemporal patterns for explicitly evoking naturalistic neural activation has not yet been explored. APPROACH We address this problem by first modeling the dynamical input-output relationship between multichannel microstimulation and downstream neural responses, and then optimizing the input pattern to reproduce naturally occurring touch responses as closely as possible. MAIN RESULTS Here we show that such optimization produces responses in the S1 cortex of the anesthetized rat that are highly similar to natural, tactile-stimulus-evoked counterparts. Furthermore, information on both pressure and location of the touch stimulus was found to be highly preserved. SIGNIFICANCE Our results suggest that the currently presented stimulus optimization approach holds great promise for restoring naturalistic levels of sensation.
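
A simplified sketch of the approach, assuming a linear-convolutional stand-in for the fitted input-output model (the paper's actual dynamical model and optimizer are not reproduced here). With a linear model, matching the natural response becomes a least-squares problem that plain gradient descent can solve:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed linear-dynamical stand-in for the fitted input-output model:
# response r_t = sum_k H[k] @ u_{t-k}, a multichannel convolution.
T, n_in, n_out, L = 50, 4, 6, 5
H = rng.normal(size=(L, n_out, n_in)) * 0.2      # "fitted" kernels

def predict(U):
    """Convolve the stimulation sequence U (T x n_in) with the kernels."""
    R = np.zeros((T, n_out))
    for t in range(T):
        for k in range(min(L, t + 1)):
            R[t] += H[k] @ U[t - k]
    return R

# Target: a recorded natural-touch response (synthesized for the demo).
R_nat = predict(rng.normal(size=(T, n_in)))

# Gradient descent on the squared error between predicted and natural
# responses, optimizing the stimulation pattern U itself.
U = np.zeros((T, n_in))
for step in range(500):
    err = predict(U) - R_nat
    grad = np.zeros_like(U)
    for t in range(T):
        for k in range(min(L, t + 1)):
            grad[t - k] += H[k].T @ err[t]       # backpropagated error
    U -= 0.02 * grad

print("residual after optimization:",
      np.linalg.norm(predict(U) - R_nat).round(4))
```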


bioRxiv | 2018

Near Perfect Neural Critic from Motor Cortical Activity Toward an Autonomously Updating Brain Machine Interface

Junmo An; Taruna Yadav; Mohammad Badri Ahmadi; Venkata S Aditya Tarigoppula; Joseph T. Francis

We are developing an autonomously updating brain machine interface (BMI) utilizing reinforcement learning principles. One aspect of this system is a neural critic that determines reward expectations from neural activity. This critic is then used to update a BMI decoder towards an improved performance from the user's perspective. Here we demonstrate the ability of a neural critic to classify trial reward value given activity from the primary motor cortex (M1), using neural features from single/multi units (SU/MU) and local field potentials (LFPs), with prediction accuracies up to 97% correct. A nonhuman primate subject conducted a cued center-out reaching task, either manually or observationally. The cue indicated the reward value of a trial. Features such as power spectral density (PSD) of the LFPs and spike-field coherence (SFC) between SU/MU and corresponding LFPs were calculated and used as inputs to several classifiers. We conclude that hybrid features of PSD and SFC show higher classification performance than PSD or SFC alone (accuracy was 92% for manual tasks and 97% for observational tasks). In the future, we will employ these hybrid features towards our autonomously updating BMI.
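
A toy version of one ingredient of this pipeline, covering only the PSD feature path on synthetic LFP data (the SFC features would be computed and concatenated analogously). The classifier choice, band edges, and effect size are all illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic stand-in data: 200 trials of a 1 s LFP snippet at 1 kHz, where
# "rewarding" trials (y = 1) carry extra 20 Hz power. This reward-linked
# rhythm is an assumption made purely for the demo.
fs, T, n = 1000, 1000, 200
y = rng.integers(0, 2, size=n)
lfp = rng.normal(size=(n, T))
t = np.arange(T) / fs
lfp += np.outer(0.5 * y, np.sin(2 * np.pi * 20 * t))

def psd_features(x, n_bands=10):
    """Average power in coarse frequency bands (1-199 Hz) as features."""
    p = np.abs(np.fft.rfft(x)) ** 2
    return np.array([b.mean() for b in np.array_split(p[1:200], n_bands)])

X = np.array([psd_features(trial) for trial in lfp])
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, np.log(X), y, cv=5).mean()
print(f"cross-validated reward classification accuracy: {acc:.2f}")
```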


bioRxiv | 2018

Motor Cortex Encodes A Value Function Consistent With Reinforcement Learning

Venkata S Aditya Tarigoppula; John S. Choi; Jack H Hessburg; David B McNiel; Brandy T Marsh; Joseph T. Francis

Temporal difference reinforcement learning (TDRL) accurately models associative learning observed in animals, where they learn to associate outcome-predicting environmental states, termed conditioned stimuli (CS), with the value of outcomes, such as rewards, termed unconditioned stimuli (US). A component of TDRL is the value function, which captures the expected cumulative future reward from a given state. The value function can be modified by changes in the animal's knowledge, such as by the predictability of its environment. Here we show that primary motor cortical (M1) neurodynamics reflect a TD learning process, encoding a state value function and reward prediction error in line with TDRL. M1 responds to the delivery of reward, and shifts its value-related response earlier in a trial, becoming predictive of an expected reward, when reward is predictable due to a CS. This is observed in tasks performed manually or observed passively, as well as in tasks without an explicit CS predicting reward but simply with a predictable temporal structure, that is, a predictable environment. M1 also encodes the expected reward value associated with a set of CS in a multiple-reward-level CS-US task. Here we extend the Microstimulus TDRL model, reported to accurately capture RL-related dopaminergic activity, to account for M1 reward-related neural activity in a multitude of tasks.

Significance statement: There is a great deal of agreement between aspects of temporal difference reinforcement learning (TDRL) models and neural activity in dopaminergic brain centers. Dopamine is known to be necessary for sensorimotor-learning-induced synaptic plasticity in the motor cortex (M1), and thus one might expect to see the hallmarks of TDRL in M1, which we show here in the form of a state value function and reward prediction error. We see these hallmarks even when a conditioned stimulus is not available but the environment is predictable, during manual tasks with agency as well as observational tasks without agency. This information has implications for autonomously updating brain machine interfaces, as we and others have proposed and published.
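
The microstimulus TD model itself is compact. Below is a minimal Python rendering of the standard microstimulus TD(lambda) learner (after Ludvig, Sutton and Kehoe); the parameters are illustrative rather than fitted to the M1 data, but the learned value function shows the hallmark ramp toward the time of predicted reward:

```python
import numpy as np

# Microstimulus representation: a CS at t = 0 launches a decaying memory
# trace, and Gaussian "microstimuli" tile the trace height. A linear
# value function over these features is learned by TD(lambda).
n_ms, decay, sigma = 10, 0.985, 0.08
centers = np.linspace(1, 0.1, n_ms)

def microstimuli(t):
    h = decay ** t                                   # trace height at time t
    return h * np.exp(-(h - centers) ** 2 / (2 * sigma**2))

T, reward_time = 200, 150                            # CS at t=0, US at t=150
gamma, alpha, lam = 0.98, 0.05, 0.9
w = np.zeros(n_ms)

for trial in range(500):
    e = np.zeros(n_ms)                               # eligibility trace
    for t in range(T - 1):
        x, x_next = microstimuli(t), microstimuli(t + 1)
        r = 1.0 if t + 1 == reward_time else 0.0
        delta = r + gamma * w @ x_next - w @ x       # TD error
        e = gamma * lam * e + x
        w += alpha * delta * e

V = np.array([w @ microstimuli(t) for t in range(T)])
print("value peaks at t =", int(np.argmax(V)), "(reward at t = 150)")
```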


International Conference of the IEEE Engineering in Medicine and Biology Society | 2016

Reward value is encoded in primary somatosensory cortex and can be decoded from neural activity during performance of a psychophysical task

David B McNiel; John S. Choi; John Hessburg; Joseph T. Francis

Encoding of reward valence has been shown in various brain regions, including deep structures such as the substantia nigra as well as cortical structures such as the orbitofrontal cortex. While the correlation between these signals and reward valence has been shown in aggregated data comprised of many trials, little work has been done investigating the feasibility of decoding reward valence on a single-trial basis. Towards this goal, one non-human primate (Macaca radiata) was trained to grip and hold a target level of force in order to earn zero, one, two, or three juice rewards. The animal was informed of the impending result before reward delivery by means of a visual cue. Neural data were recorded from primary somatosensory cortex (S1) during these experiments, and firing rate histograms were created following the appearance of the visual cue and used as input to a variety of classifiers. Reward valence was decoded with high levels of accuracy from S1 both in the post-cue and post-reward periods. Additionally, the proportion of units showing significant changes in their firing rates was influenced in a predictable way based on reward valence. The existence of a signal within S1 cortex that encodes reward valence could have utility for implementing reinforcement learning algorithms for brain machine interfaces. The ability to decode this reward signal in real time with limited data is paramount to the usability of such a signal in practical applications.
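
A sketch of the single-trial decoding step, with synthetic firing-rate data standing in for the recorded S1 responses. The linear tuning to valence and the choice of linear discriminant analysis are assumptions made for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic stand-in: 60 S1 units whose post-cue firing rates scale (with
# noise) with reward valence 0-3; real data would come from binned spike
# counts after the visual cue. Tuning strengths are invented for the demo.
n_trials, n_units = 400, 60
valence = rng.integers(0, 4, size=n_trials)          # 0-3 juice rewards
gain = rng.normal(0, 1.0, size=n_units)              # per-unit reward tuning
base = rng.uniform(2, 10, size=n_units)
rates = base + np.outer(valence, gain) + rng.normal(0, 2, (n_trials, n_units))

# Single-trial decoding: each trial's firing-rate vector -> reward valence.
acc = cross_val_score(LinearDiscriminantAnalysis(), rates, valence, cv=5)
print(f"single-trial valence decoding accuracy: {acc.mean():.2f} (chance 0.25)")
```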


2016 32nd Southern Biomedical Engineering Conference (SBEC) | 2016

Classifier Performance in Primary Somatosensory Cortex Towards Implementation of a Reinforcement Learning Based Brain Machine Interface

David McNiel; Mohammad Bataineh; John S. Choi; John Hessburg; Joseph T. Francis

Increasingly accurate control of prosthetic limbs has been made possible by a series of advancements in brain machine interface (BMI) control theory. One promising control technique for future BMI applications is reinforcement learning (RL). RL based BMIs require a reinforcing signal to inform the controller whether or not a given movement was intended by the user. This signal has been shown to exist in cortical structures simultaneously used for BMI control. This work evaluates the ability of several common classifiers to detect impending reward delivery within primary somatosensory (S1) cortex during a grip force match-to-sample task performed by a nonhuman primate. The accuracy of these classifiers was further evaluated over a range of conditions to identify parameters that provide maximum classification accuracy. S1 cortex was found to provide highly accurate classification of the reinforcement signal across many classifiers and a wide variety of data input parameters. The classification accuracy in S1 cortex between rewarding and non-rewarding trials was apparent when the animal was expecting an impending delivery or an impending withholding of reward following trial completion. The high accuracy of classification in S1 cortex can be used to adapt an RL based BMI toward a user's intent. Real-time implementation of these classifiers in an RL based BMI could be used to adapt control of a prosthesis dynamically to match the intent of its user.


iScience | 2018

Persistent Increases of PKMζ in Sensorimotor Cortex Maintain Procedural Long-Term Memory Storage

Peng Penny Gao; Jeffrey H. Goodman; Todd Charlton Sacktor; Joseph T. Francis

Summary: Procedural motor learning and memory are accompanied by changes in synaptic plasticity, neural dynamics, and synaptogenesis. Missing is information on the spatiotemporal dynamics of the molecular machinery maintaining these changes. Here we examine whether persistent increases in PKMζ, an atypical protein kinase C (PKC) isoform, store long-term memory for a reaching task in rat sensorimotor cortex that could reveal the sites of procedural memory storage. Specifically, perturbing PKMζ synthesis (via antisense oligodeoxynucleotides) and blocking atypical PKC activity (via zeta inhibitory peptide [ZIP]) in S1/M1 disrupts and erases long-term motor memory maintenance, indicating atypical PKCs and specifically PKMζ store consolidated long-term procedural memories. Immunostaining reveals that PKMζ increases in S1/M1 layers II/III and V as performance improved to an asymptote. After storage for 1 month without reinforcement, the increase in M1 layer V persists without decrement. Thus, the persistent increases in PKMζ that store long-term procedural memory are localized to the descending output layer of the primary motor cortex.


Frontiers in Neuroscience | 2018

Paradigm Shift in Sensorimotor Control Research and Brain Machine Interface Control: The Influence of Context on Sensorimotor Representations

Yao Zhao; John Hessburg; Jaganth Nivas Asok Kumar; Joseph T. Francis

Neural activity in the primary motor cortex (M1) is known to correlate with movement related variables including kinematics and dynamics. Our recent work, which we believe is part of a paradigm shift in sensorimotor research, has shown that in addition to these movement related variables, activity in M1 and the primary somatosensory cortex (S1) are also modulated by context, such as value, during both active movement and movement observation. Here we expand on the investigation of reward modulation in M1, showing that reward level changes the neural tuning function of M1 units to both kinematic as well as dynamic related variables. In addition, we show that this reward-modulated activity is present during brain machine interface (BMI) control. We suggest that by taking into account these context dependencies of M1 modulation, we can produce more robust BMIs. Toward this goal, we demonstrate that we can classify reward expectation from M1 on a movement-by-movement basis under BMI control and use this to gate multiple linear BMI decoders toward improved offline performance. These findings demonstrate that it is possible and meaningful to design a more accurate BMI decoder that takes reward and context into consideration. Our next step in this development will be to incorporate this gating system, or a continuous variant of it, into online BMI performance.
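
The gating idea reduces to fitting one linear decoder per reward context and routing each movement through the decoder matching the classified context. The sketch below uses synthetic data and a known context label in place of the reward-expectation classifier; the gain-modulated tuning is an assumed stand-in for the reported reward modulation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic M1 features whose tuning gain depends on reward context; the
# 1.5x gain shift is invented purely to make the gating effect visible.
n, d = 2000, 20
ctx = rng.integers(0, 2, size=n)                       # 0 = no-reward, 1 = reward
vel = rng.normal(size=(n, 2))                          # true hand velocity
M = rng.normal(size=(d, 2))
gain = np.where(ctx == 1, 1.5, 1.0)[:, None]
X = gain * (vel @ M.T) + rng.normal(0, 0.5, (n, d))    # observed features

def fit_linear(Xs, Ys):
    return np.linalg.lstsq(Xs, Ys, rcond=None)[0]      # least-squares decoder

W = [fit_linear(X[ctx == c], vel[ctx == c]) for c in (0, 1)]
W_pooled = fit_linear(X, vel)

# Gate on the context label; in practice the reward-expectation classifier
# would supply this on a movement-by-movement basis.
pred_gated = np.array([x @ W[c] for x, c in zip(X, ctx)])
pred_pooled = X @ W_pooled
mse = lambda p: np.mean((p - vel) ** 2)
print(f"pooled decoder MSE {mse(pred_pooled):.3f} vs gated {mse(pred_gated):.3f}")
```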

Collaboration


Dive into Joseph T. Francis's collaborations.

Top Co-Authors

John S. Choi (SUNY Downstate Medical Center)
David B McNiel (SUNY Downstate Medical Center)
John Hessburg (SUNY Downstate Medical Center)
Marcello M. DiStasio (SUNY Downstate Medical Center)
Aditya Tarigoppula (SUNY Downstate Medical Center)
Brandi T. Marsh (SUNY Downstate Medical Center)