Publications

Featured research published by George L. Chadderdon.


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2012

Electrostimulation as a Prosthesis for Repair of Information Flow in a Computer Model of Neocortex

Cliff C. Kerr; Samuel A. Neymotin; George L. Chadderdon; Christopher T. Fietkiewicz; Joseph T. Francis; William W. Lytton

Damage to a cortical area reduces not only information transmitted to other cortical areas, but also activation of these areas. This phenomenon, whereby the dynamics of a follower area are dramatically altered, is typically manifested as a marked reduction in activity. Ideally, neuroprosthetic stimulation would replace both information and activation. However, replacement of activation alone may be valuable as a means of restoring dynamics and information processing of other signals in this multiplexing system. We used neuroprosthetic stimulation in a computer model of the cortex to repair activation dynamics, using a simple repetitive stimulation to replace the more complex, naturalistic stimulation that had been removed. We found that we were able to restore activity in terms of neuronal firing rates. Additionally, we were able to restore information processing, measured as a restoration of causality between an experimentally recorded signal fed into the in silico brain and a cortical output. These results indicate that even simple neuroprosthetics that do not restore lost information may nonetheless be effective in improving the functionality of surrounding areas of cortex.
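The core intervention is simple enough to caricature in a few lines: a follower population loses its naturalistic drive, and a fixed-rate pulse train restores its mean activation even though the pulses carry no information. Below is a minimal rate-model sketch of that idea; the leaky-unit dynamics and all constants are illustrative assumptions, not the published spiking model.

    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.001                                  # s
    t = np.arange(0.0, 10.0, dt)                # 10 s of simulated time

    naturalistic = rng.poisson(20 * dt, t.size)  # ~20 Hz irregular drive
    lesioned = np.zeros(t.size)                  # drive removed by the "lesion"
    prosthetic = (t % 0.05 < dt).astype(float)   # simple repetitive 20 Hz pulse train

    def follower_activity(drive, tau=0.1, gain=1.0):
        """Leaky rate unit standing in for the follower area."""
        r = np.zeros(drive.size)
        for i in range(1, drive.size):
            r[i] = r[i - 1] + dt * (-r[i - 1] / tau + gain * drive[i] / dt)
        return r

    for name, drive in [("intact", naturalistic), ("lesioned", lesioned),
                        ("stimulated", prosthetic)]:
        print(f"{name:10s} mean activation: {follower_activity(drive).mean():.2f}")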


Frontiers in Computational Neuroscience | 2013

Cortical information flow in Parkinson's disease: a composite network/field model

Cliff C. Kerr; Sacha J. van Albada; Samuel A. Neymotin; George L. Chadderdon; P. A. Robinson; William W. Lytton

The basal ganglia play a crucial role in the execution of movements, as demonstrated by the severe motor deficits that accompany Parkinson's disease (PD). Since motor commands originate in the cortex, an important question is how the basal ganglia influence cortical information flow, and how this influence becomes pathological in PD. To explore this, we developed a composite neuronal network/neural field model. The network model consisted of 4950 spiking neurons, divided into 15 excitatory and inhibitory cell populations in the thalamus and cortex. The field model consisted of the cortex, thalamus, striatum, subthalamic nucleus, and globus pallidus. Both models have been separately validated in previous work. Three field models were used: one with basal ganglia parameters based on data from healthy individuals, one based on data from individuals with PD, and one purely thalamocortical model. Spikes generated by these field models were then used to drive the network model. Compared to the network driven by the healthy model, the PD-driven network had lower firing rates, a shift in spectral power toward lower frequencies, and higher probability of bursting; each of these findings is consistent with empirical data on PD. In the healthy model, we found strong Granger causality between cortical layers in the beta and low gamma frequency bands, but this causality was largely absent in the PD model. In particular, the reduction in Granger causality from the main “input” layer of the cortex (layer 4) to the main “output” layer (layer 5) was pronounced. This may account for symptoms of PD that seem to reflect deficits in information flow, such as bradykinesia. In general, these results demonstrate that the brain's large-scale oscillatory environment, represented here by the field model, strongly influences the information processing that occurs within its subnetworks. Hence, it may be preferable to drive spiking network models with physiologically realistic inputs rather than pure white noise.
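The causality measure referred to here can be illustrated with the standard time-domain Granger test from statsmodels, applied to two toy signals where one drives the other at a lag. This sketch shows only the idea; the study itself used spectral, band-limited Granger causality between cortical layers, which is not reproduced here.

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(2)
    n = 2000
    x = rng.standard_normal(n)                              # toy "layer 4" signal
    y = 0.8 * np.roll(x, 3) + 0.2 * rng.standard_normal(n)  # lagged copy: "layer 5"

    # column order matters: the test asks whether the 2nd column helps predict the 1st
    res = grangercausalitytests(np.column_stack([y, x]), maxlag=5)
    print("p-value at lag 3:", res[3][0]["ssr_ftest"][1])   # small p: x Granger-causes y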


PLOS ONE | 2012

Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex

George L. Chadderdon; Samuel A. Neymotin; Cliff C. Kerr; William W. Lytton

Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint “forearm” to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (−1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
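The learning rule described here, eligibility traces gated by a scalar dopamine-like signal, reduces to a compact update. The sketch below is an illustrative reconstruction, not the published code: spike coincidences tag synapses, the tags decay, and a global signal in {+1, 0, -1} converts tags into weight changes. All sizes and constants are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out = 16, 8                       # population sizes (assumed)
    W = rng.uniform(0.0, 0.5, (n_in, n_out))  # plastic feedforward weights
    elig = np.zeros_like(W)                   # per-synapse eligibility traces
    tau_e, lr, dt = 0.5, 0.05, 0.01           # trace decay (s), learning rate, step (s)

    def step(pre, post, reward):
        """One plasticity update. reward is +1 (phasic DA increase), 0 (no
        learning), or -1 (phasic DA decrease); coincidence-based tagging here
        stands in for the model's pre-before-post spike-timing rule."""
        global W
        elig *= np.exp(-dt / tau_e)                    # traces decay
        elig += np.outer(pre, post)                    # tag coactive synapses
        W = np.clip(W + lr * reward * elig, 0.0, 1.0)  # reward-gated weight change

    dist = 1.0                                         # hand-to-target distance (toy)
    for _ in range(1000):
        pre = (rng.random(n_in) < 0.1).astype(float)   # random input spikes
        post = (rng.random(n_out) < 0.1).astype(float) # random output spikes
        new_dist = max(0.0, dist + rng.normal(-0.001, 0.01))
        step(pre, post, reward=np.sign(dist - new_dist))  # +1 closer, -1 farther
        dist = new_dist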


Pattern Recognition Letters | 2014

Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm

Salvador Dura-Bernal; George L. Chadderdon; Samuel A. Neymotin; Joseph Francis; William W. Lytton

Brain-machine interfaces (BMIs) can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in BMIs offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain's use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices.
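A UDP link of the kind described is a few lines of standard-library Python. The sketch below shows the pattern only: the port number and the two-doubles message format are assumptions, and the real interface to the Barrett WAM is not reproduced here.

    import socket
    import struct

    ADDR = ("127.0.0.1", 9000)   # assumed address; message format is two packed doubles

    # robot side: bind a listener for incoming joint commands
    robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    robot.bind(ADDR)

    # model side: send shoulder and elbow angles (radians) produced by the BMM
    model = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    model.sendto(struct.pack("dd", 0.35, 1.10), ADDR)

    # robot side: receive the command and echo back its measured joint positions
    data, sender = robot.recvfrom(1024)
    shoulder, elbow = struct.unpack("dd", data)
    robot.sendto(struct.pack("dd", shoulder, elbow), sender)

    # model side: read the joint feedback used to drive the 3D visualization
    print("robot reports joints:", struct.unpack("dd", model.recv(1024)))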


Neural Computation | 2014

Motor cortex microcircuit simulation based on brain activity mapping

George L. Chadderdon; Ashutosh Mohan; Benjamin A. Suter; Samuel A. Neymotin; Cliff C. Kerr; Joseph T. Francis; Gordon M. G. Shepherd; William W. Lytton

The deceptively simple laminar structure of neocortex belies the complexity of intra- and interlaminar connectivity. We developed a computational model based primarily on a unified set of brain activity mapping studies of mouse M1. The simulation consisted of 775 spiking neurons of 10 cell types with detailed population-to-population connectivity. Static analysis of connectivity with graph-theoretic tools revealed that the corticostriatal population showed strong centrality, suggesting that it would act as a network hub. Subsequent dynamical analysis confirmed this observation, in addition to revealing network dynamics that cannot be readily predicted through analysis of the wiring diagram alone. Activation thresholds depended on the stimulated layer. Low stimulation produced transient activation, while stronger stimulation produced sustained oscillations; the threshold for sustained responses varied by layer: 13% in layer 2/3, 54% in layer 5A, 25% in layer 5B, and 17% in layer 6. The frequency and phase of the resulting oscillation also depended on stimulation layer. By demonstrating the effectiveness of combined static and dynamic analysis, our results show how static brain maps can be related to the results of brain activity mapping.
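The static analysis step can be illustrated with networkx: build a weighted digraph of population-to-population projections and rank populations by centrality. The wiring below is made up for illustration; the model's actual 10 cell types and connection densities are given in the paper.

    import networkx as nx

    # illustrative population-to-population projections (names and weights assumed)
    edges = [
        ("L2/3", "L5A", 0.8), ("L2/3", "L5B", 0.6),
        ("L5A", "corticostriatal", 0.9), ("L5B", "corticostriatal", 0.7),
        ("corticostriatal", "L2/3", 0.5), ("corticostriatal", "L6", 0.6),
        ("L6", "L5B", 0.4),
    ]
    G = nx.DiGraph()
    G.add_weighted_edges_from(edges)

    # betweenness centrality: populations sitting on many shortest paths act as hubs
    centrality = nx.betweenness_centrality(G)
    for pop, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{pop:16s} centrality = {c:.2f}")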


IEEE Signal Processing in Medicine and Biology Symposium | 2013

Virtual musculoskeletal arm and robotic arm driven by a biomimetic model of sensorimotor cortex with reinforcement learning

Salvador Dura-Bernal; George L. Chadderdon; Samuel A. Neymotin; Xianlian Zhou; Andrzej Przekwas; Joseph T. Francis; William W. Lytton

Neocortical mechanisms of learning sensorimotor control involve a complex series of interactions at multiple levels, from synaptic mechanisms to network connectomics. We developed a model of sensory and motor cortex consisting of several hundred spiking model neurons. A biomimetic model (BMM) was trained using spike-timing-dependent reinforcement learning to drive a simple kinematic two-joint virtual arm in a motor task requiring convergence on a single target. After learning, networks demonstrated retention of behaviorally-relevant memories by utilizing proprioceptive information to perform reach-to-target from multiple starting positions. We utilized the output of this model to drive mirroring motion of a robotic arm. In order to improve the biological realism of the motor control system, we replaced the simple virtual arm model with a realistic virtual musculoskeletal arm which was interposed between the BMM and the robot arm. The virtual musculoskeletal arm received input from the BMM signaling neural excitation for each muscle. It then fed back realistic proprioceptive information, including muscle fiber length and joint angles, which were employed in the reinforcement learning process. The limb position information was also used to control the robotic arm, leading to more realistic movements. This work explores the use of reinforcement learning in a spiking model of sensorimotor cortex and how this is affected by the bidirectional interaction with the kinematics and dynamic constraints of a realistic musculoskeletal arm model. It also paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, and used for developing biomimetic learning algorithms for controlling real-time devices. Additionally, utilizing biomimetic neuronal modeling in brain-machine interfaces offers the possibility for finer control of prosthetics, and the ability to better understand the brain.
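The interposed musculoskeletal stage can be caricatured as excitation, then muscle activation, then joint torque, then proprioceptive feedback. The toy dynamics below are assumptions chosen for brevity (first-order activation, a single damped joint), not the realistic arm model used in the study.

    import numpy as np

    dt, tau_act = 0.01, 0.05        # time step (s), activation time constant (assumed)
    angle, vel = 0.0, 0.0           # elbow joint state (rad, rad/s)
    act = np.zeros(2)               # flexor/extensor activation levels

    def arm_step(excitation):
        """excitation: two values in [0, 1] from the BMM, one per muscle.
        Returns the joint angle and normalized muscle lengths as feedback."""
        global angle, vel
        act[:] += dt / tau_act * (excitation - act)        # first-order activation
        torque = 5.0 * (act[0] - act[1]) - 0.5 * vel       # flexor vs. extensor, damped
        vel += dt * torque
        angle = float(np.clip(angle + dt * vel, 0.0, np.pi))
        lengths = np.array([1.0 - angle / np.pi, angle / np.pi])  # toy length model
        return angle, lengths

    for _ in range(100):                                   # drive the flexor for 1 s
        angle_now, lengths = arm_step(np.array([0.8, 0.1]))
    print(f"elbow angle after 1 s: {angle_now:.2f} rad")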


PLOS ONE | 2018

Optimization by Adaptive Stochastic Descent

Cliff C. Kerr; Salvador Dura-Bernal; Tomasz G. Smolinski; George L. Chadderdon; David Wilson

When standard optimization methods fail to find a satisfactory solution for a parameter fitting problem, a tempting recourse is to adjust parameters manually. While tedious, this approach can be surprisingly powerful in terms of achieving optimal or near-optimal solutions. This paper outlines an optimization algorithm, Adaptive Stochastic Descent (ASD), that has been designed to replicate the essential aspects of manual parameter fitting in an automated way. Specifically, ASD uses simple principles to form probabilistic assumptions about (a) which parameters have the greatest effect on the objective function, and (b) optimal step sizes for each parameter. We show that for a certain class of optimization problems (namely, those with a moderate to large number of scalar parameter dimensions, especially if some dimensions are more important than others), ASD is capable of minimizing the objective function with far fewer function evaluations than classic optimization methods, such as the Nelder-Mead nonlinear simplex, Levenberg-Marquardt gradient descent, simulated annealing, and genetic algorithms. As a case study, we show that ASD outperforms standard algorithms when used to determine how resources should be allocated in order to minimize new HIV infections in Swaziland.
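The abstract's description translates into a short loop: keep per-parameter selection probabilities and step sizes, perturb one parameter at a time, and grow or shrink both quantities on success or failure. The sketch below follows that description only; the growth/shrink factors and other constants are assumptions, not the published defaults.

    import numpy as np

    def asd(f, x0, iters=200, init_step=0.1, grow=2.0, shrink=0.5, seed=0):
        """Minimize f by perturbing one parameter at a time, adapting both the
        probability of choosing each parameter and its step size."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        steps = np.full(x.size, init_step)     # per-parameter step sizes
        probs = np.full(x.size, 1.0 / x.size)  # per-parameter selection probabilities
        best = f(x)
        for _ in range(iters):
            i = rng.choice(x.size, p=probs)    # pick a parameter to perturb
            cand = x.copy()
            cand[i] += rng.choice((-1.0, 1.0)) * steps[i]
            val = f(cand)
            if val < best:                     # success: accept, grow step and prob
                x, best = cand, val
                steps[i] *= grow
                probs[i] *= grow
            else:                              # failure: discard, shrink step and prob
                steps[i] *= shrink
                probs[i] *= shrink
            probs /= probs.sum()               # renormalize to a distribution
        return x, best

    # usage: one dimension matters much more than the others
    xbest, fbest = asd(lambda v: 100 * v[0]**2 + v[1]**2 + v[2]**2, [1.0, 1.0, 1.0])
    print(xbest, fbest)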


BMC Neuroscience | 2012

Reinforcement learning of 2-joint virtual arm reaching in motor cortex simulation

Samuel A. Neymotin; George L. Chadderdon; Cliff C. Kerr; Joseph T. Francis; William W. Lytton

Few attempts have been made to model learning of sensory-motor control using spiking neural units. We trained a 2-degree-of-freedom virtual arm to reach for a target using a spiking-neuron model of motor cortex that maps proprioceptive representations of limb position to motor commands and undergoes learning based on reinforcement mechanisms suggested by the dopaminergic reward system. A 2-layer model of layer 5 motor cortex (M1) passed motor commands to the virtual arm and received proprioceptive position information from it. The reinforcement algorithm trained synapses of M1 using reward (punishment) signals based on visual perception of decreasing (increasing) distance of the virtual hand from the target. Output M1 units were partially driven by noise, creating stochastic movements that were shaped to achieve desired outcomes. The virtual arm consisted of a shoulder joint, upper arm, elbow joint, and forearm. The upper arm and forearm were each controlled by a pair of flexor/extensor muscles. These muscles received rotational commands from 192 output cells of the M1 model, while the M1 model received input from muscle-specific groups of sensory cells, each of which was tuned to fire over a range of muscle lengths. The M1 model had 384 excitatory and 192 inhibitory event-based integrate-and-fire neurons, with AMPA/NMDA and GABA synapses. Excitatory and inhibitory units were interconnected probabilistically. Plasticity was enabled in the feedforward connections between input and output excitatory units. Poisson noise was added to the output units to drive stochastic movements. The reinforcement learning (RL) algorithm used eligibility traces for synaptic credit/blame assignment, and a global signal (+1=reward, -1=punishment) corresponding to dopaminergic bursting/dipping. Eligibility traces were spike-timing-dependent, with pre-before-post spiking required. Reward (punishment) was delivered when the distance between the hand and target decreased (increased) [1]. RL training occurred over 100 sessions, with the arm starting at 15 different initial positions; each sub-session consisted of 15 s of training from a specific starting position. After training, the network was tested for its ability to reach the target from each starting position over the course of a 15 s trial. Unlike the naive network, the trained network was able to reach the target from all starting positions. This was most clearly pronounced when the arm started at a large distance from the target. After reaching the target, the hand tended to oscillate around it. Learning was most effective when recurrent connectivity in the output units was turned off or kept at low levels. Best overall performance was achieved with no recurrent connectivity and moderate maximal weights. Although learning typically increased average synaptic weight gains in the input-to-output M1 connections, there were frequent reductions in weights as well. Our model predicts that optimal motor performance is sensitive to perturbations in both the strength and density of recurrent connectivity within motor cortex, and that the wiring of recurrent connectivity during development might therefore be carefully regulated.
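The muscle-specific sensory population described here is a standard population code over muscle length. A minimal sketch, assuming Gaussian tuning curves and made-up sizes (the published model's tuning is described only as firing over a range of lengths):

    import numpy as np

    n_cells = 12
    centers = np.linspace(0.0, 1.0, n_cells)   # preferred normalized muscle lengths
    width = 0.15                               # tuning width (assumed)

    def proprioceptive_rates(length, peak_hz=50.0):
        """Each sensory cell fires most when the muscle is near its preferred
        length, so the population jointly covers the whole working range."""
        return peak_hz * np.exp(-0.5 * ((length - centers) / width) ** 2)

    print(np.round(proprioceptive_rates(0.3), 1))  # rates for a muscle at 30% length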


Neural Computation | 2013

Reinforcement learning of two-joint virtual arm reaching in a computer model of sensorimotor cortex

Samuel A. Neymotin; George L. Chadderdon; Cliff C. Kerr; Joseph T. Francis; William W. Lytton


PLOS ONE | 2013

Correction: Reinforcement Learning of Targeted Movement in a Spiking Neuronal Model of Motor Cortex

George L. Chadderdon; Samuel A. Neymotin; Cliff C. Kerr; William W. Lytton

Collaboration

George L. Chadderdon's top co-authors and their affiliations.

Top Co-Authors

Samuel A. Neymotin, SUNY Downstate Medical Center
William W. Lytton, SUNY Downstate Medical Center
Joseph T. Francis, SUNY Downstate Medical Center
Sacha J. van Albada, Allen Institute for Brain Science
Salvador Dura-Bernal, SUNY Downstate Medical Center
Ashutosh Mohan, SUNY Downstate Medical Center