Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Babak Mahmoudi is active.

Publication


Featured research published by Babak Mahmoudi.


IEEE Transactions on Biomedical Engineering | 2009

Coadaptive Brain–Machine Interface via Reinforcement Learning

Jack DiGiovanna; Babak Mahmoudi; José A. B. Fortes; Jose C. Principe; Justin C. Sanchez

This paper introduces and demonstrates a novel brain-machine interface (BMI) architecture based on the concepts of reinforcement learning (RL), coadaptation, and shaping. RL allows the BMI control algorithm to learn to complete tasks from interactions with the environment, rather than from an explicit training signal. Coadaptation enables continuous, synergistic adaptation between the BMI control algorithm and BMI user working in changing environments. Shaping is designed to reduce the learning curve for BMI users attempting to control a prosthetic. Here, we present the theory and in vivo experimental paradigm to illustrate how this BMI learns to complete a reaching task using a prosthetic arm in a 3-D workspace based on the user's neuronal activity. This semisupervised learning framework does not require user movements. We quantify BMI performance in closed-loop brain control over six to ten days for three rats as a function of increasing task difficulty. All three subjects coadapted with their BMI control algorithms to control the prosthetic significantly above chance at each level of difficulty.
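
As a rough illustration of the reinforcement-learning idea behind this architecture, the sketch below shows a plain tabular Q-learning update that maps discretized neural states to discrete prosthetic-arm actions and learns only from task reward. The state/action discretization, learning parameters, and epsilon-greedy policy are illustrative assumptions, not the published BMI.

# Hedged sketch: tabular Q-learning over assumed discretized neural states and arm actions.
import numpy as np

n_states, n_actions = 64, 6          # assumed discretization, not the paper's
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def select_action(state, rng):
    # epsilon-greedy: explore occasionally, otherwise exploit the current policy
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    # one-step Q-learning update driven only by the environment's reward signal
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])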


PLOS ONE | 2014

Using reinforcement learning to provide stable brain-machine interface control despite neural input reorganization.

Eric A. Pohlmeyer; Babak Mahmoudi; Shijia Geng; Noeline W. Prins; Justin C. Sanchez

Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder’s neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled.
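
The following is a minimal, hedged sketch of an actor-critic decoder driven by a binary (+1/-1) evaluative signal, in the spirit of the RLBMI described above; the linear actor, scalar critic, learning rates, and two-action task are assumptions for illustration rather than the published model.

# Hedged sketch: actor-critic decoder updated from binary evaluative feedback.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_actions = 32, 2                  # assumed: firing-rate features, two reach targets
W_actor = rng.normal(scale=0.01, size=(n_actions, n_neurons))
w_critic = np.zeros(n_neurons)
lr_actor, lr_critic = 0.05, 0.1

def step(x, feedback):
    """x: neural feature vector; feedback: +1 (good action) or -1 (bad action)."""
    logits = W_actor @ x
    p = np.exp(logits - logits.max()); p /= p.sum()   # softmax policy
    a = int(rng.choice(n_actions, p=p))
    # critic: TD-like error using the binary feedback as the reward
    delta = feedback - w_critic @ x
    w_critic[:] += lr_critic * delta * x
    # actor: policy-gradient-style update scaled by the critic's error
    grad = -p; grad[a] += 1.0
    W_actor[:] += lr_actor * delta * np.outer(grad, x)
    return a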


Frontiers in Neuroengineering | 2014

Real-time in vivo optogenetic neuromodulation and multielectrode electrophysiologic recording with NeuroRighter.

Nealen G. Laxpati; Babak Mahmoudi; Claire-Anne Gutekunst; Jonathan P. Newman; Riley Zeller-Townson; Robert E. Gross

Optogenetic channels have greatly expanded neuroscience’s experimental capabilities, enabling precise genetic targeting and manipulation of neuron subpopulations in awake and behaving animals. However, many barriers to entry remain for this technology – including the lack of low-cost, effective hardware for combined optical stimulation and electrophysiologic recording. To address this, we adapted the open-source NeuroRighter multichannel electrophysiology platform for use in awake and behaving rodents in both open- and closed-loop stimulation experiments. Here, we present these cost-effective adaptations, including commercially available LED light sources; custom-made optical ferrules; 3D printed ferrule hardware and software to calibrate and standardize output intensity; and modifications to commercially available electrode arrays enabling stimulation proximally and distally to the recording target. We then demonstrate the capabilities and versatility of these adaptations in several open- and closed-loop experiments, present spectrographic methods for analyzing the results, and discuss stimulation artifacts.
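
For readers unfamiliar with closed-loop stimulation, the sketch below shows the general shape of such a loop in Python: estimate band-limited LFP power from the latest window of samples and gate an LED driver on a threshold. It is not NeuroRighter code; read_lfp_window and set_led are hypothetical hardware stubs, and the sampling rate, band, and threshold are assumptions.

# Illustrative closed-loop stimulation loop (hypothetical hardware stubs, assumed parameters).
import numpy as np
from scipy.signal import welch

FS = 1_000             # assumed LFP sampling rate, Hz
THRESHOLD = 1e-3       # assumed band-power threshold, arbitrary units

def theta_power(lfp, fs=FS):
    # Welch spectral estimate, averaged over an assumed 4-8 Hz band
    f, pxx = welch(lfp, fs=fs, nperseg=min(len(lfp), 1024))
    band = (f >= 4) & (f <= 8)
    return pxx[band].mean()

def control_step(read_lfp_window, set_led):
    lfp = read_lfp_window()                   # hypothetical: latest window of samples
    set_led(theta_power(lfp) > THRESHOLD)     # hypothetical: switch the LED driver on/off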


Journal of Neural Engineering | 2013

Towards autonomous neuroprosthetic control using Hebbian reinforcement learning

Babak Mahmoudi; Eric A. Pohlmeyer; Noeline W. Prins; Shijia Geng; Justin C. Sanchez

OBJECTIVE Our goal was to design an adaptive neuroprosthetic controller that could learn the mapping from neural states to prosthetic actions and automatically adjust adaptation using only a binary evaluative feedback as a measure of desirability/undesirability of performance. APPROACH Hebbian reinforcement learning (HRL) in a connectionist network was used for the design of the adaptive controller. The method combines the efficiency of supervised learning with the generality of reinforcement learning. The convergence properties of this approach were studied using both closed-loop control simulations and open-loop simulations that used primate neural data from robot-assisted reaching tasks. MAIN RESULTS The HRL controller was able to perform classification and regression tasks using its episodic and sequential learning modes, respectively. In our experiments, the HRL controller quickly achieved convergence to an effective control policy, followed by robust performance. The controller also automatically stopped adapting the parameters after converging to a satisfactory control policy. Additionally, when the input neural vector was reorganized, the controller resumed adaptation to maintain performance. SIGNIFICANCE By estimating an evaluative feedback directly from the user, the HRL control algorithm may provide an efficient method for autonomous adaptation of neuroprosthetic systems. This method may enable the user to teach the controller the desired behavior using only a simple feedback signal.
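
A minimal sketch of a Hebbian reinforcement rule of this flavor is shown below: the weight change is the Hebbian product of input and output activity, gated by binary evaluative feedback, and adaptation pauses once recent feedback is consistently positive. The single-layer network and the stopping heuristic are simplifying assumptions, not the published HRL controller.

# Hedged sketch: Hebbian update gated by +1/-1 evaluative feedback, with a stop-adapting heuristic.
import numpy as np

n_in, n_out = 32, 4
W = np.zeros((n_out, n_in))
lr = 0.05
recent = []                       # rolling record of feedback, used to pause adaptation

def hrl_step(x, feedback):
    y = np.tanh(W @ x)                             # network output (e.g., action scores)
    if np.mean(recent[-20:] or [0]) < 0.95:        # keep adapting until performance is stable
        W[:] += lr * feedback * np.outer(y, x)     # Hebbian product gated by the feedback sign
    recent.append(1.0 if feedback > 0 else 0.0)
    return y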


International IEEE/EMBS Conference on Neural Engineering | 2011

Control of a center-out reaching task using a reinforcement learning Brain-Machine Interface

Justin C. Sanchez; Aditya Tarigoppula; John S. Choi; Brandi T. Marsh; Pratik Y. Chhatbar; Babak Mahmoudi; Joseph T. Francis

In this work, we develop an experimental primate test bed for a center-out reaching task to test the performance of reinforcement learning based decoders for Brain-Machine Interfaces. Neural recordings obtained from the primary motor cortex were used to adapt a decoder using only sequences of neuronal activation and reinforced interaction with the environment. From a naïve state, the system was able to achieve 100% of the targets without any a priori knowledge of the correct neural-to-motor mapping. Results show that the coupling of motor and reward information in an adaptive BMI decoder has the potential to create more realistic and functional models necessary for future BMI control.
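
To make the task side concrete, here is a toy center-out environment that returns only the binary reward an RL decoder would learn from; the eight radial targets and angular tolerance are assumptions for illustration, not the primate test bed itself.

# Toy center-out reward function (assumed eight targets and tolerance).
import numpy as np

TARGETS = [2 * np.pi * k / 8 for k in range(8)]   # eight radial target angles

def trial_reward(chosen_angle, cued_target_idx, tol=np.pi / 8):
    """Return +1 if the decoded reach lands on the cued target, else -1."""
    err = np.abs(np.angle(np.exp(1j * (chosen_angle - TARGETS[cued_target_idx]))))
    return 1 if err <= tol else -1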


International Conference on Conceptual Structures | 2007

Towards Real-Time Distributed Signal Modeling for Brain-Machine Interfaces

Jack DiGiovanna; Loris Marchal; Prapaporn Rattanatamrong; Ming Zhao; Shalom Darmanjian; Babak Mahmoudi; Justin C. Sanchez; Jose C. Principe; Linda Hermer-Vazquez; Renato J. O. Figueiredo; José A. B. Fortes

New architectures for Brain-Machine Interface communication and control use mixture models for expanding rehabilitation capabilities of disabled patients. Here we present and test a dynamic data-driven Brain-Machine Interface (BMI) architecture that relies on multiple pairs of forward-inverse models to predict, control, and learn the trajectories of a robotic arm in a real-time closed-loop system. A windowed recursive least-squares (window-RLS) method was used to compute the forward-inverse model pairs in real time, and a model-switching mechanism based on reinforcement learning was used to test the ability to map neural activity to elementary behaviors. The architectures were tested with in vivo data and implemented using remote computing resources.
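
As an illustration of the recursive least-squares component, the sketch below implements a standard exponentially weighted RLS fit of a linear neural-to-kinematics model; the forgetting factor stands in for the paper's window-RLS, and the dimensions and constants are assumed.

# Hedged sketch: exponentially weighted RLS for a linear forward model y ≈ W x.
import numpy as np

class RLSModel:
    def __init__(self, n_in, n_out, lam=0.99, delta=1e2):
        self.W = np.zeros((n_out, n_in))
        self.P = np.eye(n_in) * delta       # inverse correlation estimate
        self.lam = lam                      # forgetting factor, acting like a sliding window

    def update(self, x, y):
        # x: neural feature vector, y: kinematic target for this sample
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        err = y - self.W @ x
        self.W += np.outer(err, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err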


IEEE Sensors Journal | 2016

An Inductively-Powered Wireless Neural Recording System With a Charge Sampling Analog Front-End

Seung Bae Lee; Byunghun Lee; Mehdi Kiani; Babak Mahmoudi; Robert E. Gross; Maysam Ghovanloo

An inductively-powered wireless integrated neural recording system (WINeR-7) is presented for wireless and battery-less neural recording from freely-behaving animal subjects inside a wirelessly powered standard homecage. The WINeR-7 system employs a novel wide-swing dual slope charge sampling (DSCS) analog front-end (AFE) architecture, which performs amplification, filtering, sampling, and analog-to-time conversion with minimal interference and a small amount of power. The output of the DSCS-AFE produces a pseudo-digital pulsewidth modulated (PWM) signal. A circular shift register time-division multiplexes (TDM) the PWM pulses to create a TDM-PWM signal, which is fed into an on-chip 915-MHz transmitter (Tx). The AFE and Tx are supplied at 1.8 and 4.2 V, respectively, by a power management block, which includes a high efficiency active rectifier and automatic resonance tuning, operating at 13.56 MHz. The eight-channel system-on-a-chip was fabricated in a 0.35-μm CMOS process, occupying 5 × 2.5 mm² and consuming 51.4 mW. For each channel, the sampling rate is 21.48 kHz and the power consumption is 19.3 μW. In vivo experiments were conducted on freely-behaving rats in an energized homecage by continuously delivering 51.4 mW to the WINeR-7 system in a closed-loop fashion and recording local field potentials.
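
Two back-of-envelope figures implied by the per-channel numbers quoted above (simple arithmetic on the stated values, not additional measurements):

# Derived only from the abstract's per-channel figures.
n_channels = 8
fs_per_channel_hz = 21_480        # 21.48 kHz per channel
p_per_channel_w = 19.3e-6         # 19.3 uW per channel (AFE)

aggregate_rate = n_channels * fs_per_channel_hz   # 171,840 samples/s multiplexed into the TDM-PWM stream
afe_power = n_channels * p_per_channel_w          # ~154 uW across all eight AFE channels
print(aggregate_rate, afe_power)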


Frontiers in Neuroengineering | 2009

Cyber-Workstation for Computational Neuroscience

Jack DiGiovanna; Prapaporn Rattanatamrong; Ming Zhao; Babak Mahmoudi; Linda Hermer; Renato J. O. Figueiredo; Jose C. Principe; José A. B. Fortes; Justin C. Sanchez

A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility are important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper briefly describes the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavioral task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
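
Purely as a hypothetical illustration of the block-diagram style of specification mentioned above (not the CW's actual format), an experimenter-facing pipeline description might look like:

# Hypothetical block-diagram-style pipeline description; block names and fields are invented for illustration.
pipeline = {
    "blocks": {
        "acquisition": {"type": "neural_recording", "channels": 32},
        "decoder":     {"type": "rls_regressor", "inputs": 32, "outputs": 3},
        "robot":       {"type": "arm_controller", "dof": 3},
    },
    "connections": [("acquisition", "decoder"), ("decoder", "robot")],
}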


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Feature extraction and unsupervised classification of neural population reward signals for reinforcement based BMI

Noeline W. Prins; Shijia Geng; Eric A. Pohlmeyer; Babak Mahmoudi; Justin C. Sanchez

New reinforcement-based paradigms for building adaptive decoders for Brain-Machine Interfaces involve using feedback directly from the brain. In this work, we investigated neuromodulation in the Nucleus Accumbens (reward center) during a multi-target reaching task and investigated how to extract a reinforcing or non-reinforcing signal that could be used to adapt a BMI decoder. One of the challenges in brain-driven adaptation is how to translate biological neuromodulation into a single binary signal from the distributed representation of the neural population, which may encode many aspects of reward. To extract these signals, feature analysis and clustering were used to identify timing and coding properties of a user's neuromodulation related to reward perception. First, Principal Component Analysis (PCA) of reward related neural signals was used to extract variance in the firing and the optimum time correlation between the neural signal and the reward phase of the task. Next, k-means clustering was used to separate data into two classes.
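
A hedged sketch of the two-stage analysis described above, using off-the-shelf PCA and k-means; the feature shapes, number of components, and trial binning are assumptions:

# Hedged sketch: PCA then k-means to split trials into two putative reward classes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_reward_trials(firing, n_components=3, seed=0):
    """firing: (n_trials, n_features) binned population activity around the reward phase."""
    scores = PCA(n_components=n_components).fit_transform(firing)
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(scores)
    return labels                 # 0/1 cluster assignment per trial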


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

BMI cyberworkstation: Enabling dynamic data-driven brain-machine interface research through cyberinfrastructure

Ming Zhao; Prapaporn Rattanatamrong; Jack DiGiovanna; Babak Mahmoudi; Renato J. O. Figueiredo; Justin C. Sanchez; Jose C. Principe; José A. B. Fortes

Dynamic data-driven brain-machine interfaces (DDDBMI) have great potential to advance the understanding of neural systems and improve the design of brain-inspired rehabilitative systems. This paper presents a novel cyberinfrastructure that couples in vivo neurophysiology experimentation with massive computational resources to provide seamless and efficient support of DDDBMI research. Closed-loop experiments can be conducted with in vivo data acquisition, reliable network transfer, parallel model computation, and real-time robot control. Behavioral experiments with live animals are supported with real-time guarantees. Offline studies can be performed with various configurations for extensive analysis and training. A Web-based portal is also provided to allow users to conveniently interact with the cyberinfrastructure, conducting both experimentation and analysis. New motor control models are developed based on this approach, which include recursive least-squares-based (RLS) and reinforcement learning based (RLBMI) algorithms. The results from an online RLBMI experiment show that the cyberinfrastructure can successfully support DDDBMI experiments and meet the desired real-time requirements.

Collaboration


Dive into Babak Mahmoudi's collaborations.

Top Co-Authors

Justin C. Sanchez
SUNY Downstate Medical Center

Jack DiGiovanna
École Polytechnique Fédérale de Lausanne

Ming Zhao
Arizona State University