
A Weight Transfer Mechanism for Kernel Reinforcement Learning Decoding in Brain-Machine Interfaces

 
 

Abstract


Brain-Machine Interfaces (BMIs) aim to help disabled people control external devices with their brain activity to perform a variety of movement tasks. The neural signals are decoded into execution commands for the device. However, most existing decoding algorithms in BMI are trained for only a single task. When facing a new task, even one similar to the previous task, the decoder must be re-trained from scratch, which is inefficient. Among the different types of decoders, reinforcement learning (RL) based algorithms have the advantage of adaptive training through trial-and-error over the recalibration used in supervised learning. However, most RL algorithms in BMI do not actively leverage the knowledge acquired on the old task. In this paper, we propose a kernel RL algorithm with a weight transfer mechanism for new-task learning. The existing neural patterns are clustered according to their similarities, and a new pattern is assigned weights transferred from the closest cluster. In this way, the most similar experiences from the previous task can be re-utilized in the new task to accelerate learning. The proposed algorithm is tested on synthetic neural data. Compared with re-training from scratch, the proposed weight transfer mechanism maintains significantly higher performance and achieves a faster learning speed on the new task.
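The core idea of the abstract is to cluster the neural patterns seen on the old task and warm-start a new pattern's decoder weights from the closest cluster. The sketch below illustrates that idea in Python; the clustering method (plain k-means), the Euclidean distance metric, and all names such as cluster_patterns and transfer_weights are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def cluster_patterns(patterns, n_clusters, n_iters=50, seed=0):
    """Cluster neural firing-rate patterns with a simple k-means loop."""
    rng = np.random.default_rng(seed)
    centers = patterns[rng.choice(len(patterns), n_clusters, replace=False)]
    for _ in range(n_iters):
        # Assign each pattern to its nearest cluster center.
        dists = np.linalg.norm(patterns[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned patterns.
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = patterns[labels == k].mean(axis=0)
    return centers, labels

def transfer_weights(new_pattern, centers, cluster_weights):
    """Initialize weights for a new pattern from its closest old-task cluster."""
    dists = np.linalg.norm(centers - new_pattern, axis=1)
    return cluster_weights[dists.argmin()].copy()

# Toy example: 200 patterns of 32-channel firing rates grouped into 4 clusters,
# each cluster carrying a weight vector learned on the old task.
rng = np.random.default_rng(1)
old_patterns = rng.normal(size=(200, 32))
centers, labels = cluster_patterns(old_patterns, n_clusters=4)
cluster_weights = rng.normal(size=(4, 8))   # stand-in for learned RL weights
new_pattern = rng.normal(size=32)           # pattern observed in the new task
init_weights = transfer_weights(new_pattern, centers, cluster_weights)
print(init_weights.shape)                   # (8,) -- warm start instead of zeros
```

In the paper's framing, such transferred weights would seed the kernel RL decoder on the new task, which then continues to adapt through trial-and-error rather than learning from a cold start.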

Pages 3547-3550
DOI 10.1109/EMBC.2019.8856555
Language English
Journal 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
