Neurocomputing | 2019

Fast and robust learning in Spiking Feed-forward Neural Networks based on Intrinsic Plasticity mechanism


Abstract

In this paper, the computational performance of a Spiking Feed-forward Neural Network (SFNN) is investigated based on a brain-inspired Intrinsic Plasticity (IP) mechanism, a membrane-potential adaptive tuning scheme that changes the intrinsic excitability of individual neurons. This learning rule can regulate neural activity at a relatively homeostatic level even when the external input to a neuron is extremely low or extremely high. The effectiveness of IP on the SFNN model is studied and evaluated on MNIST handwritten digit classification. The network weights are first trained as a conventional artificial neural network by backpropagation; the rate-based neurons are then transformed into spiking neuron models with IP learning. Our results show that both the over-activation and the under-activation of neuronal responses, which commonly arise during neural network computation, can be effectively avoided. Without loss of accuracy, the SFNN with IP learning computes substantially faster than the other models. Moreover, when input intensity and data noise are taken into account, both the learning speed and the accuracy of the model are greatly improved by the application of IP learning. This biologically inspired SFNN model is simple and effective and may offer insights for the optimization of neural computation.
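To make the homeostatic idea concrete, the following is a minimal sketch of an IP-style adaptation on a single leaky integrate-and-fire neuron: the firing threshold is nudged up after each spike and down otherwise, driving the firing rate toward a target. The specific update form, the target rate, and all parameter values here are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def simulate_lif_with_ip(inputs, target_rate=0.1, eta=0.05,
                         theta0=1.0, tau=20.0, dt=1.0):
    """Leaky integrate-and-fire neuron with an intrinsic-plasticity (IP)
    threshold adaptation. Illustrative sketch only: the threshold-update
    rule and parameters are assumptions, not the paper's exact scheme.
    """
    v, theta = 0.0, theta0
    spikes = []
    for x in inputs:
        v += dt * (-v / tau + x)          # leaky integration of the input
        fired = v >= theta
        if fired:
            v = 0.0                       # reset membrane potential on spike
        spikes.append(fired)
        # IP rule: raise the threshold after a spike, lower it otherwise,
        # so the firing rate relaxes toward target_rate (homeostasis)
        theta += eta * ((1.0 if fired else 0.0) - target_rate)
        theta = max(theta, 1e-3)          # keep the threshold positive
    return np.array(spikes), theta

# A strong constant drive would over-activate a fixed-threshold neuron;
# with IP the threshold grows until the rate settles near the target.
spikes, theta = simulate_lif_with_ip(np.full(5000, 0.5), target_rate=0.1)
rate = spikes[-1000:].mean()              # steady-state firing rate
print(rate, theta)
```

At the fixed point of this update, the expected threshold change is zero, which happens exactly when the firing probability equals `target_rate`; this is the sense in which the rule keeps activity at a homeostatic level regardless of input intensity.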

Volume 365
Pages 102-112
DOI 10.1016/j.neucom.2019.07.009
Language English
Journal Neurocomputing

Full Text