2021 IEEE Spoken Language Technology Workshop (SLT)

Low-Activity Supervised Convolutional Spiking Neural Networks Applied to Speech Commands Recognition


Abstract


Deep Neural Networks (DNNs) are the current state-of-the-art models in many speech-related tasks. There is growing interest, however, in more biologically realistic, hardware-friendly, and energy-efficient models called Spiking Neural Networks (SNNs). Recently, it has been shown that SNNs can be trained efficiently, in a supervised manner, using backpropagation with a surrogate gradient trick. In this work, we report speech command (SC) recognition experiments using supervised SNNs. We explore the Leaky Integrate-and-Fire (LIF) neuron model for this task and show that a model comprised of stacked dilated convolution spiking layers can reach an error rate very close to that of standard DNNs on the Google SC v1 dataset (5.5%), while keeping a very sparse spiking activity (below 5%) thanks to a new regularization term. We also show that modeling the leakage of the neuron membrane potential is useful, since the LIF model significantly outperformed its non-leaky counterpart.
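
The abstract's two key ingredients, surrogate-gradient training of LIF neurons and a regularizer that keeps spiking activity sparse, can be made concrete with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation: the fast-sigmoid surrogate shape, the leak factor `beta`, the soft-reset rule, and the firing-rate penalty (pushing mean activity toward a 5% target) are all common choices assumed here for clarity.

```python
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative, one common choice for the
        # "surrogate gradient trick": 1 / (1 + |v|)^2. Assumed, not the paper's.
        return grad_out / (1.0 + v.abs()) ** 2


spike = SpikeFn.apply


class LIFLayer(nn.Module):
    """Leaky integrate-and-fire dynamics over a (batch, time, features) input current."""

    def __init__(self, beta=0.9, threshold=1.0):
        super().__init__()
        self.beta = beta            # membrane leak factor per time step (assumed value)
        self.threshold = threshold  # firing threshold (assumed value)

    def forward(self, x):
        batch, steps, feats = x.shape
        v = x.new_zeros(batch, feats)          # membrane potential
        spikes = []
        for t in range(steps):
            v = self.beta * v + x[:, t]        # leaky integration of input current
            s = spike(v - self.threshold)      # emit a spike where threshold is crossed
            v = v - s * self.threshold         # soft reset by subtraction
            spikes.append(s)
        return torch.stack(spikes, dim=1)      # (batch, time, features) binary spikes


def activity_penalty(spikes, target_rate=0.05):
    """Toy regularizer penalizing mean firing rates above a target (here 5%)."""
    rate = spikes.mean()
    return torch.relu(rate - target_rate) ** 2


# Usage: gradients flow to the input through the surrogate.
x = torch.randn(2, 50, 16, requires_grad=True)    # toy (batch, time, features) input
out = LIFLayer()(x)
loss = -out.mean() + 0.1 * activity_penalty(out)  # a task loss would replace -out.mean()
loss.backward()
```

Setting `beta = 1.0` in this sketch recovers a non-leaky integrate-and-fire variant, the counterpart that the abstract reports the LIF model significantly outperforming.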

Pages 97-103
DOI 10.1109/SLT48900.2021.9383587
Language English
Journal 2021 IEEE Spoken Language Technology Workshop (SLT)