2019 International Joint Conference on Neural Networks (IJCNN) | 2019

StepEncog: A Convolutional LSTM Autoencoder for Near-Perfect fMRI Encoding
Abstract


Learning a forward mapping from stimuli to the corresponding brain activation measured by functional magnetic resonance imaging (fMRI) is referred to as estimating an encoding model. Computational tractability forces current encoding and decoding solutions to consider only a small subset of voxels from the full 3D activation volume. Further, while reconstructing stimulus information from brain activation (brain decoding) has received wide attention, there have been only a few attempts at constructing encoding solutions in the extant neuroimaging literature. In this paper, we present StepEncog, a convolutional LSTM autoencoder trained on fMRI voxels. The model predicts the entire brain volume rather than a small subset of voxels, as in earlier work. We argue that the resulting solution avoids the problem of devising encoding models based on a rule-based selection of informative voxels, and the concomitant issue of wide spatial variability of such voxels across participants. Perturbation experiments indicate that the proposed deep encoder indeed learns to predict brain activations with high spatial accuracy. On challenging universal decoder imaging datasets, our model yielded encouraging results.
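As a rough illustration of the convolutional LSTM building block the model name refers to, the following is a minimal NumPy sketch of a single ConvLSTM cell step, in which the four gates are computed by convolving the concatenated input and hidden state. The kernel size, gate layout, and initialization here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2-D cross-correlation for small arrays.
    x: (C_in, H, W); w: (C_out, C_in, k, k) with odd k."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    _, H, W = x.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + H, dx:dx + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """One ConvLSTM step: standard LSTM gate equations, but the gate
    pre-activations come from a convolution over [x_t; h_{t-1}] instead
    of a dense matrix product, so spatial structure is preserved."""
    def __init__(self, c_in, c_hidden, k=3, rng=None):
        rng = rng or np.random.default_rng(0)
        self.c_hidden = c_hidden
        # One weight tensor producing all four gates (i, f, o, g) at once.
        self.w = rng.normal(0.0, 0.1, (4 * c_hidden, c_in + c_hidden, k, k))

    def step(self, x, h, c):
        gates = conv2d_same(np.concatenate([x, h], axis=0), self.w)
        i, f, o, g = np.split(gates, 4, axis=0)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell update
        h_new = sigmoid(o) * np.tanh(c_new)               # hidden output
        return h_new, c_new
```

In an encoder-decoder arrangement such as the one the paper describes, cells like this would be stacked with down- and up-sampling layers so the decoder can emit a full 3D activation volume rather than a voxel subset.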

Pages 1-8
DOI 10.1109/IJCNN.2019.8852339
Language English
Journal 2019 International Joint Conference on Neural Networks (IJCNN)
