
Simple But Effective GRU Variants


Abstract


Recurrent Neural Networks (RNNs) are a widely used deep learning architecture for sequence learning problems. However, RNNs are known to suffer from the exploding and vanishing gradient problems, which prevent the early layers of the network from receiving useful gradient information. Gated Recurrent Unit (GRU) networks are a particular kind of recurrent network that mitigates these problems. In this study, we propose two variants of the standard GRU with simple but effective modifications. Following an empirical approach, we assess the contributions of the current-input and recurrent terms of the gates by weighting them with different coefficients. Interestingly, we find that applying such minor and simple changes to the standard GRU yields notable improvements. We comparatively evaluate the standard GRU and the proposed two variants on four different tasks: (1) sentiment classification on the IMDB movie review dataset, (2) language modeling on the Penn TreeBank (PTB) dataset, (3) a sequence-to-sequence addition problem, and (4) question answering on Facebook's bAbI tasks dataset. The evaluation results indicate that the proposed two variants consistently outperform the standard GRU.
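The abstract does not specify exactly where the coefficients enter, so the NumPy sketch below is only an illustration under one plausible reading: hypothetical scalar coefficients `alpha` and `beta` weight the current-input and recurrent terms in the gate pre-activations of an otherwise standard GRU cell. The names, values, and placement are assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params, alpha=1.0, beta=1.0):
    """One step of a GRU cell whose gate pre-activations weight the
    current-input term by alpha and the recurrent term by beta.
    alpha and beta are hypothetical placeholders; with alpha = beta = 1
    this reduces to the standard GRU."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    # Update and reset gates with coefficient-weighted contributions.
    z = sigmoid(alpha * (Wz @ x_t) + beta * (Uz @ h_prev) + bz)
    r = sigmoid(alpha * (Wr @ x_t) + beta * (Ur @ h_prev) + br)
    # Candidate state and state interpolation follow the standard GRU.
    h_tilde = np.tanh(Wh @ x_t + Uh @ (r * h_prev) + bh)
    return (1.0 - z) * h_prev + z * h_tilde

# Example with random parameters (input dim 4, hidden dim 3):
rng = np.random.default_rng(0)
shapes = [(3, 4), (3, 3), (3,)] * 3   # Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh
params = [rng.standard_normal(s) for s in shapes]
h = gru_step(rng.standard_normal(4), np.zeros(3), params, alpha=0.5, beta=1.5)
```

Searching over such scalar weightings costs essentially nothing at inference time, which is consistent with the abstract's claim that the modifications are minor yet effective.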

Pages 1-6
DOI 10.1109/INISTA52262.2021.9548535
Language English
Journal 2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA)
