2021 2nd International Conference on Control, Robotics and Intelligent System | 2021

Cross-modal Retrieval based on Big Transfer and Regional Maximum Activation of Convolutions with Generalized Attention

 
 

Abstract


Image-text retrieval is a challenging topic, since image features still cannot adequately represent high-level semantic information, even though representation ability has improved thanks to advances in deep learning. This paper proposes a cross-modal image-text retrieval framework (BiTGRMAC) based on Big Transfer and regional maximum activation of convolutions with generalized attention, in which Big Transfer (BiT), pre-trained on a large amount of data, is used to extract image features and is fine-tuned on the cross-modal image datasets. At the same time, a new generalized-attention regional maximum activation of convolutions (GRMAC) descriptor is introduced into BiT; it generates image features through an attention mechanism, reducing the influence of background clutter and highlighting the target. For texts, the widely used Sentence CNN is adopted to extract text features. The parameters of the image and text deep models are learned by minimizing a cross-modal loss function in an end-to-end framework. Experimental results on three widely used datasets show that this method effectively improves retrieval accuracy.
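The abstract does not give the GRMAC formulation itself. The following is a minimal sketch of what an R-MAC-style descriptor with an attention weighting might look like: regional max-pooling over a multi-scale grid of square regions, with a simple power-based spatial attention map. The region-sampling scheme, the attention map (channel-summed activations raised to a power p), and the function names `rmac_regions` and `grmac_descriptor` are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def rmac_regions(H, W, levels=2):
    """Generate square regions over an H x W feature map at multiple
    scales, R-MAC style (simplified: uniform placement, square windows)."""
    regions = []
    for l in range(1, levels + 1):
        size = int(2 * min(H, W) / (l + 1))  # region side length at this scale
        if size < 1:
            continue
        ys = np.linspace(0, H - size, l + 1).astype(int) if H > size else [0]
        xs = np.linspace(0, W - size, l + 1).astype(int) if W > size else [0]
        for y in ys:
            for x in xs:
                regions.append((y, x, size))
    return regions

def grmac_descriptor(fmap, p=3.0):
    """fmap: (C, H, W) convolutional feature map.
    Returns an L2-normalized global descriptor of shape (C,).
    Attention weighting (an assumption here): the channel-summed
    activation map, normalized and raised to the power p."""
    fmap = np.maximum(fmap, 0.0)           # non-negative activations
    C, H, W = fmap.shape
    att = fmap.sum(axis=0)
    att = (att / (att.max() + 1e-8)) ** p  # spatial attention weights in [0, 1]
    weighted = fmap * att[None, :, :]
    desc = np.zeros(C)
    for (y, x, s) in rmac_regions(H, W):
        # max-pool each region per channel, then sum region vectors
        desc += weighted[:, y:y + s, x:x + s].max(axis=(1, 2))
    return desc / (np.linalg.norm(desc) + 1e-8)
```

Under this sketch, larger p sharpens the attention map toward the strongest activations, which is one plausible way to suppress background responses before regional max-pooling, in line with the goal stated in the abstract.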

DOI 10.1145/3483845.3483872
Language English
Journal 2021 2nd International Conference on Control, Robotics and Intelligent System
