Inf. Process. Manag. | 2021

Multi-level similarity learning for image-text retrieval

Abstract


Image-text retrieval has been a popular research topic and attracts growing interest because it bridges the computer vision and natural language processing communities and involves two different modalities. Although many methods have made great progress on this task, it remains challenging because of the difficulty of learning the correspondence between two heterogeneous modalities. In this paper, we propose a multi-level representation learning method for image-text retrieval, which exploits semantic-level, structural-level and contextual-level information to improve the quality of visual and textual representations. To exploit semantic-level information, we first extract high-frequency nouns, adjectives and numerals as semantic labels and adopt a multi-label convolutional neural network framework to encode them. To explore the structural-level information of an image-text pair, we first construct two graphs that encode the visual and textual information within each modality, and then apply graph matching with a triplet loss to reduce the cross-modal discrepancy. To further improve the retrieval results, we utilize contextual-level information from the two modalities to refine the ranked list and enhance retrieval quality. Extensive experiments on Flickr30k and MSCOCO, two commonly used datasets for image-text retrieval, demonstrate the superiority of the proposed method.
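
The abstract mentions graph matching trained with a triplet loss to reduce the cross-modal discrepancy, but the loss itself is not spelled out here. The sketch below is a minimal illustration of one common hinge-based triplet ranking objective (the hardest-negative variant) over matched image-text embedding pairs; the PyTorch framing, function name, and margin value are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch

def triplet_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Hinge-based triplet ranking loss over a batch of matched
    image-text embedding pairs (hardest-negative variant).

    img_emb, txt_emb: (batch, dim) L2-normalized embeddings where
    row i of each tensor forms a positive pair.
    """
    # Cosine similarity matrix; the diagonal holds positive-pair scores.
    scores = img_emb @ txt_emb.t()
    pos = scores.diag().view(-1, 1)

    # Margin violations in the image-to-text and text-to-image directions.
    cost_i2t = (margin + scores - pos).clamp(min=0)
    cost_t2i = (margin + scores - pos.t()).clamp(min=0)

    # Mask out the positive pairs on the diagonal.
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_i2t = cost_i2t.masked_fill(mask, 0)
    cost_t2i = cost_t2i.masked_fill(mask, 0)

    # Penalize only the hardest negative per anchor in each direction.
    return cost_i2t.max(dim=1)[0].mean() + cost_t2i.max(dim=0)[0].mean()
```

In such a setup, `img_emb` and `txt_emb` would be the pooled graph-level embeddings of the two modalities, and the margin would typically be tuned on a validation split.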

Volume 58
Pages 102432
DOI 10.1016/j.ipm.2020.102432
Language English
Journal Inf. Process. Manag.
