
TranstextNet: Transducing Text for Recognizing Unseen Visual Relationships


Abstract


An important challenge in visual scene understanding is the recognition of interactions between objects in an image. This task, often called visual relationship detection (VRD), must be solved to enable a higher-level understanding of the semantic content of images. VRD can become particularly hard when some of the potentially involved objects suffer from severe statistical sparsity and when examples of many relationships are scarce in standard training sets. In this paper we show how to transduce auxiliary text so as to enable recognition of relationships absent from the visual training data. This transduction is performed by learning a shared relationship representation for both the textual and the visual information. The proposed approach is model-agnostic and can be used as a plug-in module in existing VRD and scene graph generation (SGG) recognition systems to improve their performance and extend their capabilities. We consider the application of our technique to three widely accepted SGG models [20], [24], [16], and to different auxiliary text sources: image captions, text generated by a deep text generation model (GPT-2), and ebooks from Project Gutenberg. We conduct an extensive empirical study of both the VRD and SGG tasks over large-scale benchmark datasets. Our method is the first to enable recognition of visual relationships that are missing from the visual training data and appear only in the auxiliary text. We conclusively show that text ingestion enables recognition of unseen visual relationships and, moreover, advances the state of the art in all SGG tasks.
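The shared-representation idea described above can be illustrated with a minimal sketch: text-derived and visual features of a subject-predicate-object relationship are projected into one embedding space and aligned with a contrastive objective, so a predicate that appears only in the auxiliary text can still be matched against visual features at inference time. All module names, feature dimensions, and the InfoNCE-style loss below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedRelationshipSpace(nn.Module):
    """Hypothetical two-branch projector into a shared relationship space."""

    def __init__(self, text_dim=300, visual_dim=1024, shared_dim=256):
        super().__init__()
        # Projects a text-derived triple embedding (e.g., pooled word
        # vectors for "person riding horse") into the shared space.
        self.text_proj = nn.Sequential(
            nn.Linear(text_dim, shared_dim), nn.ReLU(),
            nn.Linear(shared_dim, shared_dim))
        # Projects visual features of a detected subject-object pair
        # into the same space.
        self.visual_proj = nn.Sequential(
            nn.Linear(visual_dim, shared_dim), nn.ReLU(),
            nn.Linear(shared_dim, shared_dim))

    def forward(self, text_feats, visual_feats):
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        v = F.normalize(self.visual_proj(visual_feats), dim=-1)
        return t, v

def contrastive_loss(t, v, temperature=0.07):
    # InfoNCE-style alignment: row i of `t` and row i of `v` describe the
    # same relationship; all other rows in the batch serve as negatives.
    logits = v @ t.T / temperature
    targets = torch.arange(v.size(0), device=v.device)
    return F.cross_entropy(logits, targets)

# Toy usage: align a batch of 8 paired text/visual relationship features.
model = SharedRelationshipSpace()
text_feats = torch.randn(8, 300)
visual_feats = torch.randn(8, 1024)
t, v = model(text_feats, visual_feats)
loss = contrastive_loss(t, v)
loss.backward()
```

Under this reading, recognizing a relationship unseen in the visual data reduces to nearest-neighbor search in the shared space between a candidate pair's visual embedding and the text embeddings of all known predicates, including those learned only from the auxiliary text.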

Pages 1954-1963
DOI 10.1109/WACV48630.2021.00200
Language English
Journal 2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
