Neurocomputing | 2021

Context-sensitive zero-shot semantic segmentation model based on meta-learning

Abstract

Zero-shot semantic segmentation requires models with strong image-understanding ability. Most current solutions are based on direct mapping or on generation. Such schemes are effective for zero-shot recognition, but they cannot fully transfer the visual dependencies between objects in the more complex scenes that semantic segmentation involves. More importantly, the predictions become seriously biased toward the seen categories of the training set, which makes it difficult to recognize unseen categories accurately. To address these two problems, we propose a novel zero-shot semantic segmentation model based on meta-learning. Observing that a purely semantic-space representation has limitations for zero-shot learning, we build on the usual semantic transfer by first transferring shared information in the visual space through an added context module, and then transferring it in the joint visual-semantic dual space. To counter the bias problem, we improve the adaptability of the model parameters by adjusting the dual-space parameters through meta-learning, so that the model can segment new categories even without reference samples. Experiments show that our algorithm outperforms the best existing methods for zero-shot segmentation on three datasets: Pascal-VOC 2012, Pascal-Context and COCO-Stuff.
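The abstract outlines two mechanisms: a context module that transfers shared information in the visual space before pixels are matched to class word embeddings in a shared visual-semantic space, and a meta-learned adjustment of the dual-space parameters to counter seen-class bias. The sketch below illustrates both ideas in PyTorch; every module name, dimension, and design choice (dilated-convolution context, cosine scoring, a single MAML-style inner step) is an assumption made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class ContextModule(nn.Module):
    """Aggregates spatial context with parallel dilated convolutions
    (an assumed design; the paper's context module may differ)."""

    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))


class DualSpaceHead(nn.Module):
    """Scores each pixel against class word embeddings in a shared space,
    so unseen classes can be segmented from their embeddings alone."""

    def __init__(self, channels, embed_dim):
        super().__init__()
        self.context = ContextModule(channels)
        self.project = nn.Conv2d(channels, embed_dim, 1)

    def forward(self, feats, class_embeddings):
        # feats: (B, C, H, W) backbone features; class_embeddings: (K, E)
        z = F.normalize(self.project(self.context(feats)), dim=1)
        w = F.normalize(class_embeddings, dim=1)
        return torch.einsum("behw,ke->bkhw", z, w)  # per-pixel class logits


def meta_step(head, optimizer, support, query, inner_lr=0.01):
    """One MAML-style episode: adapt on pseudo-seen classes (support),
    then update the real parameters from the loss on held-out
    pseudo-unseen classes (query), discouraging seen-class bias."""
    feats_s, masks_s, emb_s = support
    feats_q, masks_q, emb_q = query

    # Inner step: one gradient update on the support classes.
    loss_s = F.cross_entropy(head(feats_s, emb_s), masks_s)
    names, params = zip(*head.named_parameters())
    grads = torch.autograd.grad(loss_s, params, create_graph=True)
    fast = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Outer step: evaluate the adapted weights on the query classes and
    # backpropagate into the original parameters.
    loss_q = F.cross_entropy(functional_call(head, fast, (feats_q, emb_q)), masks_q)
    optimizer.zero_grad()
    loss_q.backward()
    optimizer.step()
    return loss_q.item()


# One illustrative episode on random tensors (shapes only, no real data):
head = DualSpaceHead(channels=64, embed_dim=50)
opt = torch.optim.SGD(head.parameters(), lr=0.1)
episode = lambda k: (torch.randn(2, 64, 16, 16),        # features
                     torch.randint(0, k, (2, 16, 16)),  # pixel labels
                     torch.randn(k, 50))                # class embeddings
meta_step(head, opt, support=episode(5), query=episode(3))
```

In a real pipeline the episodes would be built by repeatedly splitting the seen classes of the training set into disjoint pseudo-seen and pseudo-unseen subsets, so the outer update rewards parameters that transfer to classes absent from the inner step.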

DOI 10.1016/j.neucom.2021.08.120
Language English
Journal Neurocomputing
