Computer Vision and Image Understanding | 2021

Handling new target classes in semantic segmentation with domain adaptation

 
 
 
 

Abstract


In this work, we define and address a novel domain adaptation (DA) problem in semantic scene segmentation, where the target domain not only exhibits a data distribution shift w.r.t. the source domain, but also includes novel classes that do not exist in the latter. Unlike “open-set” (Panareda Busto and Gall, 2017) and “universal domain adaptation” (You et al., 2019), which both regard all objects from new classes as “unknown”, we aim at explicit test-time prediction for these new classes. To reach this goal, we propose a framework that leverages domain adaptation and zero-shot learning techniques to enable “boundless” adaptation in the target domain. It relies on a novel architecture, along with a dedicated learning scheme, to bridge the source-target domain gap while learning how to map new classes’ labels to relevant visual representations. The performance is further improved by self-training on target-domain pseudo-labels. For validation, we consider different domain adaptation set-ups, namely synthetic-2-real, country-2-country and dataset-2-dataset. Our framework outperforms the baselines by significant margins, setting competitive standards on all benchmarks for the new task. Code and models are available at: https://github.com/valeoai/buda.
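The abstract points to two ingredients: a mapping from class-label embeddings to visual representations (so that classes never annotated in the source domain can still be predicted at test time), and self-training on confidence-filtered target pseudo-labels. The PyTorch sketch below is only an illustrative reading of these two ideas, not the architecture proposed in the paper or released in the repository; all module names (LabelProjector, segment_with_prototypes, pseudo_label), dimensions and thresholds are assumptions made for the example.

# Illustrative sketch (NOT the authors' method): (1) classify pixels by cosine
# similarity to class prototypes projected from label (word) embeddings, so a
# new class only needs its label embedding to enter the label space;
# (2) build confidence-thresholded pseudo-labels on target images for
# self-training. Shapes and hyper-parameters are assumed for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, FEAT_DIM, H, W = 300, 64, 32, 32      # assumed word-embedding / feature sizes

class LabelProjector(nn.Module):
    """Maps class-label embeddings (e.g. word2vec vectors) into the visual feature space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(EMB_DIM, FEAT_DIM), nn.ReLU(),
                                  nn.Linear(FEAT_DIM, FEAT_DIM))

    def forward(self, label_emb):               # (C, EMB_DIM) -> (C, FEAT_DIM)
        return F.normalize(self.proj(label_emb), dim=-1)

def segment_with_prototypes(pixel_feats, class_protos, temperature=0.1):
    """Per-pixel logits = scaled cosine similarity between pixel features and class prototypes."""
    pixel_feats = F.normalize(pixel_feats, dim=1)            # (B, FEAT_DIM, H, W)
    logits = torch.einsum("bdhw,cd->bchw", pixel_feats, class_protos)
    return logits / temperature                              # (B, C, H, W)

def pseudo_label(logits, threshold=0.9, ignore_index=255):
    """Keep only confident target predictions; the rest is ignored by the loss."""
    probs = logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = ignore_index
    return labels

# Toy usage: segment target images, then self-train on the resulting pseudo-labels.
backbone = nn.Conv2d(3, FEAT_DIM, 3, padding=1)   # stand-in for a real segmentation encoder
projector = LabelProjector()
label_embeddings = torch.randn(5, EMB_DIM)        # 5 classes, including unseen ones
target_images = torch.randn(2, 3, H, W)

feats = backbone(target_images)
protos = projector(label_embeddings)
logits = segment_with_prototypes(feats, protos)
pl = pseudo_label(logits)                         # (2, H, W) pseudo-label maps
loss = F.cross_entropy(logits, pl, ignore_index=255)   # self-training loss on target pixels

Comparing pixel features to prototypes derived from label embeddings is a common zero-shot segmentation pattern, which is why a class absent from source annotations can still be predicted; the confidence threshold simply controls how aggressively target pseudo-labels are trusted during self-training.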

Article number 103258
DOI 10.1016/j.cviu.2021.103258
Language English
Journal Computer Vision and Image Understanding

Full Text