IEEE Transactions on Image Processing | 2021

Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation


Abstract


Image-to-image translation aims to transfer images from a source domain to a target domain. Conditional Generative Adversarial Networks (GANs) have enabled a variety of such applications. Early GAN-based models typically contain a single generator for producing a target image. Recently, using multiple generators has shown promising results in various tasks. However, the generators in these works are typically of homogeneous architectures. In this paper, we argue that heterogeneous generators complement each other and benefit image generation. By heterogeneous, we mean that the generators have different architectures, focus on different positions, and operate over multiple scales. To this end, we build two generators using a deep U-Net and a shallow residual network, respectively. The former comprises a series of down-sampling and up-sampling layers, which typically yield a large receptive field and strong spatial locality. In contrast, the residual network has small receptive fields and excels at characterizing details, especially textures and local patterns. We then use a gated fusion network to combine the outputs of these two generators into a final result. The gated fusion unit automatically induces the heterogeneous generators to focus on different positions and complement each other. Finally, we propose a novel approach to integrating multi-level and multi-scale features in the discriminator. This multi-layer integration discriminator encourages the generators to produce realistic details from coarse to fine scales. We quantitatively and qualitatively evaluate our model on various benchmark datasets. Experimental results demonstrate that our method significantly improves the quality of transferred images across a variety of image-to-image translation tasks. We have made our code and results publicly available: http://aiart.live/chan/.
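
To make the gated-fusion idea concrete, the following PyTorch snippet is a minimal sketch, not the authors' released code: the module name GatedFusion, the 3x3 convolutional gate, and the per-pixel convex combination are illustrative assumptions, and the two generator branches (here replaced by identity stubs) stand in for the paper's deep U-Net and shallow residual network.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Blend two generator outputs with a learned per-pixel gate.

    Hypothetical sketch of the abstract's gated fusion unit; the
    actual fusion network in the paper may differ.
    """
    def __init__(self, channels: int = 3):
        super().__init__()
        # The gate sees both candidate images and emits a soft mask in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, y_unet: torch.Tensor, y_resnet: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([y_unet, y_resnet], dim=1))
        # Convex combination: the gate decides, per position, which branch
        # to trust more, encouraging the two generators to specialize.
        return g * y_unet + (1.0 - g) * y_resnet

if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)   # source-domain image
    y_unet, y_resnet = x, x           # identity stubs for the two branches
    fused = GatedFusion(channels=3)(y_unet, y_resnet)
    print(fused.shape)                # torch.Size([1, 3, 256, 256])

A sigmoid gate keeps the blend convex, so each output pixel is an interpolation between the two branches; this is one natural way to let a large-receptive-field branch supply global structure while a small-receptive-field branch contributes textures and local patterns.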

Volume 30
Pages 3487-3498
DOI 10.1109/TIP.2021.3061286
Language English
Journal IEEE Transactions on Image Processing
