IEEE Transactions on Multimedia | 2021

Fast Adaptive Meta-Learning for Few-shot Image Generation


Abstract


Generative Adversarial Networks (GANs) can synthesise realistic new images and estimate the underlying distribution of samples through adversarial learning. Nevertheless, conventional GANs require a large number of training samples to produce plausible results. Inspired by the human ability to learn new concepts quickly from a small number of examples, several meta-learning approaches for few-shot settings have been proposed. However, most meta-learning algorithms are designed to tackle few-shot classification and reinforcement learning tasks. Moreover, existing meta-learning models for image generation are complex, which prolongs the required training time. This study proposes Fast Adaptive Meta-Learning (FAML), a few-shot image generation model based on a GAN and an encoder network. The model can generate realistic new images of previously unseen target classes from only a small number of examples. FAML converges 10 times faster and requires only one-fourth of the trainable parameters of baseline models, achieved by training a simpler network conditioned on feature vectors from the encoder while increasing the number of generator iterations. Visualisation results are presented in the paper. The model improves few-shot image generation, achieving the lowest FID score, the highest IS, and comparable LPIPS on the MNIST, Omniglot, VGG-Faces, and miniImageNet datasets.
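
The abstract describes conditioning a simple generator on feature vectors produced by an encoder from the few available examples of an unseen class. The following is a minimal sketch of that general conditioning mechanism, not the authors' FAML architecture: the module names, layer sizes, and dimensions (FewShotEncoder, ConditionalGenerator, feat_dim, noise_dim) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch only; layer sizes and module names are assumptions,
# not the FAML architecture reported in the paper.

class FewShotEncoder(nn.Module):
    """Maps a few support images of an unseen class to one conditioning vector."""
    def __init__(self, in_channels=1, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, support_images):                   # (k, C, H, W)
        feats = self.conv(support_images).flatten(1)      # (k, 64)
        return self.fc(feats).mean(dim=0)                 # (feat_dim,) class-level vector


class ConditionalGenerator(nn.Module):
    """Generates an image from noise concatenated with the encoder's feature vector."""
    def __init__(self, noise_dim=32, feat_dim=64, img_size=28):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(noise_dim + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size), nn.Tanh(),
        )

    def forward(self, noise, class_feat):                 # (B, noise_dim), (feat_dim,)
        cond = class_feat.unsqueeze(0).expand(noise.size(0), -1)
        out = self.net(torch.cat([noise, cond], dim=1))
        return out.view(-1, 1, self.img_size, self.img_size)


# Usage: condition on 5 support images of an unseen class, then sample new images.
encoder, generator = FewShotEncoder(), ConditionalGenerator()
support = torch.randn(5, 1, 28, 28)                       # 5-shot support set (dummy data)
class_feat = encoder(support)
fake = generator(torch.randn(16, 32), class_feat)         # 16 generated samples
```

In a few-shot generation setup of this kind, the discriminator would typically receive the same class-level feature vector so that adversarial training adapts the generator to the target class from only the support examples.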

Pages 1-1
DOI 10.1109/TMM.2021.3077729
Language English
Journal IEEE Transactions on Multimedia
