Appl. Intell. | 2021

Image generation and constrained two-stage feature fusion for person re-identification


Abstract


Generative adversarial networks (GANs) are widely used in person re-identification to expand training data by generating auxiliary images. However, it is generally believed that using too much generated data during training reduces the accuracy of re-identification models. In this study, an improved generator and a constrained two-stage fusion network are proposed. A novel gesture discriminator embedded in the generator evaluates the completeness of skeleton pose images. The improved generator makes the generated images more realistic, which facilitates feature extraction. The constrained two-stage fusion network extracts and exploits the genuine information contained in the generated images for person re-identification. Unlike previous studies, the fusion of shallow features is also considered in this work. Specifically, the proposed network has two branches based on the ResNet50 architecture: one branch fuses the images generated by the generative adversarial network, and the other fuses the result of the first stage with the original image. Experimental results show that our method outperforms most existing comparable methods on Market-1501 and DukeMTMC-reID.
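
The two-branch design described above can be outlined concretely. The following is a minimal PyTorch sketch of the constrained two-stage fusion idea, assuming ResNet50 backbones from torchvision and simple mean-pooling and concatenation as the fusion rules; the class name, the fusion operators, and the choice of 751 identities (Market-1501's training set) are illustrative assumptions rather than the authors' exact design, and the improved generator and gesture discriminator are not included.

# Sketch only: module names and fusion operators are assumptions,
# not the authors' exact architecture.
import torch
import torch.nn as nn
from torchvision import models


class TwoStageFusionNet(nn.Module):
    """Two ResNet50-based branches: stage 1 fuses GAN-generated images,
    stage 2 fuses the stage-1 result with the original image."""

    def __init__(self, num_identities: int, feat_dim: int = 2048):
        super().__init__()
        # Branch 1: extracts features from the generated images.
        self.branch_gen = self._resnet50_backbone()
        # Branch 2: extracts features from the original image.
        self.branch_orig = self._resnet50_backbone()
        # Identity classifier over the fused (concatenated) feature.
        self.classifier = nn.Linear(feat_dim * 2, num_identities)

    @staticmethod
    def _resnet50_backbone() -> nn.Module:
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()  # keep the 2048-d pooled feature
        return backbone

    def forward(self, generated: torch.Tensor, original: torch.Tensor):
        # generated: (B, K, 3, H, W) - K auxiliary images produced by the GAN
        # original:  (B, 3, H, W)    - the real image
        b, k, c, h, w = generated.shape
        gen_feats = self.branch_gen(generated.view(b * k, c, h, w))
        # Stage 1: fuse the generated-image features (mean pooling here).
        fused_gen = gen_feats.view(b, k, -1).mean(dim=1)
        # Stage 2: fuse the stage-1 result with the original-image feature.
        orig_feat = self.branch_orig(original)
        fused = torch.cat([fused_gen, orig_feat], dim=1)
        return self.classifier(fused), fused


if __name__ == "__main__":
    net = TwoStageFusionNet(num_identities=751)  # 751 training IDs in Market-1501
    logits, feat = net(torch.randn(2, 4, 3, 256, 128), torch.randn(2, 3, 256, 128))
    print(logits.shape, feat.shape)  # torch.Size([2, 751]) torch.Size([2, 4096])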

Volume 51
Pages 7679-7689
DOI 10.1007/s10489-021-02271-z
Language English
Journal Appl. Intell.
