Neural Processing Letters | 2019

Inferring Personality Traits from Attentive Regions of User Liked Images Via Weakly Supervised Dual Convolutional Network


Abstract


In social media, users often unconsciously reveal their preferences through the images they like, which can serve as personal cues for inferring their personality traits. Existing methods map holistic image features to personality traits. However, users' attention on their liked images is typically localized, which should be taken into account when modeling personality traits. In this paper, we propose an end-to-end weakly supervised dual convolutional network (WSDCN) for personality prediction, which consists of a classification network and a regression network. The classification network captures personality class-specific attentive image regions while requiring only image-level personality class labels. The regression network predicts the personality traits. Firstly, each user's Big-Five (BF) traits are converted into ten personality class labels for the user's liked images. Secondly, a Multi-Personality Class Activation Map (MPCAM) is generated from the classification network and used as a localized activation to produce local deep features, which are then combined with the holistic deep features for the regression task. Finally, the user-liked images and the associated personality traits are used to train the end-to-end WSDCN model. The proposed method predicts all five BF personality traits simultaneously by training the WSDCN network only once. Experimental results on the annotated PsychoFlickr database show that the proposed method is superior to state-of-the-art approaches.
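To make the abstract's pipeline concrete, below is a minimal sketch of how a dual-branch network with class-activation-map weighted pooling could be wired. It is not the authors' released implementation: the ResNet-18 backbone, layer sizes, the way the CAMs are averaged into a spatial mask, and all identifiers are illustrative assumptions; only the overall idea (a 10-class classification branch whose activation maps localize attentive regions, and a regression branch fed holistic plus local features to predict the five BF traits) follows the abstract.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class WSDCNSketch(nn.Module):
    """Illustrative dual network (hypothetical layout):
    - a classification branch over 10 personality class labels whose
      weights also produce class activation maps (CAMs);
    - a regression branch predicting the 5 Big-Five trait scores from
      holistic + CAM-weighted local features."""

    def __init__(self, num_classes=10, num_traits=5):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep only the convolutional layers; output is a 512-channel feature map.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.gap = nn.AdaptiveAvgPool2d(1)
        # Classification head; its weights double as CAM projection weights.
        self.classifier = nn.Linear(512, num_classes, bias=False)
        # Regression head takes concatenated holistic + local features.
        self.regressor = nn.Linear(512 * 2, num_traits)

    def forward(self, x):
        fmap = self.features(x)                        # (B, 512, H, W)
        holistic = self.gap(fmap).flatten(1)           # (B, 512)
        logits = self.classifier(holistic)             # (B, 10)

        # Multi-class activation maps: project the feature map with the
        # classifier weights, one map per personality class.
        cams = torch.einsum('kc,bchw->bkhw', self.classifier.weight, fmap)
        cams = torch.relu(cams)
        # Fuse class maps into a single spatial mask normalized to [0, 1]
        # (an assumed fusion; the paper's MPCAM may combine them differently).
        mask = cams.mean(dim=1, keepdim=True)
        mask = mask / (mask.amax(dim=(2, 3), keepdim=True) + 1e-6)

        # Local features: attention-weighted average pooling of the feature map.
        local = (fmap * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

        traits = self.regressor(torch.cat([holistic, local], dim=1))
        return logits, traits

# Usage: the classification output would be trained with the image-level
# class labels, the regression output with the users' BF trait scores.
model = WSDCNSketch()
images = torch.randn(2, 3, 224, 224)
class_logits, trait_scores = model(images)
print(class_logits.shape, trait_scores.shape)  # torch.Size([2, 10]) torch.Size([2, 5])
```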

Volume 51
Pages 2105-2121
DOI 10.1007/s11063-019-09987-7
Language English
Journal Neural Processing Letters
