IEEE Transactions on Multimedia | 2019

Hierarchy-Dependent Cross-Platform Multi-View Feature Learning for Venue Category Prediction


Abstract


In this paper, we focus on visual venue category prediction, which can facilitate various applications in location-based services and personalization. Considering the complementarity of different media platforms, it is reasonable to leverage venue-relevant media data from multiple platforms to boost prediction performance. Intuitively, recognizing a venue category involves multiple semantic cues, especially objects and scenes, and thus both should contribute to venue category prediction. In addition, venue categories can be organized in a natural hierarchical structure, which provides prior knowledge to guide venue category estimation. Taking these aspects into account, we propose a Hierarchy-dependent Cross-platform Multi-view Feature Learning (HCM-FL) framework for venue category prediction from videos by leveraging images from other platforms. HCM-FL includes two major components, namely Cross-Platform Transfer Deep Learning (CPTDL) and Multi-View Feature Learning with the Hierarchical Venue Structure (MVFL-HVS). CPTDL reinforces the deep network learned from videos using images from other platforms. Specifically, CPTDL first trains a deep network on videos; images from other platforms are then filtered by this learned network, and the selected images are fed back into it for further training. Two kinds of pre-trained networks, one on ImageNet and one on Places, are employed, so both object-oriented and scene-oriented deep features can be harnessed through the enhanced networks. MVFL-HVS is then developed to enable multi-view feature fusion; it embeds the hierarchical venue structure ontology to support more discriminative joint feature learning. We conduct experiments on videos from Vine and images from Foursquare. The experimental results demonstrate the advantage of our proposed framework in jointly utilizing multi-platform data, multi-view deep features, and hierarchical venue structure knowledge.
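The cross-platform enhancement step in CPTDL can be read as a train-filter-retrain loop. The sketch below is a minimal illustration in PyTorch, not the paper's implementation: it assumes a ResNet-50 backbone pre-trained on ImageNet (a Places-pretrained backbone would be handled identically), hypothetical data loaders video_frame_loader and foursquare_image_loader, and a simple confidence threshold for selecting cross-platform images; none of these names, nor the threshold value, come from the paper.

```python
# Minimal sketch of a CPTDL-style train-filter-retrain loop.
# Assumptions (not from the paper): ResNet-50 backbone, hypothetical loaders,
# confidence-based filtering of cross-platform images.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_VENUE_CATEGORIES = 188  # placeholder; depends on the dataset's taxonomy

def build_backbone():
    # Object-oriented branch: ImageNet-pretrained ResNet-50.
    # A scene-oriented branch would start from Places-pretrained weights instead.
    net = models.resnet50(pretrained=True)
    net.fc = nn.Linear(net.fc.in_features, NUM_VENUE_CATEGORIES)
    return net

def fine_tune(net, loader, epochs=1, lr=1e-4):
    # Standard supervised fine-tuning on (image, venue-label) batches.
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    net.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(net(images), labels)
            loss.backward()
            opt.step()
    return net

def filter_cross_platform(net, loader, threshold=0.8):
    # Keep only images whose predicted venue category agrees with their tag
    # at high confidence, i.e., images the video-trained network "trusts".
    kept = []
    net.eval()
    with torch.no_grad():
        for images, labels in loader:
            probs = torch.softmax(net(images), dim=1)
            conf, pred = probs.max(dim=1)
            mask = (pred == labels) & (conf > threshold)
            for img, lab in zip(images[mask], labels[mask]):
                kept.append((img, lab))
    return kept

# CPTDL-style loop (hypothetical loaders):
# 1) train on video keyframes; 2) filter Foursquare images; 3) retrain with them.
# net = fine_tune(build_backbone(), video_frame_loader)
# selected = filter_cross_platform(net, foursquare_image_loader)
# net = fine_tune(net, torch.utils.data.DataLoader(selected, batch_size=32))
```

In the full framework, both the ImageNet-pretrained and Places-pretrained networks are enhanced in this fashion, and the resulting object-oriented and scene-oriented features are subsequently fused by MVFL-HVS together with the hierarchical venue ontology.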

Volume 21
Pages 1609-1619
DOI 10.1109/TMM.2018.2876830
Language English
Journal IEEE Transactions on Multimedia

Full Text