Publications


Featured research published by Fuhai Chen.


International Conference on Multimedia and Expo | 2015

Multimodal hypergraph learning for microblog sentiment prediction

Fuhai Chen; Yue Gao; Donglin Cao; Rongrong Ji

Microblog sentiment analysis has attracted extensive research attention in the recent literature. However, most existing works focus mainly on the textual modality, ignoring the contribution of visual information, which accounts for an ever-increasing share of how users express emotion. In this paper, we propose to employ a hypergraph structure to jointly formulate textual, visual, and emoticon information for sentiment prediction. The constructed hypergraph captures the similarities of tweets across different modalities, where each vertex represents a tweet and each hyperedge is formed by a “centroid” vertex and its k-nearest neighbors in each modality. Transductive inference is then conducted to learn the relevance scores among tweets for sentiment prediction. In this way, both intra- and inter-modality dependencies are taken into consideration. Experiments conducted on over 6,000 microblog tweets demonstrate the superiority of our method, which achieves 86.77% accuracy, a 7% improvement over state-of-the-art methods.
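
A minimal sketch of the hyperedge construction described above, in Python/NumPy. The per-modality feature matrices, the neighborhood size k, and all names below are illustrative assumptions; the abstract does not specify the feature extractors or interfaces.

```python
import numpy as np

def build_incidence(features: np.ndarray, k: int) -> np.ndarray:
    """One hyperedge per tweet: the "centroid" vertex joined with its
    k nearest neighbors in this modality's feature space."""
    n = features.shape[0]
    # pairwise Euclidean distances between tweets
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    H = np.zeros((n, n))  # rows: vertices (tweets), columns: hyperedges
    for e in range(n):
        members = np.argsort(dists[e])[: k + 1]  # centroid itself + k nearest
        H[members, e] = 1.0
    return H

def multimodal_incidence(modalities: list, k: int = 5) -> np.ndarray:
    """Stack per-modality incidence matrices (textual, visual, emoticon)
    so that both intra- and inter-modality similarities enter the inference."""
    return np.hstack([build_incidence(f, k) for f in modalities])
```

The stacked incidence matrix then feeds the standard hypergraph Laplacian used for transductive inference over the tweets' sentiment labels.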


ACM Multimedia | 2017

StructCap: Structured Semantic Embedding for Image Captioning

Fuhai Chen; Rongrong Ji; Jinsong Su; Yongjian Wu; Yunsheng Wu

Image captioning has attracted ever-increasing research attention in multimedia and computer vision. To encode the visual content, existing approaches typically utilize an off-the-shelf deep Convolutional Neural Network (CNN) to extract visual features, which are fed to Recurrent Neural Network (RNN) based textual generators to output word sequences. More recently, some methods have encoded visual objects and scene information with attention mechanisms. Despite this promising progress, one distinct shortcoming remains: distinguishing and modeling key semantic entities and their relations, which are widely regarded as important cues for describing image content. In this paper, we propose a novel image captioning model, termed StructCap. It parses a given image into key entities and their relations, organized in a visual parsing tree, which is transformed and embedded under an encoder-decoder framework via visual attention. We give an end-to-end formulation to facilitate joint training of the visual tree parser, the structured semantic attention, and the RNN-based captioning modules. Experimental results on two public benchmarks, Microsoft COCO and Flickr30K, show that the proposed StructCap model outperforms state-of-the-art approaches under various standard evaluation metrics.
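
As a rough illustration of the structured semantic attention step (not the authors' implementation), the sketch below attends over embeddings of the visual parse-tree nodes at each decoding step; the array names and the bilinear scoring form are assumptions.

```python
import numpy as np

def structured_attention(node_embs: np.ndarray, decoder_state: np.ndarray,
                         W: np.ndarray) -> np.ndarray:
    """node_embs: (num_nodes, d_node) tree-node embeddings;
    decoder_state: (d_dec,) current RNN hidden state;
    W: (d_node, d_dec) bilinear scoring matrix."""
    scores = node_embs @ W @ decoder_state        # one score per tree node
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over tree nodes
    return weights @ node_embs                    # attended context vector
```

The attended context vector would then condition the RNN at each step, letting the decoder focus on different entities and relations while generating the caption.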


Frontiers of Computer Science in China | 2016

Survey of visual sentiment prediction for social media analysis

Rongrong Ji; Donglin Cao; Yiyi Zhou; Fuhai Chen

Recent years have witnessed the rapid spread of multimodal microblogs, such as those on Twitter and Sina Weibo, composed of images, text, and emoticons. Visual sentiment prediction for such microblog-based social media has recently attracted ever-increasing research focus, with broad application prospects. In this paper, we give a systematic review of recent advances and cutting-edge techniques for visual sentiment analysis, providing detailed comparisons as well as experimental evaluations of the state-of-the-art methods. We further discuss future trends and potential directions for visual sentiment prediction.


ACM Multimedia | 2015

A Cross-media Sentiment Analytics Platform For Microblog

Chao Chen; Fuhai Chen; Donglin Cao; Rongrong Ji

In this demo, a cross-media public sentiment analysis system is presented. The system visualizes the sentiments of microblog data by organizing the results by region, topic, and content, respectively. Sentiment is obtained by fusing sentiment classification scores from the visual and textual channels, so that social multimedia sentiment is presented in a multi-level, user-friendly form.
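
A minimal sketch of the score-level fusion this demo describes; the abstract does not state the fusion rule, so the convex combination and the weight alpha below are assumptions.

```python
def fuse_sentiment(text_score: float, visual_score: float,
                   alpha: float = 0.6) -> float:
    """Convex combination of per-channel sentiment scores in [0, 1];
    alpha weights the textual channel against the visual one."""
    return alpha * text_score + (1 - alpha) * visual_score
```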


IEEE Transactions on Multimedia | 2018

Predicting Microblog Sentiments via Weakly Supervised Multimodal Deep Learning

Fuhai Chen; Rongrong Ji; Jinsong Su; Donglin Cao; Yue Gao

Predicting the sentiments of multimodal microblogs composed of text, images, and emoticons has attracted ever-increasing research focus recently. The key challenge lies in the difficulty of collecting a sufficient amount of training labels to train a discriminative model for multimodal prediction. One potential solution is to exploit labels collected from social media users, which is, however, restricted by the negative effect of label noise. Moreover, we have quantitatively found that sentiments in different modalities may be independent, which renders previous multimodal sentiment analysis schemes inapplicable to our problem. In this paper, we introduce a weakly supervised multimodal deep learning (WS-MDL) scheme for robust and scalable sentiment prediction. WS-MDL learns convolutional neural networks iteratively and selectively from “weak” emoticon labels, which are cheaply available but noisy. In particular, to filter out label noise and to capture modality dependency, a probabilistic graphical model is introduced to simultaneously learn discriminative multimodal descriptors and infer the confidence of label noise. Extensive evaluations are conducted on a million-scale, real-world microblog sentiment dataset crawled from Sina Weibo. We validate the merits of the proposed scheme by quantitatively showing its superior performance over several state-of-the-art and alternative approaches.
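
The sketch below illustrates the iterative, selective flavor of WS-MDL under stated assumptions: the paper's probabilistic graphical model is abstracted into a per-sample confidence re-estimated from the model's own predictions, and the model API (scikit-learn style fit/predict_proba) is hypothetical.

```python
import numpy as np

def ws_mdl_train(model, samples, emoticon_labels, epochs=10, tau=0.5):
    """Iteratively fit on confidently labeled samples, then re-infer
    how much each weak emoticon label can be trusted."""
    labels = np.asarray(emoticon_labels)
    weights = np.ones(len(samples))  # confidence in each weak label
    for _ in range(epochs):
        keep = weights > tau  # selective step: drop likely-noisy labels
        model.fit([s for s, k in zip(samples, keep) if k], labels[keep])
        # confidence of a weak label = model's probability for that label
        probs = model.predict_proba(samples)  # shape: (n, num_classes)
        weights = probs[np.arange(len(samples)), labels]
    return model
```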


Pacific Rim International Conference on Artificial Intelligence | 2018

Topic-Guided Automatical Human-Simulated Tweeting System

Zongyue Liu; Fuhai Chen; Jinsong Su; Chen Shen; Rongrong Ji

Social networks have grown increasingly popular in our daily lives. Automatically posting tweets on a social network is an interesting intelligent behavior that has not yet been explored. However, related research is hindered by the problem of “multivalued mapping”, where the agent should generate various appropriate tweets for a given topic. In this paper, a human-simulated tweeting system is designed to generate multiple appropriate tweets for given topics. In this system, a novel topic-image-tweet scheme is proposed with a Keyword-Based Retrieval Module (KBR-Module) and a Topic-Guided Image Captioning Module (TGIC-Module), where multiple topic-related images are retrieved by the KBR-Module and encoded by the TGIC-Module to generate accurate tweets. The effectiveness of the proposed system and the superiority of our image captioning model are evaluated by extensive quantitative comparisons and qualitative analysis on a real-world Twitter dataset.
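
A minimal sketch of the topic-image-tweet pipeline; search_images and caption_with_topic are hypothetical stand-ins for the KBR-Module and TGIC-Module, stubbed here only so the example runs.

```python
from typing import List

def search_images(keywords: str, limit: int) -> List[str]:
    # Stub for the KBR-Module: keyword-based retrieval of topic-related images.
    return [f"{keywords}_{i}.jpg" for i in range(limit)]

def caption_with_topic(image_path: str, topic: str) -> str:
    # Stub for the TGIC-Module: topic-guided image captioning.
    return f"A tweet about {topic}, grounded in {image_path}"

def generate_tweets(topic: str, num_tweets: int = 3) -> List[str]:
    images = search_images(keywords=topic, limit=num_tweets)
    # One tweet per retrieved image addresses the "multivalued mapping"
    # problem: distinct images of one topic yield distinct, appropriate tweets.
    return [caption_with_topic(img, topic) for img in images]
```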


Eurographics | 2015

3D object retrieval with multimodal views

Yue Gao; Anan Liu; Weizhi Nie; Yuting Su; Qionghai Dai; Fuhai Chen; Yingying Chen; Yanhua Cheng; Shuilong Dong; Xingyue Duan; Jianlong Fu; Zan Gao; Haiyun Guo; Xin Guo; Kaiqi Huang; Rongrong Ji; Yingfeng Jiang; Haisheng Li; Hanqing Lu; Jianming Song; Jing Sun; Tieniu Tan; Jinqiao Wang; Huanpu Yin; Chaoli Zhang; Guotai Zhang; Yan Zhang; Chaoyang Zhao; Xin Zhao; Guibo Zhu




Computer Vision and Pattern Recognition | 2018

GroupCap: Group-Based Image Captioning With Structured Relevance and Diversity Constraints

Fuhai Chen; Rongrong Ji; Xiaoshuai Sun; Yongjian Wu; Jinsong Su


IEEE Transactions on Multimedia | 2018

Cross-Modality Microblog Sentiment Prediction via Bi-Layer Multimodal Hypergraph Learning

Rongrong Ji; Fuhai Chen; Liujuan Cao; Yue Gao

Collaboration


Dive into Fuhai Chen's collaborations.

Top Co-Authors

Chaoli Zhang

Beijing Technology and Business University


Chaoyang Zhao

Chinese Academy of Sciences


Guibo Zhu

Chinese Academy of Sciences


Guotai Zhang

Tianjin University of Technology
