Publication


Featured research published by Paul Pu Liang.


International Conference on Multimodal Interfaces | 2017

Multimodal sentiment analysis with word-level fusion and reinforcement learning

Minghai Chen; Sen Wang; Paul Pu Liang; Tadas Baltrusaitis; Amir Zadeh; Louis-Philippe Morency

With the increasing popularity of video sharing websites such as YouTube and Facebook, multimodal sentiment analysis has received increasing attention from the scientific community. Contrary to previous works in multimodal sentiment analysis, which focus on holistic information in speech segments such as bag-of-words representations and average facial expression intensity, we propose a novel deep architecture for multimodal sentiment analysis that performs modality fusion at the word level. In this paper, we propose the Gated Multimodal Embedding LSTM with Temporal Attention (GME-LSTM(A)) model, which is composed of two modules. The Gated Multimodal Embedding alleviates the difficulties of fusion when there are noisy modalities. The LSTM with Temporal Attention performs word-level fusion at a finer fusion resolution between the input modalities and attends to the most important time steps. As a result, the GME-LSTM(A) is able to better model the multimodal structure of speech through time and perform better sentiment comprehension. We demonstrate the effectiveness of this approach on the publicly available Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis (CMU-MOSI) dataset by achieving state-of-the-art sentiment classification and regression results. Qualitative analysis of our model emphasizes the importance of the Temporal Attention Layer in sentiment prediction because the additional acoustic and visual modalities are noisy. We also demonstrate the effectiveness of the Gated Multimodal Embedding in selectively filtering these noisy modalities out. These results and analysis open new areas in the study of sentiment analysis in human communication and provide new models for multimodal fusion.
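The word-level gated fusion described in this abstract can be illustrated with a small PyTorch sketch: a per-word gate suppresses noisy acoustic and visual features, an LSTM runs over the gated embeddings, and temporal attention pools the hidden states into a sentiment prediction. This is only a minimal sketch under assumed design choices, not the authors' implementation; the exact gating form (one sigmoid gate per non-verbal modality), the layer sizes, and the CMU-MOSI-style feature dimensions (300/74/35) are assumptions made for illustration.

# Minimal sketch of word-level gated fusion + LSTM with temporal attention,
# loosely following the GME-LSTM(A) description above. The gating form,
# dimensions, and attention scoring are assumptions, not the authors' code.
import torch
import torch.nn as nn

class GatedMultimodalEmbedding(nn.Module):
    """Per-word gate that can suppress noisy acoustic/visual features."""
    def __init__(self, text_dim, audio_dim, visual_dim, out_dim):
        super().__init__()
        total = text_dim + audio_dim + visual_dim
        self.proj = nn.Linear(total, out_dim)
        # One gate per non-verbal modality, conditioned on all modalities.
        self.audio_gate = nn.Linear(total, 1)
        self.visual_gate = nn.Linear(total, 1)

    def forward(self, text, audio, visual):
        x = torch.cat([text, audio, visual], dim=-1)
        a_gate = torch.sigmoid(self.audio_gate(x))   # (B, T, 1)
        v_gate = torch.sigmoid(self.visual_gate(x))  # (B, T, 1)
        gated = torch.cat([text, a_gate * audio, v_gate * visual], dim=-1)
        return self.proj(gated)                      # (B, T, out_dim)

class GMELSTMA(nn.Module):
    """Gated embedding -> LSTM -> temporal attention -> sentiment score."""
    def __init__(self, text_dim, audio_dim, visual_dim, hidden_dim=128):
        super().__init__()
        self.gme = GatedMultimodalEmbedding(text_dim, audio_dim, visual_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, 1)  # regression head, e.g. sentiment intensity

    def forward(self, text, audio, visual):
        h, _ = self.lstm(self.gme(text, audio, visual))  # (B, T, H)
        weights = torch.softmax(self.attn(h), dim=1)     # attend over time steps
        context = (weights * h).sum(dim=1)               # (B, H)
        return self.out(context).squeeze(-1)             # (B,)

if __name__ == "__main__":
    model = GMELSTMA(text_dim=300, audio_dim=74, visual_dim=35)
    t, a, v = torch.randn(2, 20, 300), torch.randn(2, 20, 74), torch.randn(2, 20, 35)
    print(model(t, a, v).shape)  # torch.Size([2])

In a full model of this kind, the gates would be trained jointly with the sentiment objective, so that noisy acoustic or visual inputs can be driven toward zero before fusion.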


arXiv: Human-Computer Interaction | 2018

Multimodal Local-Global Ranking Fusion for Emotion Recognition

Paul Pu Liang; Amir Zadeh; Louis-Philippe Morency

Emotion recognition is a core research area at the intersection of artificial intelligence and human communication analysis. It is a significant technical challenge because humans display their emotions through complex, idiosyncratic combinations of the language, visual, and acoustic modalities. In contrast to traditional multimodal fusion techniques, we approach emotion recognition from both direct person-independent and relative person-dependent perspectives. The direct person-independent perspective follows the conventional emotion recognition approach, which directly infers absolute emotion labels from observed multimodal features. The relative person-dependent perspective approaches emotion recognition in a relative manner by comparing partial video segments to determine whether there was an increase or decrease in emotional intensity. Our proposed model integrates these direct and relative prediction perspectives by dividing the emotion recognition task into three easier subtasks. The first subtask involves a multimodal local ranking of relative emotion intensities between two short segments of a video. The second subtask uses local rankings to infer global relative emotion ranks with a Bayesian ranking algorithm. The third subtask incorporates both direct predictions from observed multimodal behaviors and relative emotion ranks from local-global rankings for the final emotion prediction. Our approach displays excellent performance on an audio-visual emotion recognition benchmark and improves over other algorithms for multimodal fusion.
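The three-subtask decomposition in this abstract can be sketched in PyTorch as follows: a local comparator scores pairs of segments, the pairwise outcomes are aggregated into a global per-segment rank, and that rank is combined with a direct intensity prediction. This is a minimal sketch under assumed design choices; in particular, the simple win-count aggregation below stands in for the paper's Bayesian ranking algorithm, and all layer sizes and feature dimensions are invented for illustration.

# Minimal sketch of local pairwise ranking -> global rank aggregation ->
# combination with a direct prediction. Not the authors' code; the win-count
# aggregation approximates the Bayesian ranking step described above.
import torch
import torch.nn as nn

class LocalComparator(nn.Module):
    """Predicts P(segment_i has higher emotion intensity than segment_j)."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, seg_i, seg_j):
        return torch.sigmoid(self.net(torch.cat([seg_i, seg_j], dim=-1)))

def global_ranks(segments, comparator):
    """Aggregate pairwise comparisons into a per-segment rank score in [0, 1]."""
    n = segments.size(0)
    wins = torch.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                wins[i] += comparator(segments[i], segments[j]).item()
    return wins / (n - 1)  # higher value = relatively more intense

class DirectPlusRelative(nn.Module):
    """Combines a direct intensity prediction with the relative rank feature."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.direct = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.fuse = nn.Linear(2, 1)

    def forward(self, segment, rank_score):
        d = self.direct(segment)  # absolute (person-independent) estimate
        return self.fuse(torch.cat([d, rank_score.view(1)], dim=-1))

if __name__ == "__main__":
    feat_dim = 128
    segments = torch.randn(5, feat_dim)       # 5 segments of one video
    comparator = LocalComparator(feat_dim)
    ranks = global_ranks(segments, comparator)
    model = DirectPlusRelative(feat_dim)
    print(model(segments[0], ranks[0]))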


National Conference on Artificial Intelligence | 2018

Multi-attention Recurrent Network for Human Communication Comprehension

Amir Zadeh; Paul Pu Liang; Soujanya Poria; Erik Cambria; Prateek Vij; Louis-Philippe Morency


National Conference on Artificial Intelligence | 2018

Memory Fusion Network for Multi-view Sequential Learning

Amir Zadeh; Paul Pu Liang; Navonil Mazumder; Soujanya Poria; Erik Cambria; Louis-Philippe Morency


arXiv: Computation and Language | 2018

Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis

Hai Pham; Thomas Manzini; Paul Pu Liang; Barnabas Poczos


Meeting of the Association for Computational Linguistics | 2018

Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph

AmirAli Bagher Zadeh; Paul Pu Liang; Soujanya Poria; Erik Cambria; Louis-Philippe Morency


Meeting of the Association for Computational Linguistics | 2018

Human Multimodal Language in the Wild: A Novel Dataset and Interpretable Dynamic Fusion Model

AmirAli Bagher Zadeh; Paul Pu Liang; Soujanya Poria; Erik Cambria; Louis-Philippe Morency


Meeting of the Association for Computational Linguistics | 2018

Efficient Low-rank Multimodal Fusion With Modality-Specific Factors

Zhun Liu; Ying Shen; Varun Bharadhwaj Lakshminarasimhan; Paul Pu Liang; AmirAli Bagher Zadeh; Louis-Philippe Morency


International Conference on Communications | 2018

A Machine Learning Approach to MIMO Communications

Yu-Di Huang; Paul Pu Liang; Qianqian Zhang; Ying-Chang Liang


Empirical Methods in Natural Language Processing | 2018

Multimodal Language Analysis with Recurrent Multistage Fusion

Paul Pu Liang; Ziyin Liu; AmirAli Bagher Zadeh; Louis-Philippe Morency

Collaboration


Dive into Paul Pu Liang's collaborations.

Top Co-Authors

Amir Zadeh (Carnegie Mellon University)
Erik Cambria (Nanyang Technological University)
AmirAli Bagher Zadeh (University of Southern California)
Qianqian Zhang (University of Electronic Science and Technology of China)
Ying-Chang Liang (University of Electronic Science and Technology of China)
Yu-Di Huang (University of Electronic Science and Technology of China)
Minghai Chen (Carnegie Mellon University)