IEEE Transactions on Knowledge and Data Engineering | 2021

Propagation Enhanced Neural Message Passing for Graph Representation Learning


Abstract


Graph Neural Networks (GNNs) extend deep neural networks to graph-structured data. Recently, Message Passing Neural Networks (MPNNs) were proposed to unify several existing graph neural networks within a single framework. For graph representation learning, MPNNs first generate discriminative node representations with a message passing function and then aggregate them into a graph representation with a readout function. In this paper, we analyze the representation capacity of MPNNs for aggregating graph information and observe that existing approaches ignore the self-loop for graph representation learning, which limits their representation capacity. To alleviate this issue, we introduce a simple yet effective propagation-enhanced extension, Self-Connected Neural Message Passing (SC-NMP), which aggregates the node representations of the current step together with the graph representation of the previous step. To further improve the information flow, we also propose Densely Self-Connected Neural Message Passing (DSC-NMP), which connects each layer to every other layer in a feed-forward fashion. Extensive experiments on various benchmark datasets demonstrate the effectiveness of both extensions, which achieve superior performance on graph classification and regression tasks.
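
Below is a minimal PyTorch sketch of the self-connected idea described in the abstract: each propagation step performs message passing over the nodes and then fuses the current readout with the previous step's graph representation. The class names (SCNMPLayer, SCNMP), the sum-pooling readout, and the linear fusion layer are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn as nn


class SCNMPLayer(nn.Module):
    """One propagation step: message passing over the adjacency matrix,
    then a graph-level update mixing the current readout with the
    previous step's graph representation (the "self-connection")."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)      # transform aggregated neighbor features
        self.mix = nn.Linear(2 * dim, dim)  # fuse current readout with previous graph state

    def forward(self, h, adj, g_prev):
        # Message passing: aggregate neighbor features and transform.
        h = torch.relu(self.msg(adj @ h))
        # Readout: sum-pool the current node representations.
        g_cur = h.sum(dim=0)
        # Self-connected update: combine with the previous graph representation.
        g = torch.relu(self.mix(torch.cat([g_cur, g_prev], dim=-1)))
        return h, g


class SCNMP(nn.Module):
    def __init__(self, in_dim, dim, num_layers=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.layers = nn.ModuleList(SCNMPLayer(dim) for _ in range(num_layers))
        self.out = nn.Linear(dim, 1)  # e.g. a graph regression head

    def forward(self, x, adj):
        h = torch.relu(self.embed(x))
        g = h.sum(dim=0)  # initial graph representation
        for layer in self.layers:
            h, g = layer(h, adj, g)
        return self.out(g)


# Toy usage: a 4-node cycle graph with 8-dimensional node features.
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float)
x = torch.randn(4, 8)
print(SCNMP(in_dim=8, dim=16)(x, adj))  # scalar graph-level prediction

In this sketch the graph-level state is updated recurrently at every step rather than read out once at the end; a densely self-connected variant in the spirit of DSC-NMP would additionally feed every earlier graph state into each layer's fusion step.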

DOI 10.1109/tkde.2021.3102964
Language English
Journal IEEE Transactions on Knowledge and Data Engineering
