Comput. Graph. | 2021

LinkNet: 2D-3D linked multi-modal network for online semantic segmentation of RGB-D videos


Abstract


This paper proposes LinkNet, a 2D-3D linked multi-modal network for online semantic segmentation of RGB-D videos, which is essential for real-time applications such as robot navigation. Existing methods for RGB-D semantic segmentation usually work in the regular image domain, which allows efficient processing with convolutional neural networks (CNNs). However, RGB-D videos are captured from a 3D scene, and different frames can contain useful information about the same local region seen from different views; working solely in the image domain fails to exploit this crucial information. Our novel approach is based on joint 2D and 3D analysis. The online process runs simultaneously with 3D scene reconstruction, from which we set up 2D-3D links between consecutive RGB-D frames and the 3D point cloud. We combine image color with view-insensitive geometric features generated from the 3D point cloud for multi-modal semantic feature learning. LinkNet further uses a recurrent neural network (RNN) module to dynamically maintain hidden semantic states during 3D fusion and refine the voxel-based labeling results. Experimental results on SceneNet [1] and ScanNet [2] demonstrate that the semantic segmentation of our framework is stable and effective.
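As a rough illustration of the recurrent fusion step the abstract describes, the sketch below shows one way per-voxel semantic states could be maintained: 2D color features, back-projected to voxels through the 2D-3D links, are concatenated with view-insensitive geometric features from the point cloud and fed to a GRU cell whose hidden state is carried across frames and decoded into voxel labels. This is a minimal PyTorch sketch under our own assumptions; the class name VoxelRecurrentFusion, the feature dimensions, and the choice of a GRU cell are hypothetical and are not taken from the paper.

    # Hypothetical sketch of per-voxel recurrent fusion (not the authors' code).
    import torch
    import torch.nn as nn

    class VoxelRecurrentFusion(nn.Module):
        def __init__(self, color_dim=64, geom_dim=32, hidden_dim=128, num_classes=20):
            super().__init__()
            # GRU cell maintains one hidden semantic state per voxel.
            self.gru = nn.GRUCell(color_dim + geom_dim, hidden_dim)
            # Linear head decodes the hidden state into class logits.
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, color_feat, geom_feat, hidden):
            # color_feat: (V, color_dim)  2D features linked to V visible voxels
            # geom_feat:  (V, geom_dim)   geometric features from the point cloud
            # hidden:     (V, hidden_dim) per-voxel state carried across frames
            x = torch.cat([color_feat, geom_feat], dim=-1)
            hidden = self.gru(x, hidden)      # update states with the new frame
            logits = self.classifier(hidden)  # refined voxel-based labeling
            return logits, hidden

In use, the hidden states would be initialized to zeros once per reconstructed voxel and updated only for the voxels linked to pixels of each incoming frame, so voxels unobserved in a frame simply keep their previous semantic state.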

Volume 98
Pages 37-47
DOI 10.1016/j.cag.2021.04.013
Language English
Journal Comput. Graph.
