IEEE Transactions on Intelligent Transportation Systems | 2019

Depth Embedded Recurrent Predictive Parsing Network for Video Scenes


Abstract


Semantic segmentation-based scene parsing plays an important role in autonomous driving and autonomous navigation. However, most previous models consider only static images and fail to parse sequential images because they do not take into account the spatial-temporal continuity between consecutive frames in a video. In this paper, we propose a depth embedded recurrent predictive parsing network (RPPNet), which analyzes preceding consecutive stereo pairs to predict the parsing result of the next frame. In this way, RPPNet effectively learns the dynamic information in historical stereo pairs and can thus correctly predict the representations of the next frame. The other contribution of this paper is a systematic study of the video scene parsing (VSP) task, in which RPPNet is used to enrich conventional image parsing features with spatial-temporal information. The experimental results show that the proposed RPPNet achieves accurate predictive parsing results on Cityscapes and that the predictive features of RPPNet significantly improve conventional image parsing networks on the VSP task.
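
To make the idea concrete, the following is a minimal conceptual sketch (in PyTorch) of predictive parsing from preceding stereo pairs: a recurrent module consumes the historical RGB-plus-disparity frames and decodes a parsing map predicted for the next frame. The ConvLSTM cell, layer sizes, four-channel RGB-D input, and the 19 Cityscapes classes are illustrative assumptions only, not the authors' actual RPPNet architecture.

# Conceptual sketch only (NOT the authors' implementation): predict the
# parsing map of the next frame from preceding stereo (RGB + disparity) frames.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Basic convolutional LSTM cell that propagates spatial-temporal state."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class PredictiveParser(nn.Module):
    """Encodes each preceding RGB-D frame, updates a recurrent state, and
    decodes parsing logits predicted for the next (unseen) frame."""
    def __init__(self, num_classes=19, hid_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(            # 3 RGB + 1 disparity channel
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.cell = ConvLSTMCell(64, hid_ch)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hid_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1))

    def forward(self, frames):                   # frames: (B, T, 4, H, W)
        b, t, _, h, w = frames.shape
        hc = (frames.new_zeros(b, self.cell.hid_ch, h // 4, w // 4),
              frames.new_zeros(b, self.cell.hid_ch, h // 4, w // 4))
        for i in range(t):                       # consume preceding frames in order
            hc = self.cell(self.encoder(frames[:, i]), hc)
        return self.decoder(hc[0])               # logits for the next frame

# Example: predict the parsing of frame t+1 from 4 preceding RGB-D frames.
logits = PredictiveParser()(torch.randn(1, 4, 4, 128, 256))
print(logits.shape)  # torch.Size([1, 19, 128, 256])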

Volume 20
Pages 4643-4654
DOI 10.1109/TITS.2019.2909053
Language English
Journal IEEE Transactions on Intelligent Transportation Systems

Full Text