2019 IEEE International Conference on Robotics and Biomimetics (ROBIO)

Deep Fusion of Multi-Layers Salient CNN Features and Similarity Network for Robust Visual Place Recognition*


Abstract


Severe changes in the appearance and viewpoint of the same place, caused by extreme weather or sharp turns, are major challenges for a mobile robot's autonomous navigation. Coping with these problems and improving the performance of place recognition is an increasingly active focus of robotics research. In this paper, we propose a novel visual place recognition approach based on a deep fusion of salient features and a similarity network. Recognition proceeds in two stages, feature representation and similarity-function learning, and the whole pipeline is trainable end-to-end. Instead of extracting features only from the last convolutional layer of the network, the proposed approach extracts features from the max-pooling layer of each convolutional module and then fuses these features together; no fully connected layer is used at this stage. In the second stage, to measure the visual similarity between two image representations, the approach learns a similarity function directly from those representations and ranks candidate images by the resulting similarity score. Extensive experiments compare the proposed approach with existing state-of-the-art methods; results on five challenging datasets show higher accuracy and greater robustness under severe changes in appearance and viewpoint.
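The two-stage pipeline in the abstract can be illustrated with a minimal sketch: stage one concatenates max-pooled features taken from each convolutional module into one fused descriptor, and stage two scores the similarity between two descriptors. All function names here are hypothetical, and cosine similarity is only a stand-in for the paper's learned similarity network.

```python
# Hypothetical sketch of the pipeline described in the abstract; the paper's
# actual network learns the similarity function end-to-end, which this
# stdlib-only example does not attempt to reproduce.
import math
from typing import List

def max_pool(feature_map: List[List[float]], size: int = 2) -> List[float]:
    """Max-pool a 2-D feature map over non-overlapping size x size windows,
    then flatten the pooled values into a vector."""
    pooled = []
    for r in range(0, len(feature_map) - size + 1, size):
        for c in range(0, len(feature_map[0]) - size + 1, size):
            window = [feature_map[r + i][c + j]
                      for i in range(size) for j in range(size)]
            pooled.append(max(window))
    return pooled

def fuse_descriptors(module_maps: List[List[List[float]]]) -> List[float]:
    """Stage 1: concatenate L2-normalised max-pooled features from every
    convolutional module (note: no fully connected layer is involved)."""
    fused = []
    for fmap in module_maps:
        vec = max_pool(fmap)
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        fused.extend(v / norm for v in vec)
    return fused

def similarity(a: List[float], b: List[float]) -> float:
    """Stage 2 stand-in: cosine similarity between two fused descriptors.
    The paper instead *learns* this function from image representations."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)
```

In use, a query descriptor would be compared against a database of reference descriptors and the references ranked by their similarity score, with the top-ranked image taken as the recognised place.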

Pages 22-29
DOI 10.1109/ROBIO49542.2019.8961602
Language English
Journal 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO)
