IEEE Robotics and Automation Letters | 2019

Sliding-Window Temporal Attention Based Deep Learning System for Robust Sensor Modality Fusion for UGV Navigation


Abstract


We propose a novel temporal attention based neural network architecture for robotics tasks that involve fusion of time series of sensor data, and evaluate the resulting performance improvements in the context of autonomous navigation of unmanned ground vehicles (UGVs) in uncertain environments. The architecture generates feature vectors by fusing raw pixel and depth values collected by camera(s) and LiDAR(s), stores a history of the generated feature vectors, and combines the temporally attended history with the current features to predict a steering command. Experimental studies show robust performance in unknown and cluttered environments. Furthermore, the temporal attention mechanism is resilient to noise, bias, blur, and occlusions in the sensor signals. We trained the network on indoor corridor datasets (which will be publicly released) collected with our UGV. The datasets contain LiDAR depth measurements, camera images, and human tele-operation commands.
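The abstract describes the architecture only at a high level. As a rough illustration of the idea, the sketch below applies attention over a sliding window of per-frame feature vectors and combines the attended history with the current frame to regress a steering command. It assumes the camera and LiDAR inputs have already been fused into a single feature vector per frame, and all names and dimensions (TemporalAttentionFusion, feat_dim, window) are hypothetical placeholders rather than values taken from the paper.

    # Minimal sketch of sliding-window temporal attention (not the authors' exact model).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalAttentionFusion(nn.Module):
        def __init__(self, feat_dim=256, window=8):
            super().__init__()
            self.window = window
            # Scores each past frame's fused feature against the current one.
            self.score = nn.Linear(2 * feat_dim, 1)
            # Regresses a scalar steering command from attended history + current feature.
            self.head = nn.Linear(2 * feat_dim, 1)

        def forward(self, history, current):
            # history: (B, T, D) fused camera/LiDAR features for the last T <= window frames
            # current: (B, D)    fused feature for the current frame
            T = history.size(1)
            cur = current.unsqueeze(1).expand(-1, T, -1)             # (B, T, D)
            scores = self.score(torch.cat([history, cur], dim=-1))   # (B, T, 1)
            weights = F.softmax(scores, dim=1)                       # attention over the window
            context = (weights * history).sum(dim=1)                 # (B, D) attended history
            steering = self.head(torch.cat([context, current], dim=-1))
            return steering.squeeze(-1), weights.squeeze(-1)

In such a setup, each control step would append the newest fused feature to a fixed-length buffer of past features, pass the buffer in as history, and use the returned scalar as the predicted steering command.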

Volume 4
Pages 4216-4223
DOI 10.1109/LRA.2019.2930475
Language English
Journal IEEE Robotics and Automation Letters
