IEEE Robotics and Automation Letters | 2019

Deep Active Localization


Abstract


Active localization consists of generating robot actions that allow it to maximally disambiguate its pose within a reference map. Traditional approaches use an information-theoretic criterion for action selection together with hand-crafted perceptual models. In this work we propose an end-to-end differentiable method for learning to take informative actions that is trainable entirely in simulation and then transferable to real robot hardware with zero refinement. The system is composed of two learned modules: a convolutional neural network for perception, and a planning module trained with deep reinforcement learning. We leverage a multi-scale approach in the perceptual model, since the accuracy needed to select actions via reinforcement learning is much lower than the accuracy needed for robot control. We demonstrate that the resulting system outperforms traditional approaches for either perception or planning. We also demonstrate our approach's robustness to different map configurations and other nuisance parameters through the use of domain randomization in training. The code has been released at https://github.com/montrealrobotics/dal and is compatible with both the OpenAI Gym framework and the Gazebo simulator.
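To make the baseline concrete: the information-theoretic criterion the abstract contrasts with typically maintains a belief over poses and picks the action whose predicted posterior belief has the lowest entropy, i.e. the action expected to maximally disambiguate the pose. The following is a minimal illustrative sketch on a toy 1-D pose grid; the motion and measurement models here are placeholder assumptions, not the paper's implementation.

```python
import numpy as np

def entropy(belief):
    """Shannon entropy of a discrete pose belief."""
    p = belief[belief > 0]
    return -np.sum(p * np.log(p))

def predict(belief, action):
    """Toy motion model: shift the belief along a circular 1-D pose grid."""
    shift = {"stay": 0, "left": -1, "right": 1}[action]
    return np.roll(belief, shift)

def measurement_update(belief, likelihood):
    """Bayes update with a fixed measurement likelihood p(z|x)."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def select_action(belief, likelihood, actions=("stay", "left", "right")):
    """Information-theoretic action selection: minimize posterior entropy."""
    scores = {a: entropy(measurement_update(predict(belief, a), likelihood))
              for a in actions}
    return min(scores, key=scores.get)

# The robot is equally likely to be in cell 0 or 1; the sensor strongly
# indicates cell 0. Staying (or moving left) collapses the belief,
# while moving right leaves it ambiguous.
belief = np.array([0.5, 0.5, 0.0, 0.0, 0.0])
likelihood = np.array([0.8, 0.05, 0.05, 0.05, 0.05])
action = select_action(belief, likelihood)
```

The paper's contribution replaces both hand-designed pieces of this loop: the measurement likelihood is produced by a learned CNN, and the entropy-minimizing search is replaced by a learned RL policy.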

Volume 4
Pages 4394-4401
DOI 10.1109/LRA.2019.2932575
Language English
Journal IEEE Robotics and Automation Letters
