2019 IEEE International Conference on Robotics and Biomimetics (ROBIO)

Robot Control in Human Environment using Deep Reinforcement Learning and Convolutional Neural Network


Abstract


Deep reinforcement learning (DRL) has been employed in numerous applications that require complex decision-making; robot control in a human environment is one example. Such algorithms make end-to-end training possible, learning directly from images. However, training a physical robotic system in a human environment with DRL is inefficient and even dangerous. Several recent works have therefore trained models in simulators before deploying them on physical robots. Although simulation makes it efficient to obtain DRL-trained models, it raises the challenge of transferring from simulation to reality: since a human environment is often cluttered, dynamic, and complex, a policy trained on simulation images is not directly applicable in reality. In this paper, we therefore propose a DRL method that is trained end-to-end in simulation and adapts to reality without any further fine-tuning. First, a Deep Deterministic Policy Gradient (DDPG) algorithm is employed to learn the robot control policy. Second, a pre-trained Convolutional Neural Network (CNN) is used to visually track the target in the image. This combination allows efficient and safe DRL training in simulation while remaining robust when the real robot is placed in a dynamic human environment. Simulations and experiments are conducted for validation and can be seen in the attached video. The results demonstrate successful performance in various complex environments.
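
The abstract pairs a DDPG policy with a pre-trained CNN that tracks the target in the camera image. The sketch below only illustrates how such a pipeline could be wired up; it is not the authors' implementation. The ResNet-18 backbone, the feature/state/action dimensions, and the PyTorch framing are all assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class TargetEncoder(nn.Module):
    """Frozen pre-trained CNN that turns a camera frame into target features."""
    def __init__(self, feature_dim=128):
        super().__init__()
        # Assumed backbone; the paper's pre-trained tracker may differ.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc layer
        for p in self.features.parameters():
            p.requires_grad = False  # the tracker stays fixed; only DDPG is trained
        self.head = nn.Linear(512, feature_dim)

    def forward(self, image):
        x = self.features(image).flatten(1)   # (N, 512)
        return self.head(x)                   # (N, feature_dim)


class Actor(nn.Module):
    """DDPG actor: maps target features and robot state to continuous actions."""
    def __init__(self, feature_dim=128, state_dim=7, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim + state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # bounded commands in [-1, 1]
        )

    def forward(self, features, robot_state):
        return self.net(torch.cat([features, robot_state], dim=-1))


# Dummy forward pass: one 224x224 RGB frame plus an assumed 7-DoF joint state.
encoder, actor = TargetEncoder(), Actor()
image, robot_state = torch.rand(1, 3, 224, 224), torch.rand(1, 7)
with torch.no_grad():
    action = actor(encoder(image), robot_state)
print(action.shape)  # torch.Size([1, 7])
```

Keeping the CNN frozen is one plausible way to realize the paper's claim of simulation-trained policies transferring without fine-tuning: the policy then consumes target features rather than raw simulation pixels, which are the part that differs most between simulation and reality.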

Pages 1121-1126
DOI 10.1109/ROBIO49542.2019.8961517
Language English
Journal 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO)
