J. Sensors | 2021

Reinforcement Learning Guided by Double Replay Memory

Abstract


Experience replay memory in reinforcement learning enables agents to remember and reuse past experiences. Most reinforcement learning models rely on a single experience replay memory to train agents. In this article, we propose a framework that employs experience replay memory doubly, exploiting both important transitions and new transitions simultaneously. In numerical studies, deep Q-networks (DQN) equipped with double experience replay memory are examined under various scenarios. A self-driving car requires an automated agent that decides, in real time, when it is appropriate to change lanes; to this end, we apply the proposed agent in Simulation of Urban MObility (SUMO) experiments. We also verify its applicability to reinforcement learning tasks with discrete action spaces (e.g., computer game environments). Taken together, we conclude that the proposed framework outperforms previously known reinforcement learning models by virtue of its double experience replay memory.
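The abstract does not spell out how the two memories are combined, so the following is a minimal Python sketch of one plausible reading: an important-transition buffer retains high-TD-error experiences, a recent-transition buffer keeps the newest ones, and each minibatch mixes samples from both. The class name DoubleReplayMemory, the mix_ratio parameter, and the TD-error eviction rule are illustrative assumptions, not the authors' exact design.

```python
import random
from collections import deque

class DoubleReplayMemory:
    """Two replay buffers used side by side: `recent` keeps the newest
    transitions (FIFO), `important` retains high-TD-error transitions.
    This is a hypothetical sketch, not the paper's exact mechanism."""

    def __init__(self, capacity=10_000, mix_ratio=0.5):
        self.recent = deque(maxlen=capacity)  # oldest evicted automatically
        self.important = []                   # (|td_error|, transition) pairs
        self.capacity = capacity
        self.mix_ratio = mix_ratio            # fraction of a batch drawn from `important`

    def push(self, transition, td_error):
        # Store the (s, a, r, s_next, done) tuple in both memories.
        self.recent.append(transition)
        self.important.append((abs(td_error), transition))
        if len(self.important) > self.capacity:
            # Evict the least important transition (linear scan for clarity;
            # a heap or sum-tree would be more efficient in practice).
            idx = min(range(len(self.important)),
                      key=lambda i: self.important[i][0])
            self.important.pop(idx)

    def sample(self, batch_size):
        # Draw a single minibatch mixing important and recent transitions.
        n_imp = min(int(batch_size * self.mix_ratio), len(self.important))
        n_rec = min(batch_size - n_imp, len(self.recent))
        batch = [t for _, t in random.sample(self.important, n_imp)]
        batch += random.sample(list(self.recent), n_rec)
        random.shuffle(batch)
        return batch
```

Under these assumptions, a DQN training loop would call push(transition, td_error) after each environment step and sample(batch_size) before each gradient update.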

Volume 2021
Pages 6652042:1-6652042:8
DOI 10.1155/2021/6652042
Language English
Journal J. Sensors