2019

Compound Asynchronous Exploration and Exploitation


Abstract

Data efficiency has long been a central concern in deep reinforcement learning, and most progress has focused on sufficient exploration and effective exploitation. However, the two are often treated separately. Benefiting from distributed systems, we propose an asynchronous approach to deep reinforcement learning that combines exploration and exploitation. We apply our framework to off-the-shelf deep reinforcement learning algorithms, and experimental results show that our algorithm is superior in both final performance and efficiency.
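The abstract gives no implementation details, but the idea of combining exploration and exploitation asynchronously can be illustrated with a minimal sketch: several actor threads, each with its own exploration rate (from heavily exploring to purely exploiting), feed transitions into a shared replay buffer that a learner would sample from. All names here (`ReplayBuffer`, `actor`, the epsilon schedule, the toy environment and reward) are illustrative assumptions, not the paper's actual framework.

```python
import random
import threading
from collections import deque

class ReplayBuffer:
    """Thread-safe FIFO buffer shared by all actors and the learner.
    (Hypothetical component; the paper does not specify its design.)"""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)
        self.lock = threading.Lock()

    def add(self, transition):
        with self.lock:
            self.buffer.append(transition)

    def sample(self, batch_size):
        with self.lock:
            k = min(batch_size, len(self.buffer))
            return random.sample(list(self.buffer), k)

def actor(actor_id, epsilon, buffer, n_steps):
    """One asynchronous actor: acts with its own epsilon-greedy rate
    and pushes (id, state, action, reward) tuples to the shared buffer."""
    for step in range(n_steps):
        state = step  # placeholder environment state
        if random.random() < epsilon:
            action = random.randint(0, 3)    # explore: random action
        else:
            action = 0                       # exploit: greedy placeholder
        reward = 1.0 if action == 0 else 0.0  # toy reward signal
        buffer.add((actor_id, state, action, reward))

buffer = ReplayBuffer()
# Actors span a spectrum from heavy exploration to pure exploitation.
epsilons = [0.5, 0.1, 0.0]
threads = [threading.Thread(target=actor, args=(i, eps, buffer, 100))
           for i, eps in enumerate(epsilons)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A learner would now sample mixed experience from all actors.
batch = buffer.sample(32)
print(len(batch))  # 32
```

Running actors with different exploration rates in parallel is one common way (as in distributed RL systems such as Ape-X) to get both sufficient exploration and effective exploitation from the same pool of experience; whether the paper uses this exact scheme is not stated in the abstract.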

DOI 10.23977/MEET.2019.93759
Language English
