2021 IEEE International Conference on Robotics and Automation (ICRA)

Double Meta-Learning for Data Efficient Policy Optimization in Non-Stationary Environments

 
 

Abstract


We are interested in learning models of non-stationary environments, which can be framed as a multi-task learning problem. Model-free reinforcement learning algorithms can achieve good asymptotic performance in multi-task learning, but at the cost of extensive sampling, because they learn each task from scratch. Model-based approaches are among the most data-efficient learning algorithms, yet they still struggle with complex tasks and model uncertainty. Meta-reinforcement learning addresses the efficiency and generalization challenges of multi-task learning by quickly adapting a meta-prior policy to a new task. In this paper, we propose a meta-reinforcement learning approach that learns a dynamics model of a non-stationary environment and later uses it for meta-policy optimization. Owing to the sample efficiency of model-based learning methods, we can train the meta-model of the non-stationary environment and the meta-policy simultaneously until the dynamics model converges. The meta-learned dynamics model of the environment then generates simulated data for meta-policy optimization. Our experiments demonstrate that the proposed method can meta-learn a policy in a non-stationary environment with the data efficiency of model-based learning approaches while achieving the high asymptotic performance of model-free meta-reinforcement learning.
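To make the two-phase procedure in the abstract concrete, the following is a minimal, self-contained sketch in Python. It is not the authors' algorithm: the 1-D drifting point-mass environment, the Reptile-style meta-updates, and all names here (make_task, adapt_model, adapt_policy, sim_return) are illustrative assumptions standing in for the paper's meta-model and meta-policy learners.

```python
import numpy as np

rng = np.random.default_rng(0)
GOAL = 0.0  # target state the policy should reach

def make_task():
    """A 'task' is one regime of the non-stationary environment:
    1-D dynamics s' = s + a + drift, with a task-specific drift."""
    return rng.uniform(-0.5, 0.5)

def collect_real_data(drift, k, n=20):
    """Roll the current policy a = -k * (s - GOAL) in the real task."""
    data, s = [], rng.normal()
    for _ in range(n):
        a = -k * (s - GOAL)
        s_next = s + a + drift
        data.append((s, a, s_next))
        s = s_next
    return data

def adapt_model(b_meta, data, lr=0.5, steps=5):
    """Inner loop: adapt the model's drift estimate b by gradient
    descent on the squared one-step prediction error."""
    b = b_meta
    for _ in range(steps):
        grad = np.mean([2 * ((s + a + b) - s_next) for s, a, s_next in data])
        b -= lr * grad
    return b

def sim_return(k, b, horizon=15):
    """Return of policy gain k under the learned model s' = s + a + b."""
    s, ret = 1.0, 0.0
    for _ in range(horizon):
        a = -k * (s - GOAL)
        s = s + a + b          # simulated transition from the model
        ret -= (s - GOAL) ** 2
    return ret

def adapt_policy(k_meta, b, step=0.1, iters=10):
    """Inner loop: hill-climb the policy gain on simulated rollouts."""
    k = k_meta
    for _ in range(iters):
        k = max([k - step, k, k + step], key=lambda c: sim_return(c, b))
    return k

# Phase 1: jointly meta-train the dynamics model and the policy on
# real data, until the model's meta-parameters converge.
b_meta, k_meta, eps = 0.0, 0.5, 0.3
for _ in range(200):
    drift = make_task()
    data = collect_real_data(drift, k_meta)
    b_task = adapt_model(b_meta, data)
    k_task = adapt_policy(k_meta, b_task)
    # Reptile-style meta-updates: move meta-params toward adapted ones.
    b_meta += eps * (b_task - b_meta)
    k_meta += eps * (k_task - k_meta)

# Phase 2: the meta-learned model now stands in for the environment;
# the meta-policy is optimized on simulated data only. Sampling task
# instances around b_meta is a stand-in for the paper's meta-model.
for _ in range(100):
    b_task = b_meta + rng.uniform(-0.5, 0.5)
    k_task = adapt_policy(k_meta, b_task)
    k_meta += eps * (k_task - k_meta)

print(f"meta drift estimate: {b_meta:.3f}, meta policy gain: {k_meta:.3f}")
```

The key design point the sketch tries to capture is that real-environment interaction happens only in Phase 1, and only until the model converges; all policy optimization in Phase 2 runs against cheap simulated rollouts, which is where the claimed data efficiency comes from.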

Pages 9935-9942
DOI 10.1109/ICRA48506.2021.9561219
Language English
