IEEE Transactions on Parallel and Distributed Systems | 2021

Fast Adaptive Task Offloading in Edge Computing Based on Meta Reinforcement Learning


Abstract


Multi-access edge computing (MEC) aims to extend cloud services to the network edge to reduce network traffic and service latency. A fundamental problem in MEC is how to efficiently offload heterogeneous tasks of mobile applications from user equipment (UE) to MEC hosts. Recently, many deep reinforcement learning (DRL)-based methods have been proposed to learn offloading policies through interaction with the MEC environment, which consists of UE, wireless channels, and MEC hosts. However, these methods adapt poorly to new environments because they have low sample efficiency and require full retraining to learn updated policies for new environments. To overcome this weakness, we propose a task offloading method based on meta reinforcement learning, which can adapt quickly to new environments with a small number of gradient updates and samples. We model mobile applications as Directed Acyclic Graphs (DAGs) and represent the offloading policy with a custom sequence-to-sequence (seq2seq) neural network. To efficiently train the seq2seq network, we propose a method that combines a first-order approximation with a clipped surrogate objective. The experimental results demonstrate that this new offloading method can reduce latency by up to 25 percent compared to three baselines while adapting quickly to new environments.
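The abstract's training approach pairs a first-order approximation (in the spirit of first-order MAML/Reptile) with a PPO-style clipped surrogate objective. The sketch below is an illustration of those two standard ingredients, not the paper's exact implementation; the function names, the `eps` clipping range, and the `meta_lr` step size are assumptions for the example.

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO-style clipped surrogate objective.

    ratio:     probability ratio pi_new(a|s) / pi_old(a|s) per sample
    advantage: estimated advantage per sample
    Returns the mean of min(r*A, clip(r, 1-eps, 1+eps)*A),
    which bounds how far a single update can move the policy.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

def first_order_meta_update(theta, task_thetas, meta_lr=0.1):
    """Reptile-style first-order meta update.

    Moves the meta-parameters toward the average of the task-adapted
    parameters, avoiding the second-order derivatives of full MAML.
    """
    task_mean = np.mean(task_thetas, axis=0)
    return theta + meta_lr * (task_mean - theta)
```

With `ratio = 2.0`, `advantage = 1.0`, and `eps = 0.2`, the clipped term `1.2` wins the `min`, so the objective no longer rewards pushing the ratio further from 1; this is the mechanism that keeps per-task adaptation steps stable during meta-training.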

Volume 32
Pages 242-253
DOI 10.1109/TPDS.2020.3014896
Language English
Journal IEEE Transactions on Parallel and Distributed Systems

Full Text