DMRO: A Deep Meta Reinforcement Learning-Based Task Offloading Framework for Edge-Cloud Computing

Abstract


With the explosive growth of mobile data and the unprecedented demand for computing power, resource-constrained edge devices cannot effectively meet the requirements of Internet of Things (IoT) applications and Deep Neural Network (DNN) computation. As a distributed computing paradigm, edge offloading, which migrates complex tasks from IoT devices to edge-cloud servers, can overcome the resource limitations of IoT devices, reduce their computing burden, and improve the efficiency of task processing. However, the optimal offloading decision-making problem is NP-hard, and traditional optimization methods struggle to solve it efficiently. Moreover, existing deep learning methods still have shortcomings, such as slow learning speed and weak adaptability to new environments. To tackle these challenges, we propose a Deep Meta Reinforcement Learning-based Offloading (DMRO) algorithm, which combines multiple parallel DNNs with Q-learning to make fine-grained offloading decisions. By aggregating the perceptive ability of deep learning, the decision-making ability of reinforcement learning, and the rapid environment-learning ability of meta-learning, DMRO can quickly and flexibly obtain the optimal offloading strategy in a dynamic environment. We evaluate the effectiveness of DMRO through several simulation experiments, which demonstrate that DMRO improves offloading performance by 17.6% compared with traditional Deep Reinforcement Learning (DRL) algorithms. In addition, the model is highly portable when making real-time offloading decisions and can quickly adapt to a new Mobile Edge Computing (MEC) task environment.
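As a concrete illustration of the decision stage summarized above, the following is a minimal Python sketch, not the authors' implementation: several parallel DNNs each propose a candidate binary offloading decision for a set of tasks, the candidate with the highest evaluated reward is kept (a greedy, Q-learning-style selection over the candidate set), and the winning state-decision pairs are replayed to train all networks. The state encoding, the reward function, and every hyperparameter below are illustrative assumptions; the paper's actual latency/energy objective and its meta-learning outer loop for fast adaptation to a new MEC environment are omitted.

import random
import torch
import torch.nn as nn

K = 4                # number of parallel decision networks (assumption)
N_TASKS = 10         # tasks per offloading decision (assumption)
STATE_DIM = N_TASKS  # one feature (e.g., task size) per task (assumption)

def make_net():
    # Small MLP mapping a task state to per-task offloading probabilities.
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, N_TASKS), nn.Sigmoid(),
    )

nets = [make_net() for _ in range(K)]
opts = [torch.optim.Adam(n.parameters(), lr=1e-3) for n in nets]
replay = []  # replay memory of (state, best_decision) pairs

def reward(state, decision):
    # Placeholder utility that favors offloading "large" tasks; the paper's
    # real latency/energy trade-off objective would replace this.
    return float((decision * state).sum() - 0.1 * decision.sum())

def decide(state):
    # Each parallel DNN proposes a candidate binary decision; keep the
    # candidate whose evaluated reward is highest.
    with torch.no_grad():
        candidates = [(net(state) > 0.5).float() for net in nets]
    best = max(candidates, key=lambda d: reward(state, d))
    replay.append((state, best))
    return best

def train_step(batch_size=32):
    # Fit every network toward the best decisions found so far.
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([s for s, _ in batch])
    targets = torch.stack([d for _, d in batch])
    loss_fn = nn.BCELoss()
    for net, opt in zip(nets, opts):
        opt.zero_grad()
        loss_fn(net(states), targets).backward()
        opt.step()

for _ in range(200):  # toy interaction loop over random task states
    s = torch.rand(STATE_DIM)
    decide(s)
    train_step()

The candidate-set design explores several decisions per state at the cost of K forward passes, which is the property the abstract attributes to combining multiple parallel DNNs with Q-learning.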

Volume 18
Pages 3448-3459
Year 2021
DOI 10.1109/TNSM.2021.3087258
Language English
Journal IEEE Transactions on Network and Service Management

Full Text