2019 IEEE Global Communications Conference (GLOBECOM)

Deep Reinforcement Learning-Based Dynamic Service Migration in Vehicular Networks


Abstract


Mobile edge computing (MEC)-enabled vehicular networks can improve the quality of service (QoS) of vehicular applications, such as the round-trip time (RTT) and transmission control protocol (TCP) throughput. However, the high mobility of vehicles requires frequent service migrations among MEC servers to maintain QoS, and frequent migrations incur a prohibitive migration cost. To balance QoS against migration cost, this paper proposes a novel dynamic service migration scheme that accounts for vehicle velocity. The main idea is to model both the QoS and the migration cost as functions of velocity and then weigh them jointly in economic terms: the system collects income from vehicles according to the QoS they receive, while its expenditure consists of the migration cost and the service cost of the computing, communication, and memory resources used to provide the service. The system utility is defined as the difference between income and expenditure. A deep reinforcement learning algorithm, namely deep Q-learning, is employed to design a dynamic service migration scheme that maximizes this utility. Simulation results show that, compared with existing migration schemes, the proposed dynamic scheme increases the system utility across a range of velocities. The utility gain grows with velocity, reaching roughly a factor of two when the velocity exceeds 30 m/s. Moreover, the scheme improves the QoS of vehicles with higher mobility: the RTT is roughly halved, and the TCP throughput roughly doubled.
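The abstract describes the method only at a high level; the following is a minimal, hedged sketch of how a deep Q-learning migration controller with the income-minus-expenditure utility could be structured. All specifics here (the state layout of position, velocity, and current hosting server; network sizes; hyperparameters; and using the per-step utility as the reward) are illustrative assumptions, not the authors' exact design.

```python
# Sketch of a deep Q-learning controller for dynamic service migration.
# Assumption: the agent's reward at each decision epoch is the system
# utility = income - (migration cost + service cost), per the abstract.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNet(nn.Module):
    """Maps a vehicle/network state to Q-values over candidate MEC servers."""

    def __init__(self, state_dim: int, n_servers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_servers),
        )

    def forward(self, s):
        return self.net(s)


def utility(income: float, migration_cost: float, service_cost: float) -> float:
    # System utility as defined in the abstract: income minus total cost.
    return income - (migration_cost + service_cost)


# Assumed state: (position, velocity) plus a one-hot of the current server.
N_SERVERS, STATE_DIM = 5, 2 + 5
qnet = QNet(STATE_DIM, N_SERVERS)
target = QNet(STATE_DIM, N_SERVERS)
target.load_state_dict(qnet.state_dict())
opt = optim.Adam(qnet.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)  # replay memory of (s, a, r, s') transitions
gamma, eps = 0.95, 0.1


def act(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of the MEC server to host the service."""
    if random.random() < eps:
        return random.randrange(N_SERVERS)
    with torch.no_grad():
        return int(qnet(state).argmax())


def learn(batch_size: int = 32) -> None:
    """One DQN update from a random minibatch of replayed transitions."""
    if len(buffer) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(buffer, batch_size))
    s, s2 = torch.stack(s), torch.stack(s2)
    a = torch.tensor(a)
    r = torch.tensor(r, dtype=torch.float32)
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q2 = target(s2).max(1).values  # bootstrap from the target network
    loss = nn.functional.mse_loss(q, r + gamma * q2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In use, the agent would pick a hosting server with act() at each decision epoch, observe the resulting utility as the reward, append the (state, action, reward, next state) transition to the replay buffer, and call learn(), periodically syncing the target network from qnet.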

Pages 1-6
DOI 10.1109/GLOBECOM38437.2019.9014294
Language English
Venue 2019 IEEE Global Communications Conference (GLOBECOM)
