
An Efficient Deep Reinforcement Learning Based Distributed Channel Multiplexing Framework for V2X Communication Networks


Abstract


Efficient multiplexing of channel resources is crucial in wireless networks because of link interference and spectrum scarcity. In this paper, we study the channel resource allocation problem in Vehicle-to-Everything (V2X) communication networks. We model the problem as a decentralized Markov Decision Process in which each V2V agent independently selects its channel and power level based on local environmental observations and a global network reward. We then propose a multi-agent distributed channel resource multiplexing framework based on Deep Reinforcement Learning to derive the best joint resource allocation. Furthermore, a Prioritized DDQN algorithm is used to provide a more accurate estimation target for action evaluation and to effectively reduce the overestimation of Q-values. Extensive experimental results show that the proposed framework outperforms existing works in terms of both the sum capacity of V2I channels and the packet delivery success ratio of V2V links.
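
As a rough illustration of the action-evaluation step mentioned in the abstract, the sketch below shows how a Double-DQN (DDQN) target combined with prioritized-replay importance-sampling weights can be computed in PyTorch. It is a minimal sketch under stated assumptions: the network shape, the joint (channel, power-level) action encoding, and all names (QNet, ddqn_loss) are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Illustrative Q-network: maps a local V2V observation to Q-values
    over joint (channel, power-level) actions. Sizes are placeholders."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def ddqn_loss(online: QNet, target: QNet, batch, gamma: float,
              is_weights: torch.Tensor):
    """Double-DQN target: the online net selects the next action and the
    target net evaluates it, which damps Q-value overestimation.
    `is_weights` are importance-sampling weights from prioritized replay."""
    obs, actions, rewards, next_obs, dones = batch
    q = online(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_actions).squeeze(1)
        td_target = rewards + gamma * (1.0 - dones) * next_q
    td_error = td_target - q
    loss = (is_weights * td_error.pow(2)).mean()   # weighted MSE loss
    return loss, td_error.detach().abs()           # |TD error| -> new priorities
```

In this sketch each V2V agent would train its own copy of `QNet` on local observations, while the returned absolute TD errors would be fed back to the prioritized replay buffer to update sample priorities; the exact reward shaping and observation features follow the paper and are not reproduced here.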

Pages 154-160
DOI 10.1109/ICCECE51280.2021.9342305
Language English
Journal 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE)
