IEEE Transactions on Wireless Communications | 2019

Interference Management for Cellular-Connected UAVs: A Deep Reinforcement Learning Approach

 
 
 

Abstract


In this paper, an interference-aware path planning scheme for a network of cellular-connected unmanned aerial vehicles (UAVs) is proposed. In particular, each UAV aims at achieving a tradeoff between maximizing energy efficiency and minimizing both wireless latency and the interference caused to the ground network along its path. The problem is cast as a dynamic game among the UAVs. To solve this game, a deep reinforcement learning algorithm, based on echo state network (ESN) cells, is proposed. The introduced deep ESN architecture is trained to allow each UAV to map each observation of the network state to an action, with the goal of minimizing a sequence of time-dependent utility functions. Each UAV uses the ESN to learn its optimal path, transmission power, and cell association vector at different locations along its path. The proposed algorithm is shown to reach a subgame perfect Nash equilibrium upon convergence. Moreover, an upper bound and a lower bound on the altitude of the UAVs are derived, thus reducing the computational complexity of the proposed algorithm. The simulation results show that the proposed scheme achieves better wireless latency per UAV and rate per ground user (UE) while requiring a number of steps comparable to that of a shortest-path heuristic baseline in which each UAV moves directly toward its destination. The results also show that the optimal altitude of the UAVs varies based on the ground network density and the UE data rate requirements, and plays a vital role in minimizing both the interference level on the ground UEs and the wireless transmission delay of the UAVs.
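To make the observation-to-action mapping described above concrete, the following is a minimal sketch of an ESN-style agent with a leaky reservoir and a linear readout that scores a discretized joint action (movement step, transmit power level, cell association). The class name, dimensions, and action discretization are illustrative assumptions, not details from the paper, and training of the readout weights is omitted.

```python
import numpy as np

class UAVEsnAgent:
    """Illustrative ESN agent: maps a network-state observation to action scores.

    This is a sketch under assumed dimensions, not the paper's exact architecture.
    """

    def __init__(self, obs_dim, n_actions, reservoir_size=200,
                 spectral_radius=0.9, leak_rate=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (reservoir_size, obs_dim))
        w = rng.uniform(-0.5, 0.5, (reservoir_size, reservoir_size))
        # Rescale the recurrent weights so their spectral radius is below 1,
        # which helps preserve the echo state property of the reservoir.
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w_res = w
        self.w_out = rng.uniform(-0.5, 0.5, (n_actions, reservoir_size))
        self.leak_rate = leak_rate
        self.state = np.zeros(reservoir_size)

    def step(self, observation):
        """Update the leaky reservoir with one observation; return action scores."""
        pre = self.w_in @ observation + self.w_res @ self.state
        self.state = ((1 - self.leak_rate) * self.state
                      + self.leak_rate * np.tanh(pre))
        return self.w_out @ self.state  # one score per discrete joint action


# Usage example with placeholder dimensions: 8 observation features and
# 12 discrete joint actions (e.g., 3 movement steps x 2 power levels x 2 cells).
agent = UAVEsnAgent(obs_dim=8, n_actions=12)
obs = np.zeros(8)                    # placeholder network-state observation
action = int(np.argmax(agent.step(obs)))  # greedy choice over the action set
```

In an actual training loop, the readout weights `w_out` would be updated from the observed utility along the UAV's path, while the fixed random reservoir provides the temporal memory over successive network-state observations.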

Volume 18
Pages 2125-2140
DOI 10.1109/TWC.2019.2900035
Language English
Journal IEEE Transactions on Wireless Communications

Full Text