2021 IEEE International Intelligent Transportation Systems Conference (ITSC)

Belief state separated reinforcement learning for autonomous vehicle decision making under uncertainty

Abstract


In autonomous driving, the ego vehicle and its surrounding traffic environment are always subject to uncertainties, such as parameter and structural errors and the behavioral randomness of road users. Furthermore, environmental sensors are noisy or even biased. This problem can be formulated as a partially observable Markov decision process (POMDP). Existing methods lack a good representation of historical information, which makes finding an optimal policy very challenging. This paper proposes a belief state separated reinforcement learning (RL) algorithm for the decision-making of autonomous driving in uncertain environments. We extend the separation principle from linear Gaussian systems to general nonlinear stochastic environments, where the belief state, defined as the posterior distribution of the true state, is shown to be a sufficient statistic of the historical information. The belief state is estimated from historical information by action-enhanced variational inference and is proved to satisfy the Markov property, which allows us to obtain the optimal policy using traditional RL algorithms for Markov decision processes. To improve learning performance, the policy gradient of a task-specific prior model is mixed with that computed from interaction data. The proposed algorithm is evaluated in a multi-lane autonomous driving task in which the surrounding vehicles are subject to behavioral uncertainty and observation noise. Simulation results show that, compared with existing RL algorithms, the proposed method achieves a higher average return and better driving performance.
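The abstract's central claim is that the belief state, the posterior over the true state given the action-observation history, is a Markovian sufficient statistic, so a policy over beliefs faces an ordinary MDP. As a minimal illustration of why that holds, the sketch below runs an exact Bayes filter on a toy discrete POMDP. Note the hedge: the paper approximates this posterior with action-enhanced variational inference for general nonlinear environments, whereas this sketch uses the exact tabular update; the sizes, matrices, and names (n_states, T, O, belief_update) are all hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical toy POMDP; all sizes and models below are illustrative.
n_states, n_actions, n_obs = 4, 2, 3
rng = np.random.default_rng(0)

# Transition model T[a, s, s'] = p(s' | s, a) and
# observation model O[a, s', o] = p(o | s', a), each row a distribution.
T = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
O = rng.dirichlet(np.ones(n_obs), size=(n_actions, n_states))

def belief_update(b, a, o):
    """Exact Bayes-filter update: b_{t+1} depends only on (b_t, a_t, o_{t+1}),
    which is the Markov property the abstract invokes."""
    predicted = b @ T[a]                 # predictive state distribution p(s' | b, a)
    posterior = predicted * O[a, :, o]   # reweight by observation likelihood
    return posterior / posterior.sum()   # normalize back to a distribution

# Usage: start from a uniform belief and roll the filter forward
# along an arbitrary action-observation history.
b = np.ones(n_states) / n_states
for a, o in [(0, 1), (1, 2), (0, 0)]:
    b = belief_update(b, a, o)
print(b)  # a sufficient statistic of the entire history
```

Because the update is a fixed function of (b_t, a_t, o_{t+1}) alone, any policy pi(a | b) over this belief operates in a fully observable MDP, which is the property that lets standard RL algorithms be reused once the belief is estimated.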

Pages 586-592
DOI 10.1109/ITSC48978.2021.9564576
Language English
Published in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC)
