ArXiv | 2019

How to Evaluate Proving Grounds for Self-Driving? A Quantitative Approach


Abstract


Proving grounds have been a critical component in testing and validation for Connected and Automated Vehicles (CAVs). Although quite a few world-class testing facilities have been constructed over the years, the evaluation of proving grounds themselves as testing approaches has rarely been studied. In this paper, we present the first attempt to systematically evaluate CAV proving grounds and contribute a generative, sample-based approach to assessing how well real-world traffic scenarios are represented in a proving ground. Leveraging typical use cases extracted from naturalistic driving events, we establish a strong link between proving ground testing results of CAVs and their anticipated performance on public streets. We present benchmark results of our approach on three world-class CAV testing facilities: Mcity, Almono (Uber ATG), and Kcity, demonstrating an overall evaluation of these proving grounds in terms of their capability to accommodate real-world traffic scenarios. We believe that once the effectiveness of a testing ground itself is validated, its testing results will grant more confidence for public deployment of CAVs.
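The abstract describes the sample-based idea only at a high level. As a rough, purely illustrative sketch (not the paper's actual method or data), the toy Python below samples scenario features from synthetic stand-ins for naturalistic driving events and scores a proving ground by the fraction of sampled scenarios its road infrastructure can accommodate; every facility name, feature list, and number here is a hypothetical placeholder.

```python
import random

# Hypothetical feature inventory for a proving ground: which road and
# infrastructure elements it can physically reproduce. Illustrative only.
PROVING_GROUND_FEATURES = {
    "example-ground": {
        "4-way intersection", "roundabout", "railroad crossing", "highway ramp",
    },
}

def sample_naturalistic_events(n, rng):
    """Stand-in for events extracted from naturalistic driving data.

    Each event is reduced to the single road feature it requires.
    In the paper, these would come from recorded real-world driving.
    """
    feature_pool = [
        "4-way intersection", "roundabout", "railroad crossing",
        "highway ramp", "unprotected left turn", "tunnel",
    ]
    return [rng.choice(feature_pool) for _ in range(n)]

def coverage_score(ground_features, events):
    """Fraction of sampled real-world events the proving ground can host."""
    covered = sum(1 for event in events if event in ground_features)
    return covered / len(events)

rng = random.Random(0)  # fixed seed so the sketch is reproducible
events = sample_naturalistic_events(10_000, rng)
for name, features in PROVING_GROUND_FEATURES.items():
    print(f"{name}: coverage = {coverage_score(features, events):.2%}")
```

In this reading, a higher coverage score means the facility can reproduce a larger share of the sampled real-world scenarios; the paper's generative approach is presumably far richer, but the sampling-then-scoring structure is the core intuition.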

Volume abs/1909.09079
DOI 10.1109/tits.2020.2991757
Language English
Journal ArXiv
