
Feasible and Adaptive Multimodal Trajectory Prediction with Semantic Maneuver Fusion

Abstract


Predicting the trajectories of participating vehicles is a crucial task toward full and safe autonomous driving. General unconstrained machine learning methods often produce unrealistic predictions and need to be combined with additional motion constraints. Existing work either defines a few shallow maneuvers or modes to regulate the output, or relies on vehicle dynamics as the main source of constraints, for instance via kinematic models. In contrast, we present a new approach that guides the learning models with complex semantic maneuvers, constructed from both vehicle states and the surrounding objects. We propose a novel Maneuver Fusion layer to incorporate the logic-based semantic maneuvers into deep neural networks. We also incorporate and refine different loss functions to account for the feasibility of the trajectories, adapting them to different maneuver types. Finally, we design a hierarchical multi-task learning framework with an adaptive loss to provide multimodal trajectory predictions. Our method was evaluated on a large-scale real-world urban driving dataset and shows promising improvements over the state of the art.
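
As an illustration only, the sketch below is a minimal, hypothetical PyTorch-style example of the general idea described in the abstract: a logic-derived semantic maneuver vector is fused with encoded vehicle-history features, one trajectory is predicted per maneuver mode, and the per-mode errors are weighted by the maneuver probabilities as a simple stand-in for an adaptive, maneuver-aware loss. It is not the authors' implementation; all layer names, dimensions, and the weighting scheme are assumptions.

    # Hypothetical sketch, not the paper's actual architecture or loss.
    import torch
    import torch.nn as nn

    class ManeuverFusionPredictor(nn.Module):
        def __init__(self, hist_dim=32, num_maneuvers=6, horizon=30, hidden=64):
            super().__init__()
            # Encode the past (x, y) positions of the target vehicle.
            self.encoder = nn.GRU(input_size=2, hidden_size=hist_dim, batch_first=True)
            # Fusion: concatenate the encoded state with the semantic maneuver vector.
            self.fusion = nn.Sequential(
                nn.Linear(hist_dim + num_maneuvers, hidden), nn.ReLU())
            # One (x, y) trajectory head per maneuver mode -> multimodal output.
            self.heads = nn.ModuleList(
                [nn.Linear(hidden, horizon * 2) for _ in range(num_maneuvers)])
            self.horizon = horizon

        def forward(self, history, maneuver_vec):
            # history: (B, T, 2) past positions; maneuver_vec: (B, num_maneuvers)
            _, h = self.encoder(history)               # h: (1, B, hist_dim)
            fused = self.fusion(torch.cat([h[-1], maneuver_vec], dim=-1))
            trajs = [head(fused).view(-1, self.horizon, 2) for head in self.heads]
            return torch.stack(trajs, dim=1)           # (B, num_maneuvers, horizon, 2)

    # Toy usage: weight the per-mode L2 error by the maneuver probabilities.
    model = ManeuverFusionPredictor()
    hist = torch.randn(4, 10, 2)
    maneuvers = torch.softmax(torch.randn(4, 6), dim=-1)
    gt = torch.randn(4, 30, 2)
    pred = model(hist, maneuvers)                      # (4, 6, 30, 2)
    per_mode = ((pred - gt.unsqueeze(1)) ** 2).mean(dim=(2, 3))  # (4, 6)
    loss = (maneuvers * per_mode).sum(dim=1).mean()
    loss.backward()

The key design point illustrated here is that the maneuver vector enters the network as an explicit input to the fusion step rather than as a post-hoc filter, so each prediction head can specialize to a maneuver mode.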

Pages 8530-8536
DOI 10.1109/ICRA48506.2021.9561380
Language English
Journal 2021 IEEE International Conference on Robotics and Automation (ICRA)
