
What My Motion tells me about Your Pose: A Self-Supervised Monocular 3D Vehicle Detector

 
 
 
 

Abstract


The estimation of the orientation of an observed vehicle relative to an Autonomous Vehicle (AV) from monocular camera data is an important building block in estimating its 6 DoF pose. Current Deep Learning-based solutions for placing a 3D bounding box around this observed vehicle are data-hungry and do not generalize well. In this paper, we demonstrate the use of monocular visual odometry for the self-supervised fine-tuning of a model for orientation estimation pre-trained on a reference domain. Specifically, while transitioning from a virtual dataset (vKITTI) to nuScenes, we recover up to 70% of the performance of a fully supervised method. We subsequently demonstrate an optimization-based monocular 3D bounding box detector built on top of the self-supervised vehicle orientation estimator, without requiring expensive labeled data. This allows 3D vehicle detection algorithms to be self-trained on large amounts of monocular camera data from existing commercial vehicle fleets.
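
The core idea is that the AV's own ego-motion, recovered by monocular visual odometry, can supervise the orientation estimator without any 3D labels. The sketch below shows one plausible way such a signal could be wired up in PyTorch: a toy orientation network predicts a vehicle's yaw from an image crop, and a consistency loss ties the predictions for the same (assumed static) vehicle in two consecutive frames to the ego yaw change reported by visual odometry. The network, the static-vehicle assumption, and the angular loss are illustrative placeholders chosen for this sketch, not the authors' implementation.

```python
# Illustrative only: a minimal PyTorch sketch of using an ego-motion (visual
# odometry) signal to supervise a monocular orientation network across two
# frames. OrientationNet, the static-vehicle assumption, and the yaw
# consistency loss are hypothetical choices, not taken from the paper.
import torch
import torch.nn as nn


class OrientationNet(nn.Module):
    """Toy CNN that predicts a vehicle's yaw as (sin, cos) from an image crop."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # (sin yaw, cos yaw)

    def forward(self, crop):
        sincos = self.head(self.backbone(crop))
        return torch.nn.functional.normalize(sincos, dim=-1)


def ego_consistency_loss(pred_t0, pred_t1, ego_yaw_delta):
    """Self-supervision: for a static vehicle, its yaw in the camera frame at
    t1 should equal its yaw at t0 rotated by the ego yaw change that the
    visual-odometry pipeline reports between the two frames."""
    yaw_t0 = torch.atan2(pred_t0[:, 0], pred_t0[:, 1])
    yaw_t1 = torch.atan2(pred_t1[:, 0], pred_t1[:, 1])
    expected_t1 = yaw_t0 - ego_yaw_delta
    # Compare angles on the unit circle to avoid wrap-around issues.
    return (1.0 - torch.cos(yaw_t1 - expected_t1)).mean()


if __name__ == "__main__":
    net = OrientationNet()
    crop_t0 = torch.rand(4, 3, 64, 64)   # vehicle crops at time t0
    crop_t1 = torch.rand(4, 3, 64, 64)   # the same vehicles at time t1
    ego_yaw_delta = torch.rand(4) * 0.1  # yaw change from monocular VO (rad)
    loss = ego_consistency_loss(net(crop_t0), net(crop_t1), ego_yaw_delta)
    loss.backward()                       # gradients flow without any 3D labels
```

In this toy setup the only external signal is the ego yaw change from visual odometry, which mirrors the abstract's claim that the fine-tuning requires no expensive 3D bounding box labels.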

Pages: 13293-13300
DOI: 10.1109/ICRA48506.2021.9562086
Language: English
Published in: 2021 IEEE International Conference on Robotics and Automation (ICRA)
