
Research on mapping method based on data fusion of lidar and depth camera

Abstract

At present, mobile robots equipped with a single sensor in indoor environments suffer from insufficient mapping accuracy and a limited scanning range. This paper therefore proposes a mapping method that fuses data from three sensors: a single-line lidar, a depth camera, and an IMU. First, the depth data collected by the depth camera are reduced in dimensionality to produce a two-dimensional pseudo laser scan. The environmental features scanned by the lidar, the pseudo laser data from the depth camera, and the pose information collected and computed by the IMU are then fused by Kalman filtering, which compensates for the position and attitude errors in the lidar measurements. In the mapping phase, under the open-source SLAM algorithm Gmapping, the two-dimensional local grid maps generated from the fused lidar data and from the depth camera are merged into a global map using Bayes' rule. Experiments show that the global map obtained by this method contains richer environmental information than that of a single sensor, which improves the accuracy of map construction and benefits subsequent research on navigation and obstacle avoidance.

Pages 360-365
DOI 10.1109/AEMCSE51986.2021.00082
Language English
Venue 2021 4th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE)
