Elevated LiDAR based Sensing for 6G -- 3D Maps with cm Level Accuracy
Madhushanka Padmal, Dileepa Marasinghe, Vijitha Isuru, Nalin Jayaweera, Samad Ali, Nandana Rajatheva
Centre for Wireless Communications, University of Oulu, Oulu, Finland
{dileepa.marasinghe, vijitha.isuru, nalin.jayaweera, samad.ali, nandana.rajatheva}@oulu.fi, [email protected]

Abstract—One key vertical application that will be enabled by 6G is the automation of processes with the increased use of robots. As a result, sensing and localization of the surrounding environment becomes a crucial factor for these robots to operate. Light detection and ranging (LiDAR) has emerged as an appropriate method of sensing due to its capability of generating detail-rich information with high accuracy. However, LiDARs are power-hungry devices that generate a large amount of data, and these characteristics limit their use as on-board sensors in robots. In this paper, we present a novel approach to generating an enhanced 3D map with an improved field-of-view using multiple LiDAR sensors. We utilize an inherent property of LiDAR point clouds, the rings, together with data from the inertial measurement unit (IMU) embedded in the sensor for registration of the point clouds. The generated 3D map has an accuracy of 10 cm when compared to real-world measurements. We also carry out a practical implementation of the proposed method using two LiDAR sensors. Furthermore, we develop an application that utilizes the generated map, where a robot navigates through the mapped environment with minimal support from its on-board sensors. The LiDARs are fixed in the infrastructure at elevated positions; thus the approach is applicable to vehicular and factory scenarios. Our results further validate the idea of using multiple elevated LiDARs as part of the infrastructure for various applications.
I. INTRODUCTION
Innovation in wireless communications leads us towards the sixth generation (6G) of communication networks, which aims to deliver a 1 Tbps peak data rate with 100 µs latency [1]. This opens up pathways to many vertical applications that can utilize high data rates and low-latency links. Technologies such as mmWave and THz communications and massive MIMO, along with artificial intelligence, have become key enablers for realizing such performance. One key challenge in utilizing these is to acquire precise positioning information. Positioning technologies such as the global positioning system (GPS) are available for outdoor scenarios, while indoor positioning is still a research challenge [2].

A promising vertical application of 6G is factory automation, where multiple robots perform their navigation and tasks with minimal human intervention [3]. Currently, these mobile robots navigate utilizing on-board sensors such as cameras, proximity sensors and LiDARs. Moreover, these robots lack the perception of the whole environment, as the on-board sensors are limited in both range and field-of-view (FoV). One solution to overcome this is to have inter-robot communications. However, sharing sensor information between mobile robots introduces a heavy burden on the communication links, as those sensing technologies produce a huge amount of data, in the Gbps range. In our previous works, we proposed an infrastructure-based sensing architecture with a coordinated set of elevated LiDARs (ELiDs) which develops a digital twin of the environment [4], [5]. This architecture has the potential of delivering a solution to the positioning problem while overcoming the issue of the robot's limited perception of the environment.

A LiDAR sensor is an active sensor which generates a point cloud using time-of-flight measurements between the emission of light and its return to the device. In contrast to cameras, LiDARs do not require pre-illumination of the environment, and the point clouds generated are sparse compared to images. These features qualify LiDARs as sensors for automated navigation tasks in the automobile industry and robotics. Merging the LiDAR streams from multiple LiDARs integrated into the infrastructure, as proposed, compensates for occlusion caused by obstacles, reduces sparsity and provides the potential for scaling up. Furthermore, the merged data can be utilized to generate navigation commands for the mobile robots, which can then operate with a minimal number of on-board sensors [6].

Contributions.
In this paper, we propose a novel approach to generate an elevated LiDAR map (ELiD Map) with cm-level accuracy by merging multiple point clouds using their features and the sensors embedded in the LiDAR. Enabled by the ELiD Map, we also present an application where the navigation of a robot is carried out along the shortest path between two points in the mapped environment.

The rest of the paper is organized as follows. In Section II, we describe the related work on the problem of merging point clouds and robot path planning based on point clouds. Next, we describe our proposed method of merging the point clouds to generate the ELiD Map in Section III. In Section IV, the path planning based on the ELiD Map is presented. The results of the proposed method with comparison to real-world measurements and the navigation of the robot are presented in Section V. Finally, Section VI concludes the paper.

II. RELATED WORK
The problem of merging two point clouds has been in research for several years, and multiple algorithms have been developed providing solutions as a result. Most of these methods are based on the iterative closest point (ICP) algorithm to accurately register 3D point clouds. Even though ICP is a popular algorithm, there are a few limitations. It requires a proper initial value with accurate correspondence points [7] and an approximate registration between the two point clouds to prevent it from falling into local extremes [8]. Attempting to estimate this registration automatically fails in instances when there are multiple similar views in the point clouds [9]. Instead, specific features such as the curvature of objects are detected prior to estimating the transformation, to improve the iteration speed and accuracy in registering points using k-d trees [8], [10]. Even with such improvements, the point clouds still need to be registered approximately.

When merging two point clouds, the first step is to rotate and translate one point cloud while the other is kept fixed. Then both point clouds are concatenated, yielding the merged point cloud. There are two main approaches in the literature to generate merged point clouds, which are as follows:
1) Reference object based transformation:
While estimating the transformation matrix, restricting the view of a point cloud to a single object improves the transformation accuracy. Such an object is taken as a reference by multiple LiDAR sensors and a transformation is initially estimated. This transformation is then directly applied to the point cloud enclosing the object. This object should be rotation-variant, unlike cylindrical objects, to help capture the correct viewpoint [7]. Multiple reference objects are used in the literature, not limited to cubes [11], tetrahedrons [12], planes and wall boundaries [7], to generate optimum results. It would require complete or partial manual intervention to identify where the object is placed inside the point cloud [13]. There are instances when cubes and tetrahedrons may fail to be optimum reference objects due to the lack of knowledge of the surrounding environment. Figure 1(a) shows a point cloud segment captured from a LiDAR placed in front of a cubicle. Figure 1(b) is from another LiDAR placed behind the same object, and the points look more organized. Reference-object-based transformation would disregard the 180° rotation between the frames as they look identical. In such scenarios, cameras are used to gain a better insight into the surroundings.
2) Camera based point cloud transformation:
Camera-based methods utilize intrinsic parameters such as the aspect ratio and extrinsic parameters such as the relative position of the camera. The world frame in a point cloud is identified using these parameters and a planar surface is estimated from LiDAR measurements [7]. Cameras need to be calibrated to identify surfaces, and powerful methods exist in the image processing domain for that. Such methods process and merge images to use alongside LiDARs to estimate the transformation parameters [11]. Rodriguez et al. have developed a manual calibration method with a plane having circular holes. The simplicity of the reference reduces image noise; however, it does not guarantee an optimum estimate [14]. An automated solution was introduced by Alismail et al., who use a center-marked black circle, and it can be used for calibration during operation as well [15]. An object detection problem with a planar checkerboard pattern is used by Geiger et al. to identify the extrinsic camera parameters for faster calibration. Still, it requires multiple references to localize in an environment [16].
Fig. 1: Different viewpoints from the LiDAR sensors: (a) LiDAR 1 view; (b) LiDAR 2 view.

Point cloud aided navigation has been in research for many years. Zhang et al. have developed a simultaneous localization and mapping (SLAM) based architecture to navigate a robot using on-board LiDAR sensors [17]. Extending the work on SLAM, Mahmood et al. describe a method using a velocity ramp to solve the drifting issue in robots moving in a straight line [18]. Vandapel et al. propose a laser detection and ranging (LADAR) based solution to navigate robots in unstructured cluttered environments [19]. Also, path planning based on voxelized point clouds to represent the environment in a 3D grid can be found in the literature [20].
III. PROPOSED SOLUTION
We propose a two-step solution to generate an ELiD Map. The first step estimates the rotation parameters of the point clouds and the second step estimates the translation parameters. The two parameter sets are combined to generate the transformation matrix.

The process of building the point cloud map of the surrounding environment starts by collecting data frames from multiple LiDAR sensors mounted at different viewpoints. Let the fixed point cloud chosen as reference be denoted by $S$. Every other point cloud $M_i$, for $i \in \{1, \ldots, N-1\}$, where $N$ is the total number of input point clouds and $K_{M_i}$ is the number of points in the point cloud $M_i$, is rotated and translated with respect to the reference point cloud $S$ to generate a transformed point cloud $M'_i$. All $M'_i$ point clouds are then concatenated with $S$ to generate the ELiD Map $\Upsilon$.

A point cloud $M$ can be rotated and translated about the three major axes (x, y and z) by multiplying it with a transformation matrix $T$ as $MT$. The $T$ matrix is a combination of a rotation matrix $R \in \mathbb{R}^{3 \times 3}$ and a translation vector $t \in \mathbb{R}^{3 \times 1}$. They are placed into a $4 \times 4$ identity matrix, leaving the fourth row intact, to generate the $T$ matrix.
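As an illustration of this notation, the following Python/NumPy sketch builds $T$ by placing $R$ and $t$ into a $4 \times 4$ identity matrix and applies it to a point cloud held as an array of row vectors; the function names are illustrative and this is not the paper's implementation.

```python
import numpy as np

def build_transform(R, t):
    """Place a 3x3 rotation R and a 3-vector translation t into a 4x4 identity
    matrix, leaving the fourth row [0, 0, 0, 1] intact."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def transform_cloud(points, T):
    """Apply T to a (K x 3) point cloud. Points are treated as homogeneous row
    vectors, so the paper's product M T corresponds to multiplying by the
    transpose of T in this column-vector convention."""
    hom = np.hstack([points, np.ones((points.shape[0], 1))])  # K x 4
    return (hom @ T.T)[:, :3]                                 # back to K x 3
```

Concatenating the transformed clouds obtained this way with the reference cloud $S$ then yields the merged map described above.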
A. Rotation parameters

The first step is to estimate the roll ($\phi$), pitch ($\theta$) and yaw ($\psi$) angles around the x, y and z axes, respectively. The roll ($\phi$) and pitch ($\theta$) estimations are based on the inertial measurement unit (IMU) sensor readings embedded in the point cloud data frames. This differs from state-of-the-art methods, which use a whole point cloud or a part of it to estimate the rotation parameters using ICP-algorithm-based implementations [9].
1) Estimating roll ($\phi$) and pitch ($\theta$) angles: The relative rotation around the x and y axes of a point cloud $M$ with respect to a reference point cloud $S$ is estimated using IMU sensor readings. These sensors are built into the LiDAR modules and capture linear acceleration values along the x, y and z axes, keeping the horizontal plane of the LiDAR sensor as the xy plane. Let the linear acceleration values along the x, y and z axes of the LiDAR sensor generating the point cloud be $g_{xM}$, $g_{yM}$ and $g_{zM}$. Similarly, let the corresponding linear acceleration values of the LiDAR sensor generating the reference point cloud be $g_{xS}$, $g_{yS}$ and $g_{zS}$. The relative roll ($\phi$) and pitch ($\theta$) angles can be derived using

$$\phi = \tan^{-1}\left[\frac{g_{yM}\sqrt{g_{xS}^2 + g_{zS}^2} - g_{yS}\sqrt{g_{xM}^2 + g_{zM}^2}}{g_{yM}\,g_{yS} + \sqrt{\left(g_{xM}^2 + g_{zM}^2\right)\left(g_{xS}^2 + g_{zS}^2\right)}}\right], \quad (1)$$

and

$$\theta = \tan^{-1}\left[\frac{g_{xS}\,g_{zM} - g_{xM}\,g_{zS}}{g_{zM}\,g_{zS} + g_{xM}\,g_{xS}}\right]. \quad (2)$$

Fig. 2: Variations and offsets in IMU readings (linear acceleration along the z axis of the two LiDAR sensors over time).

This method cannot be directly implemented using LiDAR sensors out of the box. The reason is that, even when two such sensors are placed with the same orientation, they will not produce identical readings, as seen from Figure 2. A calibration step is required to nullify these offsets. In this calibration step, six sets of readings are taken by placing the LiDAR sensor facing towards and away from the x, y and z axes. These readings consist of three sets of minimum and maximum linear acceleration values along each axis. Let the time average of these values along an arbitrary axis $\omega$ be $g_{\omega(\min)}$ and $g_{\omega(\max)}$. The corrected linear acceleration value $g_{\omega c}$ along the same axis for a reading $g_{\omega r}$ can then be interpreted as

$$g_{\omega c} = 2g \times \frac{g_{\omega r} - g_{\omega(\min)}}{g_{\omega(\max)} - g_{\omega(\min)}} - g. \quad (3)$$

The factor $g$ in (3), the acceleration due to gravity, is extracted from the LiDAR sensor datasheet in [21]. Instead of raw readings from the IMU sensor, these calibrated values should be used in (1) and (2). Even with the calibrated readings, the use of a single calculated value is discouraged due to the varying nature of IMU readings, which follow a normal distribution. Accordingly, the time average of multiple readings will likely yield a more accurate measurement.
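The calibration in (3) and the angle estimates in (1) and (2) map directly to code. Below is a minimal NumPy sketch, assuming the raw accelerations have already been time-averaged as described; the gravity constant and all names are assumptions, not values from the implementation.

```python
import numpy as np

G = 9.80665  # standard gravity; the actual factor in the paper is taken from the sensor datasheet [21]

def calibrate(g_raw, g_min, g_max):
    """Map a raw time-averaged reading onto [-g, +g] using the per-axis min/max values, as in (3)."""
    return 2.0 * G * (g_raw - g_min) / (g_max - g_min) - G

def roll_pitch(gM, gS):
    """Relative roll and pitch of cloud M w.r.t. reference S from calibrated
    accelerations gM = (gxM, gyM, gzM) and gS = (gxS, gyS, gzS), as in (1) and (2).
    arctan2 is the quadrant-aware form of the tan^-1 of a ratio."""
    gxM, gyM, gzM = gM
    gxS, gyS, gzS = gS
    aM = np.hypot(gxM, gzM)      # sqrt(gxM^2 + gzM^2)
    aS = np.hypot(gxS, gzS)      # sqrt(gxS^2 + gzS^2)
    phi = np.arctan2(gyM * aS - gyS * aM, gyM * gyS + aM * aS)        # (1)
    theta = np.arctan2(gxS * gzM - gxM * gzS, gzM * gzS + gxM * gxS)  # (2)
    return phi, theta
```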
2) Estimating yaw ($\psi$) angle: The relative rotation between two point clouds around the z axis is estimated using a feature of LiDAR point clouds. These point clouds consist of sets of point rings, which can be interpreted as slices of the point cloud along the z axis. The first and last sets of point rings are highly warped, while the middle point rings are almost parallel to the LiDAR xy plane. This behavior prompts us to choose the middle point ring to estimate the relative yaw angle.

Fig. 3: Selecting point segments to estimate the relative yaw angle.

The first step in this process is to choose two point segments, one from the middle point ring of the point cloud $M$ (red) and one from the reference point cloud $S$ (black), as shown by the two arrows in Figure 3. When selecting these point segments, the user must choose segments with at least three consecutive points, such that the points approximately lie on a straight line and capture the same region of the environment represented in the point cloud. The colored blobs in Figure 3 satisfy these three requirements. The two point segments are then processed with the random sample consensus (RANSAC) algorithm [22] to estimate a first-order polynomial of the form $f(x) = mx + c$, where the gradient $m$ can also be represented as $\tan(\vartheta)$. Assume that for the selected set of points $\mathcal{M} = \{M_i \mid M_i \in M\}$ the gradient returned by the RANSAC algorithm is $\tan(\vartheta_M)$. Similarly, for a selected set of points $\mathcal{S} = \{S_i \mid S_i \in S\}$ the returned gradient is $\tan(\vartheta_S)$. The relative yaw ($\psi$) angle between the two point clouds can then be calculated using

$$\psi = \tan^{-1}\left(\tan\vartheta_M\right) - \tan^{-1}\left(\tan\vartheta_S\right). \quad (4)$$

Point cloud data points from a LiDAR sensor are always embedded with a noise component in the time-of-flight measurement readings. Hence, the coordinate values of each point in the point cloud vary along all three axes. This variance depends on a few factors, such as the type and material of the reflecting surface, the angle of attack (AoA) on the reflective surface, the distance from the LiDAR sensor to the surface, and timing errors in the LiDAR sensor. Therefore, obtaining an instantaneous value for the relative yaw angle using (4) from only a single point segment is not guaranteed to produce a valid estimate of the required yaw angle. As a result, multiple readings are recorded on the same point segment selection to obtain multiple angle estimates, and the average value is taken as the relative yaw angle.

The relative roll, pitch and yaw angles can then be used to populate the rotation matrix $R$ as

$$R = \begin{bmatrix} c(\psi) & -s(\psi) & 0 \\ s(\psi) & c(\psi) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c(\theta) & 0 & s(\theta) \\ 0 & 1 & 0 \\ -s(\theta) & 0 & c(\theta) \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & c(\phi) & -s(\phi) \\ 0 & s(\phi) & c(\phi) \end{bmatrix}, \quad (5)$$

where $c(\cdot) = \cos(\cdot)$ and $s(\cdot) = \sin(\cdot)$.
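A compact sketch of the yaw estimate in (4) and the rotation matrix in (5) follows, assuming the two user-selected ring segments are already available as arrays of (x, y) coordinates; the small RANSAC-style line fit below merely stands in for the RANSAC implementation of [22], and all names are illustrative.

```python
import numpy as np

def segment_gradient(xy, iters=100, thresh=0.01, rng=np.random.default_rng(0)):
    """Fit f(x) = m*x + c to a ring segment with a small RANSAC-style loop; returns m = tan(theta)."""
    best_m, best_inliers = 0.0, -1
    for _ in range(iters):
        i, j = rng.choice(len(xy), size=2, replace=False)
        dx = xy[j, 0] - xy[i, 0]
        if abs(dx) < 1e-9:
            continue
        m = (xy[j, 1] - xy[i, 1]) / dx
        c = xy[i, 1] - m * xy[i, 0]
        inliers = np.sum(np.abs(xy[:, 1] - (m * xy[:, 0] + c)) < thresh)
        if inliers > best_inliers:
            best_m, best_inliers = m, inliers
    return best_m

def relative_yaw(seg_M, seg_S):
    """Relative yaw between clouds M and S from their middle-ring segments, as in (4)."""
    return np.arctan(segment_gradient(seg_M)) - np.arctan(segment_gradient(seg_S))

def rotation_matrix(phi, theta, psi):
    """Compose R = Rz(psi) Ry(theta) Rx(phi), as in (5)."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    return Rz @ Ry @ Rx
```

In practice, as noted above, the yaw estimate is averaged over several repetitions of the same segment selection before it is used in (5).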
B. Translation parameters

With the rotation matrix estimated, the next step is to estimate the parameters for the linear translation. This starts by rotating the point cloud using the estimated rotation matrix $R$ in (5). Let us denote the rotated point cloud as an intermediate point cloud $\hat{M}$. The two point clouds will now only have a linear offset between them, so estimating the linear translation parameters amounts to estimating these offset values. This process is carried out along one axis and then repeated for the remaining axes until all three axes are covered. The implementation is derived from the ICP algorithm. The user then selects two points from a planar surface, one from the intermediate point cloud and the other from the reference point cloud. When selecting these points, the following requirements should be satisfied:
• Both points have to be on fairly flat surfaces facing each other along the selected axis.
• The surfaces on which the points lie are preferred to be from the same region in the real environment.
• If they are not in the same region, the surfaces should belong to a similar planar region (e.g., long running walls).

Elaborating on these requirements, fairly flat surfaces are required to ensure that the neighboring points surrounding the selected point lie approximately on the same plane. The surfaces on which the points are located should be the same surface; if not, they should at least belong to the same region in the environment to ensure that the offset is estimated against a common viewpoint. Such a scenario can occur when the two point clouds do not have an overlapping region in the real environment but capture two different segments of a single long wall. User intervention is required at this point to select two such points from the point clouds subject to the given constraints.

Once the two points are selected, the four neighboring points closest to the selected point are filtered from each point cloud. When picking these four points, two of them are selected from the same point ring, and the other two points are picked from the rings above and below. The filtered points in the reference point cloud can be represented as $S_\omega = \{S_i \mid S_i \in S, \; i = 1, \ldots, 4\}$. Similarly, the filtered points from the intermediate point cloud $\hat{M}$ can be represented as $\hat{M}_\omega = \{\hat{M}_i \mid \hat{M}_i \in \hat{M}, \; i = 1, \ldots, 4\}$. The two sets of points $\hat{M}_\omega$ and $S_\omega$ are processed with the ICP algorithm to estimate the translation vector component along the axis of interest, denoted by $\omega$. The ICP algorithm usually returns a complete transformation matrix; disregarding the rotational components, the final column containing the translation vector is extracted. From this translation vector, the value corresponding to the $\omega$ axis is extracted, and it is the offset of the intermediate point cloud $\hat{M}$ relative to the reference point cloud $S$ along the $\omega$ axis.

A similar approach is carried out with the other two axes, which completes the estimation of all the translation parameters to populate the transformation matrix $T$. The same process is independently applied to every other point cloud $M_i$ to obtain a unique transformation matrix $T_i$. Once all the transformation matrices are estimated, the ELiD Map $\Upsilon$ can be derived from

$$\Upsilon = M_1 T_1 + \cdots + M_{N-1} T_{N-1} + S. \quad (6)$$
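As a sketch of this step, the snippet below estimates the per-axis offset from the user-selected neighborhoods and assembles the map as in (6); a centroid difference over the four matched neighbors stands in here for extracting the translation column of an ICP result, and all names are assumptions rather than the actual implementation.

```python
import numpy as np

def axis_offset(neigh_M, neigh_S, axis):
    """Offset of the rotated (intermediate) cloud relative to the reference along one axis.

    neigh_M, neigh_S : (4 x 3) arrays of the selected point's four ring neighbors
    axis             : 0, 1 or 2 for x, y or z
    Returns the translation to apply to the intermediate cloud along that axis.
    """
    return neigh_S[:, axis].mean() - neigh_M[:, axis].mean()

def build_elid_map(reference, clouds, transforms):
    """Concatenate the reference cloud S with every transformed cloud M_i T_i, cf. (6)."""
    merged = [reference]
    for M, T in zip(clouds, transforms):
        hom = np.hstack([M, np.ones((len(M), 1))])   # homogeneous row vectors
        merged.append((hom @ T.T)[:, :3])
    return np.vstack(merged)
```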
This abstract method of generating a single point cloud by combining multiple point clouds can be integrated into a practical implementation using the robot operating system (ROS). Figure 4 illustrates the high-level view of the proposed system architecture to process, generate and export an ELiD Map using three different LiDAR sensors.

Fig. 4: A high-level view of the proposed ROS-based system implementation.

Most commercially available LiDAR sensors provide an interface to integrate them into ROS. ROS also provides a platform to process and visualize sensor data. This open-source platform offers fast execution of programs based on the C++ programming language and supports GPU-based applications.

IV. ROBOT PATH PLANNING WITH ELiD MAP

In this section, we consider a factory automation scenario supported by an ELiD Map. The map is used to plot the path of and steer a ground robot moving towards its destination using a minimal number of on-board sensors. The process begins by voxelizing the generated ELiD Map to reduce the resolution without losing the precision required for the robot to navigate while avoiding obstacles. Then the voxelized map is used to estimate the shortest path between two given points on the ELiD Map using an algorithm based on breadth-first search (BFS) and Dijkstra's algorithm.

Fig. 5: Overview of the path generation process.

The estimated path is then translated into a set of basic navigation commands, such as a direction and a move for a fixed time. The offline-computed commands are then uploaded to the robot over a wireless link. Figure 5 illustrates the high-level overview of implementing the path planning and navigation process.
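A minimal sketch of the planning step is given below: the ELiD Map is voxelized into an occupancy grid and a breadth-first search finds the shortest voxel path between a start and a goal cell. This is a simplified 2D-grid stand-in for the BFS/Dijkstra-based planner described above (the ELiD Map itself is voxelized in 3D); the voxel size, the bounds and the names are assumptions.

```python
import numpy as np
from collections import deque

def voxelize(points, voxel=0.1):
    """Mark occupied grid cells from an (N x 3) point cloud, projected to the ground plane."""
    idx = np.floor(points[:, :2] / voxel).astype(int)
    return {tuple(c) for c in idx}

def shortest_path(occupied, start, goal, bounds):
    """4-connected BFS over free cells inside bounds = (xmin, xmax, ymin, ymax).

    Returns the list of cells from start to goal, or None if the goal is unreachable.
    """
    xmin, xmax, ymin, ymax = bounds
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (xmin <= nxt[0] <= xmax and ymin <= nxt[1] <= ymax
                    and nxt not in occupied and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None
```

Inflating the occupied cells by the robot's footprint before the search is one way to account for the voxelization offset discussed in Section V.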
V. RESULTS AND DISCUSSION
An ELiD Map of a room generated with two Ouster OS1-16 LiDAR sensors using the proposed solution is shown in Figure 6. The black-colored point cloud is from a LiDAR placed at location 1, while the red-colored point cloud is from a LiDAR placed at location 2. The LiDARs are positioned such that they have a linear offset of about two meters along the x axis.

Fig. 6: An ELiD Map using two point clouds (red and black).
A. Comparing merged point clouds with real layouts
We use the ROS RViz tool to visualize the point clouds [23]. It allows measuring distances between points in real time. A set of such measurements is tabulated in Table I. The actual length was measured using a measuring tape and the point length was measured in RViz. The error column shows the difference between the two measurements; the error ranges between approximately 5 and 10 cm regardless of the distance from the LiDAR sensors. Notice that this accuracy is sufficient to utilize the ELiD Map for identifying the position of an object in the mapped environment, which can be exploited in algorithms to improve wireless connectivity in the mmWave/sub-THz region.

B. Time consumption
The timing analysis was carried out by combining two real-time LiDAR data streams with playback of recorded streams to resemble a multi-LiDAR setup and investigate the effect of an increasing number of LiDARs on the computation time, which is depicted in Figure 7. The red curve shows the time consumption for calculating the transformation matrices and the black curve shows the time consumption for concatenating multiple point clouds. The timing values were calculated on a laptop running the Ubuntu 18.04 LTS 64-bit operating system, with a 2.3 GHz Intel Core i5 processor and 7.7 GB of usable memory, without graphics processing unit (GPU) support. These timing values were collected over a period of 30 seconds for each LiDAR setup. The concatenation curve has a higher gradient, since the accumulator has to handle an increasing number of points with an increasing number of LiDAR sensors. The transformations can be estimated independently, which results in a lower gradient.

TABLE I: Comparison between point cloud measurements and actual measurements

Scenario              | Actual length (m) | Point length (m) | Error (m)
Minimum range         | 0.70              | 0.7502           | 0.0502
Width of a cupboard   | 0.93              | 0.9860           | 0.0560
Height of a cupboard  | 0.92              | 0.9995           | 0.0795
Width of a room       | 3.12              | 3.1689           | 0.0489
Length of a room      | 5.00              | 5.0951           | 0.0951

A LiDAR sensor generates a point cloud data stream at a rate of 20 Hz, imposing a theoretical limit on the frequency of ELiD Map generation. The proposed solution is able to process and transform point clouds from five different LiDAR sensors within 50 ms, which is within the point cloud generation interval of 50 ms. This timing can be reduced further using multi-threaded applications running in parallel with GPU support to merge point clouds in a binary tree pattern. Also, in a real implementation, such a multi-LiDAR system needs to be connected using Ethernet hubs to feed a continuous, collision-free LiDAR data stream to the central processor, which will also add some delay.

Fig. 7: Time consumption for merging and transforming multiple point clouds.
C. LiDAR Setup time
The proposed solution is implemented as a two-step process. The first step is a one-time setup stage where the transformation matrix is estimated for each point cloud. The second stage is the operation stage where the concatenation takes place. Once the setup stage is completed, the LiDAR sensors are assumed to remain fixed. The setup stage requires manual intervention and takes about twelve minutes to complete. This includes tasks such as calibrating the IMU sensor readings (300 s), estimating the rotation parameters (90 s) and estimating the translation parameters (300 s). This setup time can be improved further by reducing the manual intervention required to select specific features from the point clouds. For this, a machine learning based feature detection algorithm could be implemented to choose such points automatically.
D. Path planning with voxelized ELiD Map
A ground setup with cubicles forming a simple maze was laid out in the university premises to demonstrate the application scenario. Two Ouster OS1-16 LiDAR sensors were placed at two corners of the room and an ELiD Map was generated using the proposed method. A Kobuki TurtleBot was used as the ground robot. Figure 8 shows the calculated shortest voxel path in red between two points marked as green and blue dots on the voxelized ELiD Map.

Fig. 8: Planning the shortest path in a point cloud.

The offset between the real-world measurements and the voxelized ELiD Map, introduced as a result of the voxelization, was taken into consideration while calculating the path. This ensures that the robot avoids collisions with obstacles when passing them. As described in Section IV, the path is calculated offline and uploaded to the robot using a Secure Shell (SSH) connection established over a WiFi link. Figure 9 depicts an instance in the demonstration where the robot (highlighted in red) is utilizing the generated path information to reach its destination. The blue circles mark where the LiDAR sensors were placed.

Fig. 9: An image of the setup where the robot is navigating along the path generated based on the ELiD Map.
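As an illustration of how a voxel path can be turned into the basic direction/move commands mentioned in Section IV, the sketch below collapses consecutive collinear grid steps into "turn to heading, move for a fixed time per cell" commands; the command format, the per-cell travel time and the names are assumptions, not the implementation used in the demonstration.

```python
import math

def path_to_commands(path, cell_time=1.0):
    """Convert a list of adjacent grid cells into (heading_deg, move_seconds) commands.

    path      : [(x0, y0), (x1, y1), ...] adjacent grid cells from the planner
    cell_time : assumed time to traverse one cell at the robot's fixed speed
    """
    commands = []
    i = 0
    while i < len(path) - 1:
        dx, dy = path[i + 1][0] - path[i][0], path[i + 1][1] - path[i][1]
        heading = math.degrees(math.atan2(dy, dx))  # heading in the grid frame
        run = 1
        # extend the run while the step direction stays the same
        while (i + run + 1 < len(path)
               and (path[i + run + 1][0] - path[i + run][0],
                    path[i + run + 1][1] - path[i + run][1]) == (dx, dy)):
            run += 1
        commands.append((heading, run * cell_time))
        i += run
    return commands
```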
VI. CONCLUSION
In this paper, we have focused on a novel approach to generating an ELiD Map using elevated sensors integrated into the infrastructure, targeting factory automation, which plays a key role in enabling the 6G evolution. First, we have generated the ELiD Map utilizing inherent properties of the LiDAR point clouds: the rings and the embedded IMU readings from the LiDAR sensors. Then, we have carried out experiments showing that the proposed method provides an ELiD Map with an accuracy of 10 cm when compared to real-world measurements. Finally, we have demonstrated the usability of the generated ELiD Map with a practical implementation of path planning and navigation of a robot. In this work, the path planning was done in an offline manner for a static environment. As an extension of the application, this can be improved so that the central processor detects dynamic obstacles in near real time and communicates the necessary navigation commands to avoid such obstacles to the robot through a fast wireless link.
REFERENCES
[1] N. Rajatheva et al.
[4] IEEE 89th Vehicular Technology Conference, pp. 1–7, Apr. 2019.
[5] N. Jayaweera, D. Marasinghe, N. Rajatheva, and M. Latva-Aho, "Factory Automation: Resource Allocation of an Elevated LiDAR System with URLLC Requirements," pp. 1–5, 2020.
[6] B. Holfeld, D. Wieruch, T. Wirth, L. Thiele, S. A. Ashraf, J. Huschke, I. Aktas, and J. Ansari, "Wireless Communication for Factory Automation: an Opportunity for LTE and 5G Systems," IEEE Communications Magazine, vol. 54, no. 6, pp. 36–43, 2016.
[7] V. Matiukas and D. Miniotas, "Point Cloud Merging for Complete 3D Surface Reconstruction," Electronics and Electrical Engineering, vol. 113, Sep. 2011.
[8] Y. He, B. Liang, J. Yang, S. Li, and J. He, "An Iterative Closest Points Algorithm for Registration of 3D Laser Scanner Point Clouds with Geometric Features," Sensors, vol. 17, p. 1862, Aug. 2017.
[9] Y. Liu, "Automatic Registration of Overlapping 3D Point Clouds using Closest Points," Image and Vision Computing, vol. 24, 2006.
[10] A. Nuchter, K. Lingemann, and J. Hertzberg, "Cached k-d Tree Search for ICP Algorithms," pp. 419–426, Aug. 2007.
[11] Z. Pusztai, I. Eichhardt, and L. Hajder, "Accurate Calibration of Multi-LiDAR-Multi-Camera Systems," Sensors, vol. 18, p. 2139, Jul. 2018.
[12] X. Gong, Y. Lin, and J. Liu, "3D LiDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron," Sensors (Basel, Switzerland), vol. 13, pp. 1902–18, Feb. 2013.
[13] S. Shi, Z. Wang, J. Shi, X. Wang, and H. Li, "From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, pp. 1–1, Feb. 2020.
[14] S. Rodriguez Florez, V. Fremont, and P. Bonnifait, "Extrinsic Calibration Between a Multi-layer LiDAR and a Camera," MFI 2008 - Multisensor Fusion and Integration for Intelligent Systems, 2008.
[15] H. Alismail, D. Baker, and B. Browning, "Automatic Calibration of a Range Sensor and Camera System," Oct. 2012.
[16] A. Geiger, F. Moosmann, O. Car, and B. Schuster, "Automatic Camera and Range Sensor Calibration using a Single Shot," 2012.
[17] C. Zhang, J. Wang, J. Li, and M. Yan, "2D Map Building and Path Planning Based on LiDAR," (Los Alamitos, CA, USA), pp. 783–787, IEEE Computer Society, Jul. 2017.
[18] A. Mahmood and R. Bicker, "Path Planning, Motion Control and Obstacle Detection of Indoor Mobile Robot," Oct. 2016.
[19] N. Vandapel, J. Kuffner, and O. Amidi, "Planning 3-D Path Networks in Unstructured Environments," May 2005.
[20] C. Lartigue, Y. Quinsat, C. Mehdi-Souzani, A. Guarato, and S. Tabibian, "Voxel-based Path Planning for 3D Scanning of Mechanical Parts," Computer-Aided Design and Applications, vol. 11, 2013.
[21] Ouster Inc., "OS1 Mid-Range High-Resolution Imaging Lidar," https://data.ouster.io/downloads/OS1-lidar-sensor-datasheet.pdf.
[22] M. A. Fischler and R. C. Bolles, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Commun. ACM, vol. 24, no. 6, 1981.
[23] H. Kam, S.-H. Lee, T. Park, and C.-H. Kim, "RViz: a Toolkit for Real Domain Data Visualization."