R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping
Jiarong Lin, Chunran Zheng, Wei Xu, and Fu Zhang
Abstract—In this letter, we propose a robust, real-time, tightly-coupled multi-sensor fusion framework that fuses measurements from LiDAR, an inertial sensor, and a visual camera to achieve robust and accurate state estimation. Our proposed framework is composed of two parts: a filter-based odometry and a factor graph optimization. To guarantee real-time performance, we estimate the state within the framework of an error-state iterated Kalman filter, and further improve the overall precision with the factor graph optimization. Taking advantage of measurements from all individual sensors, our algorithm is robust to various visual-failure and LiDAR-degenerated scenarios, and is able to run in real time on an on-board computation platform, as shown by extensive experiments conducted in indoor, outdoor, and mixed environments of different scales. Moreover, the results show that our proposed framework can improve the accuracy over state-of-the-art LiDAR-inertial or visual-inertial odometry. To share our findings and to contribute to the community, we open source our code on GitHub.

I. INTRODUCTION
With the capacity of estimating the ego-motion in six degrees of freedom (DOF) and simultaneously building dense, high-precision maps of the surrounding environment, LiDAR-based SLAM has been widely applied in the field of autonomous driving vehicles [1], drones [2, 3], etc. With the development of LiDAR technologies, the emergence of low-cost LiDARs (e.g., Livox LiDAR [4]) makes LiDAR more accessible. Following this trend, a number of related works [5]–[8] have drawn the attention of the community to this field of research. However, LiDAR-based SLAM methods easily fail (i.e., degenerate) in scenarios with few available geometric features, which is more critical for LiDARs with a small FoV [9]. In this work, to address the degeneration problem of LiDAR-based odometry, we propose a LiDAR-inertial-visual fusion framework to obtain state estimation of higher robustness and accuracy. The main contributions of our work are:

• We take advantage of measurements from LiDAR, inertial, and camera sensors and fuse them in a tightly-coupled way. Experiments show that our method is robust in various challenging scenarios with aggressive motion, sensor failure, and even in narrow tunnel-like environments with a large number of moving objects and a small LiDAR field of view.

• We propose a framework with a high-rate filter-based odometry and a low-rate factor graph optimization. The filter-based odometry fuses the measurements of LiDAR, inertial, and camera sensors within an error-state iterated Kalman filter to achieve real-time performance. The factor graph optimization refines a local map of keyframe poses and visual landmark positions.

• By tightly fusing different types of sensors, we achieve high-accuracy state estimation. Experimental results show that our system is accurate enough to be used to reconstruct large-scale, indoor-outdoor dense 3D maps of building structures (see Fig. 1).

Our system is carefully engineered and open sourced to benefit the whole robotics community.

J. Lin, C. Zheng, W. Xu and F. Zhang are with the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong SAR, China. {jiarong.lin, zhengcr, wuweii, fuzhang}@hku.hk. https://github.com/hku-mars/r2live

Fig. 1: We use our proposed method to reconstruct a high-precision, large-scale, indoor-outdoor, dense 3D map of the main building of the University of Hong Kong (HKU). The green path is the computed trajectory and the 3D points are colored by height.

II. RELATED WORK
In this section, we review existing works closely related to ours, including LiDAR-only odometry and mapping, LiDAR-inertial fusion, and LiDAR-inertial-visual methods.
A. LiDAR Odometry and Mapping
Zhang et al. [10] first proposed a LiDAR odometry and mapping framework, LOAM, which combines the ICP method [11] with point-to-plane and point-to-edge distances. It achieves good odometry and mapping performance by running the two modules at different rates. To make the algorithm run in real time on a computationally limited platform, Shan et al. [12] propose a lightweight and ground-optimized LOAM (LeGO-LOAM), which discards unreliable features in the step of ground plane segmentation. These works are mainly based on multi-line spinning LiDARs. Our previous works [9, 13] develop an accurate and robust algorithm by considering the low-level physical properties of solid-state LiDARs with a small FoV. However, these methods are solely based on LiDAR measurements and are very vulnerable to featureless environments or other degenerated scenarios.

B. LiDAR-Inertial Odometry
The existing works on LiDAR-inertial fusion can be categorized into two classes: loosely-coupled and tightly-coupled. Loosely-coupled methods deal with the two sensors separately to infer their motion constraints, while tightly-coupled approaches directly fuse LiDAR and inertial measurements through joint optimization. Compared with loosely-coupled methods, tightly-coupled methods show higher robustness and accuracy, and have therefore drawn increasing research interest recently. For example, the authors in [14] propose LIOM, which uses a graph optimization based on priors from a LiDAR-inertial odometry and a rotation-constrained refinement method. Compared with the former algorithm, LIO-SAM [15] optimizes a sliding window of keyframe poses in a factor graph to achieve higher accuracy. Similarly, Li et al. propose LiLi-OM [16] for both conventional and solid-state LiDARs based on sliding-window optimization. LINS [17] is the first tightly-coupled LIO that solves the 6 DOF ego-motion via iterated Kalman filtering. To lower the high computation load in calculating the Kalman gain, our previous work FAST-LIO [5] proposes a new formula for the Kalman gain computation, whose resultant computational complexity depends on the state dimension instead of the measurement dimension. The work achieves up to a 50 Hz odometry and mapping rate while running on embedded computers onboard a UAV.
C. LiDAR-Inertial-Visual Odometry
On the basis of LiDAR-inertial methods, LiDAR-inertial-visual odometry incorporating measurements from visual sensors shows higher robustness and accuracy. In the work of [18], the LiDAR measurements are used to provide depth information for camera images, forming a system similar to an RGB-D camera that can leverage existing visual SLAM work such as ORB-SLAM [19]. This is a loosely-coupled method, as it ignores the direct constraints on the state imposed by LiDAR measurements. Zuo et al. [20] propose LIC-Fusion, a framework combining IMU measurements, sparse visual features, and LiDAR plane and edge features with online spatial and temporal calibration based on the MSCKF framework, which is claimed to be more accurate and robust than state-of-the-art methods. In quick succession, their follow-up work, LIC-Fusion 2.0 [21], introduces a novel plane-feature tracking algorithm across multiple LiDAR scans within a sliding window to make LiDAR scan matching more robust.

To the best of our knowledge, our work is the first open-sourced tightly-coupled LiDAR-inertial-visual fusion system. By fusing different types of sensor measurements, we achieve state estimation of higher accuracy and robustness. Extensive results show that our system is more accurate and robust than state-of-the-art LiDAR-inertial and visual-inertial fusion estimators (e.g., FAST-LIO [5] and VINS-Mono [22]).

III. THE OVERVIEW OF OUR SYSTEM
The overview of our system is shown in Fig. 2. The filter-based odometry, taking advantage of the measurements from the LiDAR, camera, and inertial sensors, estimates the state within the framework of an error-state iterated Kalman filter, as detailed in Section IV. To further improve the visual measurements, we leverage a factor graph optimization to refine the visual landmarks within a local sliding window, as detailed in Section V.

[Fig. 2 block diagram (R2LIVE system overview): LiDAR input (10~50 Hz), IMU input, and camera input feed planar-feature extraction & motion compensation, state propagation, and FAST corner detection & KLT optical flow; plane-to-plane residual computation and PnP reprojection errors drive the error-state iterated Kalman filter update (filter-based odometry) with its prior estimation, producing LiDAR-rate, IMU-rate, and camera-rate odometry; tracked features, triangulated visual landmarks, LiDAR feature maps, marginalization, and the factors within the sliding window form the factor graph optimization.]
Fig. 2: The overview of our proposed method.

IV. FILTER-BASED ODOMETRY
A. The boxplus "⊞" and boxminus "⊟" operators

In this paper, we make use of the "⊞" and "⊟" operations encapsulated on a manifold $\mathcal{M}$ to simplify the notations and derivations. Let $\mathcal{M}$ be the manifold on which the system state lies. Since a manifold is locally homeomorphic to $\mathbb{R}^n$, where $n$ is the dimension of $\mathcal{M}$, we can use two operators, "⊞" and "⊟", establishing a bijective map between a local neighborhood of $\mathcal{M}$ and its tangent space $\mathbb{R}^n$ [23]:
$$\boxplus: \mathcal{M}\times\mathbb{R}^n \rightarrow \mathcal{M},\qquad \boxminus: \mathcal{M}\times\mathcal{M} \rightarrow \mathbb{R}^n \tag{1}$$
For the compound manifold $\mathcal{M} = SO(3)\times\mathbb{R}^n$, we have:
$$\begin{bmatrix}\mathbf{R}\\ \mathbf{a}\end{bmatrix}\boxplus\begin{bmatrix}\mathbf{r}\\ \mathbf{b}\end{bmatrix} \triangleq \begin{bmatrix}\mathbf{R}\cdot\mathrm{Exp}(\mathbf{r})\\ \mathbf{a}+\mathbf{b}\end{bmatrix},\qquad \begin{bmatrix}\mathbf{R}_1\\ \mathbf{a}_1\end{bmatrix}\boxminus\begin{bmatrix}\mathbf{R}_2\\ \mathbf{a}_2\end{bmatrix} \triangleq \begin{bmatrix}\mathrm{Log}(\mathbf{R}_2^T\mathbf{R}_1)\\ \mathbf{a}_1-\mathbf{a}_2\end{bmatrix}$$
where $\mathbf{r}\in\mathbb{R}^3$, $\mathbf{a},\mathbf{b}\in\mathbb{R}^n$, and $\mathrm{Exp}(\cdot)$ and $\mathrm{Log}(\cdot)$ denote the Rodrigues transformation between a rotation matrix and a rotation vector (https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula).
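As a concrete illustration of (1), the following Python sketch (our own illustrative helpers, not part of the released R2LIVE code) implements ⊞ and ⊟ for the compound manifold $SO(3)\times\mathbb{R}^n$ using the Rodrigues formula for Exp/Log.

```python
import numpy as np

def skew(r):
    """Skew-symmetric matrix [r]_x of a 3-vector r."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def so3_exp(r):
    """Exp: rotation vector -> rotation matrix (Rodrigues formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-9:
        return np.eye(3) + skew(r)          # first-order approximation near identity
    K = skew(r / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def so3_log(R):
    """Log: rotation matrix -> rotation vector."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    if theta < 1e-9:
        return 0.5 * v
    return theta / (2.0 * np.sin(theta)) * v

def boxplus(state, delta):
    """(R, a) ⊞ (r, b) = (R Exp(r), a + b); delta stacks [r, b]."""
    R, a = state
    r, b = delta[:3], delta[3:]
    return (R @ so3_exp(r), a + b)

def boxminus(state1, state2):
    """(R1, a1) ⊟ (R2, a2) = [Log(R2^T R1); a1 - a2]."""
    (R1, a1), (R2, a2) = state1, state2
    return np.concatenate([so3_log(R2.T @ R1), a1 - a2])
```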
B. Continuous-time kinematic model

In our current work, we assume that the time offsets among all three sensors (LiDAR, camera, and IMU) are pre-calibrated. Furthermore, we assume the extrinsic between LiDAR and IMU is known, as they are usually integrated and calibrated in the factory, but we estimate the camera-IMU extrinsic online. Moreover, a LiDAR typically scans points sequentially, so points in a LiDAR frame could be measured at different body poses. This motion is compensated by IMU back-propagation as shown in [5]; hence, points in a LiDAR frame are assumed to be measured at the same time. With these, the input data sequences of our system can be simplified into Fig. 3.
Fig. 3: Illustration of the input data sequences, where the frame rates of the IMU, camera, and LiDAR are 200 Hz, 20 Hz, and 10 Hz, respectively. The notation $i$ denotes the index of IMU data while $k$ denotes the index of LiDAR or camera measurements.

We assume that the IMU, LiDAR, and camera sensors are rigidly attached together, with the extrinsic between LiDAR and IMU (LiDAR frame w.r.t. IMU frame) denoted as ${}^I\mathbf{T}_L = ({}^I\mathbf{R}_L, {}^I\mathbf{p}_L)$ and the extrinsic between camera and IMU (camera frame w.r.t. IMU frame) as ${}^I\mathbf{T}_C = ({}^I\mathbf{R}_C, {}^I\mathbf{p}_C)$. For the sake of convenience, we take the IMU as the body frame, which leads to the following continuous kinematic model:
$${}^G\dot{\mathbf{p}}_I = {}^G\mathbf{v}_I,\quad {}^G\dot{\mathbf{v}}_I = {}^G\mathbf{R}_I(\mathbf{a}_m - \mathbf{b}_a - \mathbf{n}_a) + {}^G\mathbf{g},\quad {}^G\dot{\mathbf{R}}_I = {}^G\mathbf{R}_I[\boldsymbol{\omega}_m - \mathbf{b}_g - \mathbf{n}_g]_{\times},\quad \dot{\mathbf{b}}_g = \mathbf{n}_{bg},\quad \dot{\mathbf{b}}_a = \mathbf{n}_{ba} \tag{2}$$
where ${}^G(\cdot)$ denotes a vector represented in the global frame (i.e., the first gravity-aligned IMU frame [22]), ${}^G\mathbf{R}_I$ and ${}^G\mathbf{p}_I$ are the attitude and position of the IMU relative to the global frame, ${}^G\mathbf{g}$ is the gravitational acceleration, $\boldsymbol{\omega}_m$ and $\mathbf{a}_m$ are the raw gyroscope and accelerometer readings, $\mathbf{n}_g$ and $\mathbf{n}_a$ are the white noises of the IMU measurements, and $\mathbf{b}_g$ and $\mathbf{b}_a$ are the gyroscope and accelerometer biases, which are modelled as random walks driven by Gaussian noises $\mathbf{n}_{bg}$ and $\mathbf{n}_{ba}$.

C. Discrete IMU model
We discretize the continuous model (2) at the IMU rate. Let $\mathbf{x}_i$ be the state vector at the $i$-th IMU measurement:
$$\mathbf{x}_i = \begin{bmatrix}{}^G\mathbf{R}_{I_i}^T & {}^G\mathbf{p}_{I_i}^T & {}^I\mathbf{R}_{C_i}^T & {}^I\mathbf{p}_{C_i}^T & {}^G\mathbf{v}_i^T & \mathbf{b}_{g_i}^T & \mathbf{b}_{a_i}^T\end{bmatrix}^T$$
Discretizing (2) with a zero-order holder (i.e., the IMU measurements are constant over one sampling period $\Delta t$), we obtain
$$\mathbf{x}_{i+1} = \mathbf{x}_i \boxplus \left(\Delta t\, \mathbf{f}(\mathbf{x}_i, \mathbf{u}_i, \mathbf{w}_i)\right) \tag{3}$$
where $\mathbf{u}_i = \begin{bmatrix}\boldsymbol{\omega}_{m_i}^T & \mathbf{a}_{m_i}^T\end{bmatrix}^T$, $\mathbf{w}_i = \begin{bmatrix}\mathbf{n}_{g_i}^T & \mathbf{n}_{a_i}^T & \mathbf{n}_{bg_i}^T & \mathbf{n}_{ba_i}^T\end{bmatrix}^T$, and
$$\mathbf{f}(\mathbf{x}_i, \mathbf{u}_i, \mathbf{w}_i) = \begin{bmatrix}\boldsymbol{\omega}_{m_i} - \mathbf{b}_{g_i} - \mathbf{n}_{g_i}\\ {}^G\mathbf{v}_i\\ \mathbf{0}_{3\times 1}\\ \mathbf{0}_{3\times 1}\\ {}^G\mathbf{R}_{I_i}(\mathbf{a}_{m_i} - \mathbf{b}_{a_i} - \mathbf{n}_{a_i}) + {}^G\mathbf{g}\\ \mathbf{n}_{bg_i}\\ \mathbf{n}_{ba_i}\end{bmatrix}$$
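A minimal sketch of the zero-order-hold propagation (3) for the nominal (noise-free) state is given below; the dictionary-based state layout, the helper names, and the gravity vector $\mathbf{g}\approx[0,0,-9.81]^T$ in the global frame are our own assumptions, and `so3_exp` is the helper from the Section IV-A sketch.

```python
import numpy as np
# so3_exp() as defined in the Section IV-A sketch.

def f_nominal(x, omega_m, acc_m, g):
    """Noise-free f(x, u, 0) of (3); the biases follow a random walk, so their drift is zero."""
    return {
        "dr_I": omega_m - x["b_g"],                   # attitude rate (body frame)
        "dp_I": x["v"],                               # position rate
        "dr_C": np.zeros(3),                          # camera-IMU extrinsic: constant
        "dp_C": np.zeros(3),
        "dv":   x["R_I"] @ (acc_m - x["b_a"]) + g,    # velocity rate
    }

def propagate_nominal(x, omega_m, acc_m, dt, g=np.array([0.0, 0.0, -9.81])):
    """x_{i+1} = x_i ⊞ (dt * f(x_i, u_i, 0)), applied block-wise to the compound manifold."""
    f = f_nominal(x, omega_m, acc_m, g)
    return {
        "R_I": x["R_I"] @ so3_exp(dt * f["dr_I"]),
        "p_I": x["p_I"] + dt * f["dp_I"],
        "R_C": x["R_C"] @ so3_exp(dt * f["dr_C"]),
        "p_C": x["p_C"] + dt * f["dp_C"],
        "v":   x["v"] + dt * f["dv"],
        "b_g": x["b_g"],
        "b_a": x["b_a"],
    }
```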
D. Propagation

In our work, we leverage an on-manifold iterated error-state Kalman filter [24] to estimate the state vector $\mathbf{x}_i$, in which the state estimation error $\delta\hat{\mathbf{x}}_i$ is characterized in the tangent space of the state estimate $\hat{\mathbf{x}}_i$:
$$\delta\hat{\mathbf{x}}_i \triangleq \mathbf{x}_i \boxminus \hat{\mathbf{x}}_i = \begin{bmatrix}{}^G\delta\hat{\mathbf{r}}_{I_i}^T & {}^G\delta\hat{\mathbf{p}}_{I_i}^T & {}^I\delta\hat{\mathbf{r}}_{C_i}^T & {}^I\delta\hat{\mathbf{p}}_{C_i}^T & {}^G\delta\hat{\mathbf{v}}_i^T & \delta\hat{\mathbf{b}}_{g_i}^T & \delta\hat{\mathbf{b}}_{a_i}^T\end{bmatrix}^T \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_i}) \tag{4}$$
Fig. 4: Illustration of the update of our error-state iterated Kalman filter.
Note that $\delta\hat{\mathbf{x}}_i \in \mathbb{R}^{21}$ is of minimum dimension (the system dimension 21) and is a random vector with covariance $\boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_i}$. ${}^G\delta\hat{\mathbf{r}}_{I_i}$ and ${}^I\delta\hat{\mathbf{r}}_{C_i}$ are:
$${}^G\delta\hat{\mathbf{r}}_{I_i} = \mathrm{Log}\!\left({}^G\hat{\mathbf{R}}_{I_i}^T\, {}^G\mathbf{R}_{I_i}\right),\qquad {}^I\delta\hat{\mathbf{r}}_{C_i} = \mathrm{Log}\!\left({}^I\hat{\mathbf{R}}_{C_i}^T\, {}^I\mathbf{R}_{C_i}\right)$$
Once receiving a new IMU measurement, the state estimate is propagated by setting the process noise in (3) to zero:
$$\hat{\mathbf{x}}_{i+1} = \hat{\mathbf{x}}_i \boxplus \left(\Delta t\cdot \mathbf{f}(\hat{\mathbf{x}}_i, \mathbf{u}_i, \mathbf{0})\right). \tag{5}$$
The associated estimation error is propagated in the linearized error space as follows (see [24] for more details):
$$\delta\hat{\mathbf{x}}_{i+1} = \mathbf{x}_{i+1} \boxminus \hat{\mathbf{x}}_{i+1} = \left(\mathbf{x}_i \boxplus \left(\Delta t\cdot \mathbf{f}(\mathbf{x}_i, \mathbf{u}_i, \mathbf{w}_i)\right)\right) \boxminus \left(\hat{\mathbf{x}}_i \boxplus \left(\Delta t\cdot \mathbf{f}(\hat{\mathbf{x}}_i, \mathbf{u}_i, \mathbf{0})\right)\right) \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{i+1}}) \tag{6}$$
where:
$$\boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{i+1}} = \mathbf{F}_{\delta\hat{\mathbf{x}}}\boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_i}\mathbf{F}_{\delta\hat{\mathbf{x}}}^T + \mathbf{F}_{\mathbf{w}}\mathbf{Q}\mathbf{F}_{\mathbf{w}}^T \tag{7}$$
$$\mathbf{F}_{\delta\hat{\mathbf{x}}} = \left.\frac{\partial(\delta\hat{\mathbf{x}}_{i+1})}{\partial\delta\hat{\mathbf{x}}_i}\right|_{\delta\hat{\mathbf{x}}_i=\mathbf{0},\,\mathbf{w}_i=\mathbf{0}},\qquad \mathbf{F}_{\mathbf{w}} = \left.\frac{\partial(\delta\hat{\mathbf{x}}_{i+1})}{\partial\mathbf{w}_i}\right|_{\delta\hat{\mathbf{x}}_i=\mathbf{0},\,\mathbf{w}_i=\mathbf{0}}$$
with their exact values given in Appendix A. The two propagations in (5) and (7) start from the optimal state and covariance estimate obtained after fusing the most recent LiDAR/camera measurement (e.g., the $k$-th measurement, see Section IV-I), and repeat until receiving the next LiDAR/camera measurement (e.g., the $(k+1)$-th measurement). The relation between time indices $i$ and $k$ is shown in Fig. 3.
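The corresponding covariance propagation (7) can be sketched as below, filling $\mathbf{F}_{\delta\hat{\mathbf{x}}}$ and $\mathbf{F}_{\mathbf{w}}$ with the block structure of Appendix A; the 21-dimensional error-state ordering, the dictionary state layout, and the helper names are assumptions of this sketch (`skew` and `so3_exp` are the helpers from the Section IV-A sketch).

```python
import numpy as np
# skew() and so3_exp() as defined in the Section IV-A sketch.

def right_jacobian(r):
    """Right Jacobian J_r of SO(3) (Appendix A)."""
    th = np.linalg.norm(r)
    K = skew(r)
    if th < 1e-7:
        return np.eye(3) - 0.5 * K
    return (np.eye(3) - (1.0 - np.cos(th)) / th**2 * K
            + (th - np.sin(th)) / th**3 * K @ K)

def propagate_covariance(Sigma, x, omega_m, acc_m, dt, Q):
    """Sigma_{i+1} = F_dx Sigma F_dx^T + F_w Q F_w^T   (eq. (7)).
    Error-state ordering: [dr_I, dp_I, dr_C, dp_C, dv, db_g, db_a] (21x1).
    Q is the 12x12 covariance of w = [n_g, n_a, n_bg, n_ba]."""
    w_hat = omega_m - x["b_g"]
    a_hat = acc_m - x["b_a"]
    Jr_T = right_jacobian(w_hat * dt).T

    F = np.eye(21)
    F[0:3, 0:3] = so3_exp(-w_hat * dt)                 # dr_I  <- dr_I
    F[0:3, 15:18] = -Jr_T * dt                         # dr_I  <- db_g  (Appendix A block)
    F[3:6, 12:15] = np.eye(3) * dt                     # dp_I  <- dv
    F[12:15, 0:3] = -x["R_I"] @ skew(a_hat) * dt       # dv    <- dr_I
    F[12:15, 18:21] = -x["R_I"] * dt                   # dv    <- db_a

    Fw = np.zeros((21, 12))
    Fw[0:3, 0:3] = -Jr_T * dt                          # dr_I  <- n_g
    Fw[12:15, 3:6] = -x["R_I"] * dt                    # dv    <- n_a
    Fw[15:18, 6:9] = np.eye(3) * dt                    # db_g  <- n_bg
    Fw[18:21, 9:12] = np.eye(3) * dt                   # db_a  <- n_ba

    return F @ Sigma @ F.T + Fw @ Q @ Fw.T
```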
E. The prior distribution

Let the two propagations in (5) and (7) stop at the $(k+1)$-th LiDAR/camera measurement (see Fig. 4), with the propagated state estimate and covariance being $\hat{\mathbf{x}}_{k+1}$ and $\boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}$, respectively. They essentially impose a prior distribution on the state $\mathbf{x}_{k+1}$ before fusing the $(k+1)$-th measurement, as below:
$$\mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1} \sim \mathcal{N}\left(\mathbf{0}, \boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}\right). \tag{8}$$

F. Initialization of iterated update
The prior distribution in (8) will be fused with the LiDAR or camera measurements to produce a maximum a posteriori (MAP) estimate (denoted as $\check{\mathbf{x}}_{k+1}$) of $\mathbf{x}_{k+1}$. The MAP estimate $\check{\mathbf{x}}_{k+1}$ is initialized as the prior estimate $\hat{\mathbf{x}}_{k+1}$ and is refined iteratively due to the nonlinear nature of the problem. In each iteration, the error $\delta\check{\mathbf{x}}_{k+1}$ between the true state $\mathbf{x}_{k+1}$ and the current estimate $\check{\mathbf{x}}_{k+1}$, defined as
$$\delta\check{\mathbf{x}}_{k+1} \triangleq \mathbf{x}_{k+1}\boxminus\check{\mathbf{x}}_{k+1}, \tag{9}$$
will be solved by minimizing the posterior distribution considering the prior in (8) and the LiDAR/visual measurements. Therefore, the prior distribution in terms of $\mathbf{x}_{k+1}$ represented by (8) should be transformed into an equivalent prior distribution in terms of $\delta\check{\mathbf{x}}_{k+1}$:
$$\mathbf{x}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1} = \left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}\right)\boxminus\hat{\mathbf{x}}_{k+1} \approx \check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1} + \boldsymbol{\mathcal{H}}\,\delta\check{\mathbf{x}}_{k+1} \sim \mathcal{N}\left(\mathbf{0}, \boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}\right), \tag{10}$$
where
$$\boldsymbol{\mathcal{H}} = \left.\frac{\partial\left(\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}\right)\boxminus\hat{\mathbf{x}}_{k+1}\right)}{\partial\delta\check{\mathbf{x}}_{k+1}}\right|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}}$$
is computed in detail in Appendix B. Equation (10) essentially imposes a prior distribution on $\delta\check{\mathbf{x}}_{k+1}$ as below:
$$\delta\check{\mathbf{x}}_{k+1} \sim \mathcal{N}\left(-\boldsymbol{\mathcal{H}}^{-1}\left(\check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}\right),\; \boldsymbol{\mathcal{H}}^{-1}\boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}\boldsymbol{\mathcal{H}}^{-T}\right) \tag{11}$$
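Equation (11) amounts to a simple re-parameterization of the propagated Gaussian prior; a small sketch (with our own naming, `H_prior` denoting the Jacobian of (10)):

```python
import numpy as np

def prior_on_delta(x_check_boxminus_x_hat, H_prior, Sigma_prop):
    """Map the propagated prior N(0, Sigma_prop) on (x ⊟ x_hat) to a prior on
    the iteration variable delta_x_check, i.e. eq. (11):
        delta ~ N(-H^{-1}(x_check ⊟ x_hat),  H^{-1} Sigma_prop H^{-T})."""
    H_inv = np.linalg.inv(H_prior)
    mean = -H_inv @ x_check_boxminus_x_hat
    cov = H_inv @ Sigma_prop @ H_inv.T
    return mean, cov
```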
G. LiDAR measurement

If the $(k+1)$-th measurement is a LiDAR frame, we extract planar feature points from the raw 3D points as in [5] and compensate the in-frame motion as in Section IV-B. Denote $\mathcal{L}_{k+1}$ the set of feature points after motion compensation; we compute the residual of each feature point ${}^L\mathbf{p}_j \in \mathcal{L}_{k+1}$, where $j$ is the index of the feature point and the superscript $L$ denotes that the point is represented in the LiDAR reference frame.

With $\check{\mathbf{x}}_{k+1}$ being the current estimate of $\mathbf{x}_{k+1}$, we can transform ${}^L\mathbf{p}_j$ from the LiDAR frame to the global frame: ${}^G\mathbf{p}_j = {}^G\check{\mathbf{R}}_{I_{k+1}}\left({}^I\mathbf{R}_L{}^L\mathbf{p}_j + {}^I\mathbf{p}_L\right) + {}^G\check{\mathbf{p}}_{I_{k+1}}$. As in the previous LOAM pipelines [5, 9], we search for the nearest planar feature points in the map and use them to fit a plane with normal $\mathbf{u}_j$ and an in-plane point $\mathbf{q}_j$; the measurement residual is:
$$\mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_j\right) = \mathbf{u}_j^T\left({}^G\mathbf{p}_j - \mathbf{q}_j\right) \tag{12}$$
Let $\mathbf{n}_j$ be the measurement noise of the point ${}^L\mathbf{p}_j$; the true point location ${}^L\mathbf{p}_j^{gt}$ is obtained by compensating this noise:
$${}^L\mathbf{p}_j = {}^L\mathbf{p}_j^{gt} + \mathbf{n}_j,\qquad \mathbf{n}_j\sim\mathcal{N}\left(\mathbf{0},\boldsymbol{\Sigma}_{\mathbf{n}_j}\right). \tag{13}$$
This true point location together with the true state $\mathbf{x}_{k+1}$ should lead to zero residual in (12), i.e.,
$$\mathbf{0} = \mathbf{r}_l\left(\mathbf{x}_{k+1}, {}^L\mathbf{p}_j^{gt}\right) = \mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_j\right) + \mathbf{H}_j^l\,\delta\check{\mathbf{x}}_{k+1} + \boldsymbol{\alpha}_j, \tag{14}$$
which constitutes a posterior distribution for $\delta\check{\mathbf{x}}_{k+1}$. In (14), $\mathbf{x}_{k+1}$ is parameterized by its error $\delta\check{\mathbf{x}}_{k+1}$ defined in (9), $\boldsymbol{\alpha}_j\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}_{\boldsymbol{\alpha}_j})$, and:
$$\mathbf{H}_j^l = \left.\frac{\partial\mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_j\right)}{\partial\delta\check{\mathbf{x}}_{k+1}}\right|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}},\qquad \boldsymbol{\Sigma}_{\boldsymbol{\alpha}_j} = \mathbf{F}_{\mathbf{p}_j}\boldsymbol{\Sigma}_{\mathbf{n}_j}\mathbf{F}_{\mathbf{p}_j}^T \tag{15}$$
$$\mathbf{F}_{\mathbf{p}_j} = \frac{\partial\mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_j\right)}{\partial{}^L\mathbf{p}_j} = \mathbf{u}_j^T\,{}^G\check{\mathbf{R}}_{I_{k+1}}{}^I\mathbf{R}_L$$
The detailed computation of $\mathbf{H}_j^l$ can be found in Appendix C.
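A minimal sketch of the point-to-plane residual (12) and its Jacobian from Appendix C; the dictionary state layout and function names are ours, and `skew` is the helper from the Section IV-A sketch.

```python
import numpy as np
# skew() as defined in the Section IV-A sketch.

def lidar_plane_residual(x_check, p_L, u_j, q_j, R_IL, p_IL):
    """Point-to-plane residual (12) for one planar feature point.
    x_check: current estimate with 'R_I' (3x3) and 'p_I' (3,) in the global frame.
    p_L: feature point in the LiDAR frame.
    u_j: unit normal of the fitted plane; q_j: a point on that plane (global frame).
    R_IL, p_IL: LiDAR-to-IMU extrinsic (assumed known, Section IV-B)."""
    p_G = x_check["R_I"] @ (R_IL @ p_L + p_IL) + x_check["p_I"]   # LiDAR -> IMU -> global
    return float(u_j @ (p_G - q_j))

def lidar_plane_jacobian(x_check, p_L, u_j, R_IL, p_IL):
    """Jacobian H_l_j of (12) w.r.t. the 21-dim error state (Appendix C):
    only the attitude (cols 0:3) and position (cols 3:6) blocks are nonzero."""
    P_a = R_IL @ p_L + p_IL
    H = np.zeros(21)
    H[0:3] = -u_j @ x_check["R_I"] @ skew(P_a)   # w.r.t. attitude error
    H[3:6] = u_j                                 # w.r.t. position error
    return H
```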
H. Visual measurement

If the $(k+1)$-th frame is a camera image, we extract FAST corner feature points $\mathcal{C}_{k+1}$ from the undistorted image and use KLT optical flow to track the feature points in $\mathcal{C}_{k+1}$ seen by keyframes in the current sliding window (Section V). If a feature point in $\mathcal{C}_{k+1}$ is lost or has not yet been tracked before, we triangulate the new feature point in 3D space (a visual landmark) with the optimally estimated camera poses.

The reprojection errors between the visual landmarks and their tracked feature points in the $(k+1)$-th frame are used for updating the current state estimate $\check{\mathbf{x}}_{k+1}$. For an extracted corner point ${}^C\mathbf{p}_s = \begin{bmatrix}u_s & v_s\end{bmatrix}^T \in \mathcal{C}_{k+1}$, where $s$ is the index of the corner point, its corresponding landmark in 3D space is denoted as ${}^G\mathbf{P}_s$; the measurement residual of ${}^C\mathbf{p}_s$ is then:
$${}^C\mathbf{P}_s = \left({}^G\check{\mathbf{R}}_{I_{k+1}}{}^I\check{\mathbf{R}}_{C_{k+1}}\right)^T{}^G\mathbf{P}_s - \left({}^I\check{\mathbf{R}}_{C_{k+1}}\right)^T{}^G\check{\mathbf{p}}_{I_{k+1}} - {}^I\check{\mathbf{p}}_{C_{k+1}},\qquad \mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_s, {}^G\mathbf{P}_s\right) = {}^C\mathbf{p}_s - \boldsymbol{\pi}\left({}^C\mathbf{P}_s\right) \tag{16}$$
where $\boldsymbol{\pi}(\cdot)$ is the pinhole projection model.

Now considering the measurement noise, we have:
$${}^G\mathbf{P}_s = {}^G\mathbf{P}_s^{gt} + \mathbf{n}_{\mathbf{P}_s},\qquad \mathbf{n}_{\mathbf{P}_s}\sim\mathcal{N}\left(\mathbf{0},\boldsymbol{\Sigma}_{\mathbf{n}_{\mathbf{P}_s}}\right) \tag{17}$$
$${}^C\mathbf{p}_s = {}^C\mathbf{p}_s^{gt} + \mathbf{n}_{\mathbf{p}_s},\qquad \mathbf{n}_{\mathbf{p}_s}\sim\mathcal{N}\left(\mathbf{0},\boldsymbol{\Sigma}_{\mathbf{n}_{\mathbf{p}_s}}\right) \tag{18}$$
where ${}^G\mathbf{P}_s^{gt}$ and ${}^C\mathbf{p}_s^{gt}$ are the true values of ${}^G\mathbf{P}_s$ and ${}^C\mathbf{p}_s$, respectively. With these, we obtain the first-order Taylor expansion of the true zero residual $\mathbf{r}_c\left(\mathbf{x}_{k+1}, {}^C\mathbf{p}_s^{gt}, {}^G\mathbf{P}_s^{gt}\right)$ as:
$$\mathbf{0} = \mathbf{r}_c\left(\mathbf{x}_{k+1}, {}^C\mathbf{p}_s^{gt}, {}^G\mathbf{P}_s^{gt}\right) \approx \mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_s, {}^G\mathbf{P}_s\right) + \mathbf{H}_s^c\,\delta\check{\mathbf{x}}_{k+1} + \boldsymbol{\beta}_s, \tag{19}$$
which constitutes another posterior distribution for $\delta\check{\mathbf{x}}_{k+1}$. In (19), $\boldsymbol{\beta}_s\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}_{\boldsymbol{\beta}_s})$ and:
$$\mathbf{H}_s^c = \left.\frac{\partial\mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_s, {}^G\mathbf{P}_s\right)}{\partial\delta\check{\mathbf{x}}_{k+1}}\right|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}},\qquad \boldsymbol{\Sigma}_{\boldsymbol{\beta}_s} = \boldsymbol{\Sigma}_{\mathbf{n}_{\mathbf{p}_s}} + \mathbf{F}_{\mathbf{P}_s}\boldsymbol{\Sigma}_{\mathbf{n}_{\mathbf{P}_s}}\mathbf{F}_{\mathbf{P}_s}^T \tag{20}$$
$$\mathbf{F}_{\mathbf{P}_s} = \frac{\partial\mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_s, {}^G\mathbf{P}_s\right)}{\partial{}^G\mathbf{P}_s}$$
The detailed computation of $\mathbf{H}_s^c$ and $\mathbf{F}_{\mathbf{P}_s}$ is given in Appendix D.
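A minimal sketch of the reprojection residual (16) for one tracked corner; the intrinsics dictionary and state layout are our own assumptions, and the landmark is transformed into the camera frame assuming the extrinsic $({}^I\mathbf{R}_C, {}^I\mathbf{p}_C)$ maps camera coordinates into the IMU frame.

```python
import numpy as np

def reproj_residual(x_check, p_px, P_G, K):
    """Reprojection residual r_c = p - pi(P_C) of (16) for one tracked corner.
    x_check: dict with IMU pose 'R_I', 'p_I' and camera-IMU extrinsic 'R_C', 'p_C'.
    p_px: measured pixel [u, v]; P_G: triangulated landmark in the global frame.
    K: dict with pinhole intrinsics fx, fy, cx, cy (assumed calibrated)."""
    # Landmark in the IMU frame, then in the camera frame (convention stated above).
    P_I = x_check["R_I"].T @ (P_G - x_check["p_I"])
    P_C = x_check["R_C"].T @ (P_I - x_check["p_C"])
    proj = np.array([K["fx"] * P_C[0] / P_C[2] + K["cx"],
                     K["fy"] * P_C[1] / P_C[2] + K["cy"]])   # pinhole projection pi(.)
    return p_px - proj
```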
I. Update of the error-state iterated Kalman filter

Combining the prior distribution (11), the posterior distribution due to LiDAR measurements (14), and the posterior distribution due to visual measurements (19), we obtain the maximum a posteriori (MAP) estimate of $\delta\check{\mathbf{x}}_{k+1}$:
$$\min_{\delta\check{\mathbf{x}}_{k+1}}\left(\left\|\check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1} + \boldsymbol{\mathcal{H}}\,\delta\check{\mathbf{x}}_{k+1}\right\|^2_{\boldsymbol{\Sigma}^{-1}_{\delta\hat{\mathbf{x}}_{k+1}}} + \sum_{j=1}^{m_l}\left\|\mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_j\right) + \mathbf{H}_j^l\,\delta\check{\mathbf{x}}_{k+1}\right\|^2_{\boldsymbol{\Sigma}^{-1}_{\boldsymbol{\alpha}_j}} + \sum_{s=1}^{m_c}\left\|\mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_s, {}^G\mathbf{P}_s\right) + \mathbf{H}_s^c\,\delta\check{\mathbf{x}}_{k+1}\right\|^2_{\boldsymbol{\Sigma}^{-1}_{\boldsymbol{\beta}_s}}\right)$$
where $\|\mathbf{x}\|^2_{\boldsymbol{\Sigma}} = \mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x}$. Notice that the measurements of the LiDAR and camera may not appear at the same time instant (see Fig. 4); therefore $m_l$ (or $m_c$) could be zero in the above optimization. Denote
$$\mathbf{H}^T = \begin{bmatrix}\mathbf{H}_1^{l\,T}, \ldots, \mathbf{H}_{m_l}^{l\,T}, \mathbf{H}_1^{c\,T}, \ldots, \mathbf{H}_{m_c}^{c\,T}\end{bmatrix},\qquad \mathbf{R} = \mathrm{diag}\left(\boldsymbol{\Sigma}_{\boldsymbol{\alpha}_1}, \ldots, \boldsymbol{\Sigma}_{\boldsymbol{\alpha}_{m_l}}, \boldsymbol{\Sigma}_{\boldsymbol{\beta}_1}, \ldots, \boldsymbol{\Sigma}_{\boldsymbol{\beta}_{m_c}}\right)$$
$$\check{\mathbf{z}}_{k+1}^T = \begin{bmatrix}\mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_1\right), \ldots, \mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_{m_l}\right), \mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_1, {}^G\mathbf{P}_1\right), \ldots, \mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_{m_c}, {}^G\mathbf{P}_{m_c}\right)\end{bmatrix}$$
$$\mathbf{P} = \boldsymbol{\mathcal{H}}^{-1}\boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}}\boldsymbol{\mathcal{H}}^{-T} \tag{21}$$
Following [5], the Kalman gain is computed as:
$$\mathbf{K} = \left(\mathbf{H}^T\mathbf{R}^{-1}\mathbf{H} + \mathbf{P}^{-1}\right)^{-1}\mathbf{H}^T\mathbf{R}^{-1} \tag{22}$$
Then we update the state estimate as:
$$\check{\mathbf{x}}_{k+1} \leftarrow \check{\mathbf{x}}_{k+1}\boxplus\left(-\mathbf{K}\check{\mathbf{z}}_{k+1} - \left(\mathbf{I}-\mathbf{K}\mathbf{H}\right)\boldsymbol{\mathcal{H}}^{-1}\left(\check{\mathbf{x}}_{k+1}\boxminus\hat{\mathbf{x}}_{k+1}\right)\right)$$
The above process (Section IV-G to Section IV-I) is iterated until convergence (i.e., the update is smaller than a given threshold). The converged state estimate is then used to (1) project points in the new LiDAR frame to the world frame and append them to the existing point cloud map; (2) triangulate new visual landmarks of the current frame if it is a keyframe; and (3) serve as the starting point of the propagation in Section IV-D for the next cycle:
$$\hat{\mathbf{x}}_{k+1} = \check{\mathbf{x}}_{k+1},\qquad \boldsymbol{\Sigma}_{\delta\hat{\mathbf{x}}_{k+1}} = \left(\mathbf{I}-\mathbf{K}\mathbf{H}\right)\mathbf{P}$$
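One iteration of the update, (21)–(22), can be sketched as below; note that, following FAST-LIO [5], the matrix inverted for the gain has the dimension of the state (21) rather than of the measurements. Variable names are our own; `H_prior` is the Jacobian of (10) and `H_meas`, `R_meas`, `z_res` are the stacked quantities of (21).

```python
import numpy as np

def esikf_update_step(x_check_boxminus_x_hat, H_prior, Sigma_prop, H_meas, R_meas, z_res):
    """One Gauss-Newton style iteration of the error-state iterated Kalman update.
    Inputs:
      x_check_boxminus_x_hat : x_check ⊟ x_hat                (21,)
      H_prior                : Jacobian of (10)               (21, 21)
      Sigma_prop             : propagated covariance (7)      (21, 21)
      H_meas, R_meas, z_res  : stacked Jacobian, covariance, and residuals of (21)
    Returns the error-state increment to apply with ⊞ and the posterior covariance."""
    H_prior_inv = np.linalg.inv(H_prior)
    P = H_prior_inv @ Sigma_prop @ H_prior_inv.T        # prior covariance of delta, eq. (21)
    Rinv_H = np.linalg.solve(R_meas, H_meas)            # R^{-1} H
    # K = (H^T R^{-1} H + P^{-1})^{-1} H^T R^{-1}, eq. (22): only 21x21 matrices are inverted.
    K = np.linalg.solve(H_meas.T @ Rinv_H + np.linalg.inv(P), Rinv_H.T)
    I = np.eye(P.shape[0])
    dx = -K @ z_res - (I - K @ H_meas) @ H_prior_inv @ x_check_boxminus_x_hat
    Sigma_post = (I - K @ H_meas) @ P                   # used once the iteration has converged
    return dx, Sigma_post
```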
V. FACTOR GRAPH OPTIMIZATION
As mentioned in Section IV-I, untracked visual features in a newly added keyframe are triangulated to create new visual landmarks. This triangulation is usually of low precision due to keyframe pose estimation errors. To further improve the quality of the visual landmarks and keyframe poses, and to simultaneously calibrate the time offset between the camera and the LiDAR-IMU subsystem, we leverage a factor graph optimization that optimizes the camera poses and the visual landmarks within a sliding window of image keyframes.
Fig. 5: Our factor graph optimization.
Our factor graph optimization is similar to VINS-Mono [22], but further incorporates pose constraints due to LiDAR measurements, as shown in Fig. 5. Constraints due to IMU preintegration are also included to connect the LiDAR factors with the camera factors. To keep the back-end optimization lightweight, the LiDAR poses in the pose graph are fixed and the raw LiDAR point measurements are not involved in the pose graph optimization.
[Fig. 6 photo labels: LiDAR, camera, D-GPS RTK base station, D-GPS RTK mobile station, battery, on-board PC.]
Fig. 6: Our handheld device for data sampling: (a) shows our minimum system; (b) a D-GPS RTK system is used to evaluate the accuracy.
VI. EXPERIMENTS AND RESULTS
A. Our device for data sampling
Our handheld device for data sampling is shown in Fig. 6(a). It includes a power supply unit, the onboard DJI Manifold-2c computation platform (equipped with an Intel i7-8550U CPU and 8 GB RAM), a global-shutter camera, and a Livox AVIA LiDAR. The FoV of the camera is …° × …° while the FoV of the LiDAR is …° × …°. For quantitatively evaluating the precision of our algorithm (the experiment in Section VI-E), we install a differential-GPS (D-GPS) real-time kinematic (RTK) system on our device, as shown in Fig. 6(b).
Fig. 7: We evaluate the robustness of our algorithm under scenarios with aggressive motion (sequences in (a)–(d)) and sensor failure by intentionally blocking the camera (e) and the LiDAR sensor (f).
Fig. 8: Our estimated pose and the raw gyroscope readings in Experiment-1. The shaded areas in blue, yellow, and red represent the different phases of aggressive motion, camera failure, and LiDAR failure, respectively.
Fig. 9: We evaluate our algorithm in a Hong Kong MTR station consisting of a cluttered lobby and very long, narrow tunnels, as shown in (a). The tunnel is up to … meters long and is filled with moving pedestrians, making it extremely challenging for both LiDAR-based and camera-based SLAM methods. (b): the map built by our system is well aligned with the street map of the MTR station. (c): trajectory comparison among our system "R2LIVE", the LiDAR-inertial system "FAST-LIO", and the visual-inertial system "VINS-Mono". The starting and ending points of each trajectory are marked in the figure; "VINS-Mono" stopped midway due to the failure of feature tracking.

B. Experiment-1: Robustness evaluation with aggressive motion and sensor failure
In this experiment, we evaluate the robustness of our algorithm under scenarios with aggressive motion (see Fig. 7(a–d)), in which the maximum angular speed reaches up to …°/s (see the gyroscope readings in Fig. 8). In addition, we also simulate drastic sensor failures by intentionally blocking the camera (see Fig. 7(e)) and the LiDAR (see Fig. 7(f)).

The results of our estimated attitude and position are shown in Fig. 8, where we shade the different testing phases with different colors. As shown in Fig. 8, our estimated trajectory can tightly track the actual motion even under severe rotation and translation. Even when the camera or LiDAR provides no measurements, the estimated trajectory is still very smooth and exhibits no noticeable degradation. These results show that our algorithm is robust enough to endure aggressive motion and even the failure of sensors. We refer readers to the accompanying video on XXX showing details of the experiment in practice.
C. Experiment-2: Robustness evaluation in narrow tunnel-like environments with moving objects

In this experiment, we challenge one of the most difficult scenarios in the scope of camera-based and LiDAR-based SLAM, an MTR station (the HKU station, https://en.wikipedia.org/wiki/HKU_station). It is a typical narrow tunnel-like environment (see the left figure of Fig. 9(a)), with the maximum length reaching … meters. Moreover, there are many moving pedestrians frequently appearing in the sensor views. All these factors make SLAM extremely challenging: 1) the long tunnel structure significantly reduces the geometric features for LiDAR-based SLAM, especially when walking along the tunnel or turning the LiDAR around at one end of the tunnel; 2) the highly repetitive visual features (the pattern on the floor) cause errors in matching and tracking the visual features in visual SLAM; 3) the many moving pedestrians can cause outliers among the already few point and visual features.

Despite these challenges, our system is robust enough to survive in this scene. In Fig. 9(b), we align our point cloud data with the HKU station street map and find that they match tightly. This demonstrates that our localization is of high robustness and accuracy. Moreover, we plot our estimated trajectory together with those of VINS-Mono and FAST-LIO in Fig. 9(c), where we can see that our method achieves the best overall performance in this experiment.
Fig. 10: The upper figure plots the different trajectories in Experiment-4, while the lower figure shows the raw gyroscope readings in the experiment.
Fig. 11: The relative pose error in Experiment-4. For the sequence of … meters, the median pose errors of "VINS-Mono" (https://github.com/HKUST-Aerial-Robotics/VINS-Mono), "FAST-LIO" (https://github.com/hku-mars/FAST_LIO), and "R2LIVE" are (…°, …%), (…°, …%), and (…°, …%), respectively.
Fig. 12: The reconstructed 3D maps in Experiment-3 are shown in (d), and detailed point clouds with the corresponding panorama images are shown in (a) and (b). (c) shows that our algorithm can close the loop by itself (returning to the starting point) without any additional processing (e.g., loop closure). In (e), we merge our map with the satellite image to further examine the accuracy of our system.
D. Experiment-3: High-precision map building in large-scale indoor and outdoor urban environments

In this experiment, we show that our proposed method is accurate enough to reconstruct a dense, high-precision, large-scale indoor-outdoor map of urban environments. We collect data of the HKU main building, both exterior and interior. The real-time reconstructed 3D map is shown in Fig. 12, in which the clear and high-quality point cloud demonstrates the high accuracy of our proposed method. It is worth mentioning that, without any additional processing (i.e., loop closure), our algorithm can still close the loop by itself (see Fig. 12(c)) after traversing … meters, which demonstrates that our proposed method has extremely low drift. Finally, we merge our map with the satellite image and find that they match tightly (see Fig. 12(e)).
E. Experiment-4: Quantitative evaluation of precision using D-GPS RTK

In this experiment, we quantitatively evaluate the precision of VINS-Mono (IMU+camera), FAST-LIO (IMU+LiDAR), and our algorithm by comparing their estimated trajectories with the ground-truth trajectory (see the upper figure of Fig. 10) obtained from a differential-GPS (D-GPS) real-time kinematic (RTK) system. The data has a maximum angular velocity reaching …°/s (see the gyroscope readings in Fig. 10). We calculate the translational and rotational errors for all possible subsequences of length (50, ..., 300) meters, with the relative pose errors (RPE) of these methods shown in Fig. 11.

F. Run time analysis
The average time consumption in Experiments 1–4 is listed in Table I, which shows that R2LIVE achieves real-time performance on both the desktop PC and the embedded computation platform. Notice that the factor graph optimization runs in a separate thread and is therefore allowed to run at a lower rate.

VII. CONCLUSION
In this letter, we propose a robust, real-time LiDAR-inertial-visual fusion framework based on a high-rate filter-based odometry and a factor graph optimization. We fuse the measurements of the LiDAR, inertial, and camera sensors within an error-state iterated Kalman filter and use the factor graph optimization to refine a local map consisting of a sliding window of image keyframes and visual landmarks. Our system was tested in large-scale, indoor-outdoor, and narrow tunnel-like environments, and under challenging sensor failures and aggressive motion. In all the tests, our method achieves a high level of accuracy and robustness in localization and mapping.

TABLE I: The average running time of R2LIVE in Experiments 1–4 on a desktop PC (Intel i7-9700K CPU, 32 GB RAM) and the on-board computer (Intel i7-8550U CPU, 8 GB RAM). The items "LI-Odom", "VI-Odom", and "FG-OPM" are the average time consumption of the LiDAR-IMU filter-based odometry, the visual-inertial filter-based odometry, and the factor graph optimization, respectively.

PC / on-board   Exp-1 (ms)      Exp-2 (ms)      Exp-3 (ms)      Exp-4 (ms)
LI-Odom         8.81 / 15.98    11.54 / 25.30   14.91 / 30.64   10.92 / 24.37
VI-Odom         7.84 / 13.92    8.84 / 19.10    9.57 / 19.56    8.99 / 20.16
FG-OPM          26.10 / 45.35   30.20 / 65.25   29.25 / 58.08   27.98 / 60.43
REFERENCES

[1] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt, et al., "Towards fully autonomous driving: Systems and algorithms." IEEE, 2011, pp. 163–168.
[2] A. Bry, A. Bachrach, and N. Roy, "State estimation for aggressive flight in GPS-denied environments using onboard sensing." IEEE, 2012, pp. 1–8.
[3] F. Gao, W. Wu, W. Gao, and S. Shen, "Flying on point clouds: Online trajectory generation and autonomous navigation for quadrotors in cluttered environments," Journal of Field Robotics, vol. 36, no. 4, pp. 710–733, 2019.
[4] Z. Liu, F. Zhang, and X. Hong, "Low-cost retina-like robotic lidars based on incommensurable scanning," IEEE Transactions on Mechatronics, 2021, in press. [Online]. Available: https://arxiv.org/pdf/2006.11034.pdf
[5] W. Xu and F. Zhang, "FAST-LIO: A fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter," IEEE Robotics and Automation Letters, 2021, in press. [Online]. Available: https://arxiv.org/pdf/2010.08196.pdf
[6] J. Lin, X. Liu, and F. Zhang, "A decentralized framework for simultaneous calibration, localization and mapping with multiple LiDARs." IEEE, 2020, pp. 4870–4877.
[7] Z. Liu and F. Zhang, "BALM: Bundle adjustment for lidar mapping," IEEE Robotics and Automation Letters, 2021, in press. [Online]. Available: https://arxiv.org/pdf/2010.08215.pdf
[8] X. Liu and F. Zhang, "Extrinsic calibration of multiple lidars of small FoV in targetless environments," IEEE Robotics and Automation Letters, 2021, in press.
[9] J. Lin and F. Zhang, "Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV." IEEE, 2020, pp. 3126–3131.
[10] J. Zhang and S. Singh, "LOAM: Lidar odometry and mapping in real-time," in Robotics: Science and Systems, vol. 2, no. 9, 2014.
[11] K.-L. Low, "Linear least-squares optimization for point-to-plane ICP surface registration," Chapel Hill, University of North Carolina, vol. 4, no. 10, pp. 1–3, 2004.
[12] T. Shan and B. Englot, "LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain." IEEE, 2018, pp. 4758–4765.
[13] J. Lin and F. Zhang, "A fast, complete, point cloud based loop closure for lidar odometry and mapping," arXiv preprint arXiv:1909.11811, 2019.
[14] H. Ye, Y. Chen, and M. Liu, "Tightly coupled 3D lidar inertial odometry and mapping." IEEE, 2019.
[15] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping," arXiv preprint arXiv:2007.00258, 2020.
[16] K. Li, M. Li, and U. D. Hanebeck, "Towards high-performance solid-state-lidar-inertial odometry and mapping," arXiv preprint arXiv:2010.13150, 2020.
[17] C. Qin, H. Ye, C. E. Pranata, J. Han, S. Zhang, and M. Liu, "LINS: A lidar-inertial state estimator for robust and efficient navigation." IEEE, 2020, pp. 8899–8906.
[18] Y. Zhu, C. Zheng, C. Yuan, X. Huang, and X. Hong, "CamVox: A low-cost and accurate lidar-assisted visual SLAM system," arXiv preprint arXiv:2011.11357, 2020.
[19] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
[20] X. Zuo, P. Geneva, W. Lee, Y. Liu, and G. Huang, "LIC-Fusion: Lidar-inertial-camera odometry." IEEE, 2019, pp. 5848–5854.
[21] X. Zuo, Y. Yang, J. Lv, Y. Liu, G. Huang, and M. Pollefeys, "LIC-Fusion 2.0: Lidar-inertial-camera odometry with sliding-window plane-feature tracking," in IROS 2020, 2020.
[22] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
[23] C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, "Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds," Information Fusion, vol. 14, no. 1, pp. 57–77, 2013.
[24] D. He, W. Xu, and F. Zhang, "Embedding manifold structures into Kalman filters," arXiv preprint arXiv:2102.03804, 2021.
[25] J. Solà, "Quaternion kinematics for the error-state Kalman filter," arXiv preprint arXiv:1711.02508, 2017.
[26] T. D. Barfoot, State Estimation for Robotics. Cambridge University Press, 2017.

APPENDIX
A. Computation of $\mathbf{F}_{\delta\hat{\mathbf{x}}}$ and $\mathbf{F}_{\mathbf{w}}$

$$\mathbf{F}_{\delta\hat{\mathbf{x}}} = \left.\frac{\partial(\delta\hat{\mathbf{x}}_{i+1})}{\partial\delta\hat{\mathbf{x}}_i}\right|_{\delta\hat{\mathbf{x}}_i=\mathbf{0},\,\mathbf{w}_i=\mathbf{0}} = \begin{bmatrix} \mathrm{Exp}(-\hat{\boldsymbol{\omega}}_i\Delta t) & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathbf{J}_r(\hat{\boldsymbol{\omega}}_i\Delta t)^T\Delta t & \mathbf{0}\\ \mathbf{0} & \mathbf{I} & \mathbf{0} & \mathbf{0} & \mathbf{I}\Delta t & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{I} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I} & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ -{}^G\hat{\mathbf{R}}_{I_i}[\hat{\mathbf{a}}_i]_\times\Delta t & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I} & \mathbf{0} & -{}^G\hat{\mathbf{R}}_{I_i}\Delta t\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I}\end{bmatrix}$$

$$\mathbf{F}_{\mathbf{w}} = \left.\frac{\partial(\delta\hat{\mathbf{x}}_{i+1})}{\partial\mathbf{w}_i}\right|_{\delta\hat{\mathbf{x}}_i=\mathbf{0},\,\mathbf{w}_i=\mathbf{0}} = \begin{bmatrix} -\mathbf{J}_r(\hat{\boldsymbol{\omega}}_i\Delta t)^T\Delta t & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & -{}^G\hat{\mathbf{R}}_{I_i}\Delta t & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{I}\Delta t & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I}\Delta t\end{bmatrix}$$

where $\hat{\boldsymbol{\omega}}_i = \boldsymbol{\omega}_{m_i} - \mathbf{b}_{g_i}$, $\hat{\mathbf{a}}_i = \mathbf{a}_{m_i} - \mathbf{b}_{a_i}$, and $\mathbf{J}_r(\cdot)$ is the right Jacobian matrix of $SO(3)$:
$$\mathbf{J}_r(\mathbf{r}) = \mathbf{I} - \frac{1-\cos\|\mathbf{r}\|}{\|\mathbf{r}\|^2}[\mathbf{r}]_\times + \frac{\|\mathbf{r}\|-\sin\|\mathbf{r}\|}{\|\mathbf{r}\|^3}[\mathbf{r}]_\times^2,\qquad \mathbf{r}\in\mathbb{R}^3$$
For the detailed derivation of $\mathbf{F}_{\delta\hat{\mathbf{x}}}$ and $\mathbf{F}_{\mathbf{w}}$, please refer to Section B of our supplementary material.

B. The computation of $\boldsymbol{\mathcal{H}}$

$$\boldsymbol{\mathcal{H}} = \left.\frac{\partial\left((\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1})\boxminus\hat{\mathbf{x}}_{k+1}\right)}{\partial\delta\check{\mathbf{x}}_{k+1}}\right|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}} = \begin{bmatrix}\mathbf{A} & \mathbf{0} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{I}_{3\times 3} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{B} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \mathbf{I}_{12\times 12}\end{bmatrix}$$
where the $3\times 3$ matrices are $\mathbf{A} = \mathbf{J}_r^{-1}\!\left(\mathrm{Log}\!\left({}^G\hat{\mathbf{R}}_{I_{k+1}}^T\,{}^G\check{\mathbf{R}}_{I_{k+1}}\right)\right)$ and $\mathbf{B} = \mathbf{J}_r^{-1}\!\left(\mathrm{Log}\!\left({}^I\hat{\mathbf{R}}_{C_{k+1}}^T\,{}^I\check{\mathbf{R}}_{C_{k+1}}\right)\right)$, and $\mathbf{J}_r^{-1}(\cdot)$ is the inverse right Jacobian matrix of $SO(3)$:
$$\mathbf{J}_r^{-1}(\mathbf{r}) = \mathbf{I} + \frac{1}{2}[\mathbf{r}]_\times + \left(\frac{1}{\|\mathbf{r}\|^2} - \frac{1+\cos\|\mathbf{r}\|}{2\|\mathbf{r}\|\sin\|\mathbf{r}\|}\right)[\mathbf{r}]_\times^2 \tag{23}$$

C. The computation of $\mathbf{H}_j^l$

$$\mathbf{H}_j^l = \mathbf{u}_j^T\begin{bmatrix}-{}^G\check{\mathbf{R}}_{I_{k+1}}[\mathbf{P}_a]_\times & \mathbf{I} & \mathbf{0}_{3\times 15}\end{bmatrix}$$
where $\mathbf{P}_a = {}^I\mathbf{R}_L{}^L\mathbf{p}_j + {}^I\mathbf{p}_L$. For the detailed derivation of $\mathbf{H}_j^l$, please refer to Section D of our supplementary material.

D. The computation of $\mathbf{H}_s^c$ and $\mathbf{F}_{\mathbf{P}_s}$

$$\mathbf{H}_s^c = -\mathbf{F}_A\cdot\mathbf{F}_B,\qquad \mathbf{F}_{\mathbf{P}_s} = -\mathbf{F}_A\cdot\mathbf{F}_C$$
with:
$$\mathbf{F}_A = \frac{1}{{}^C P_{s_z}}\begin{bmatrix} f_x & 0 & -f_x\dfrac{{}^C P_{s_x}}{{}^C P_{s_z}}\\ 0 & f_y & -f_y\dfrac{{}^C P_{s_y}}{{}^C P_{s_z}}\end{bmatrix} \tag{24}$$
$$\mathbf{F}_B = \begin{bmatrix}\mathbf{M}_A & \mathbf{M}_B & \mathbf{M}_C & -\mathbf{I} & \mathbf{0}_{3\times 9}\end{bmatrix},\qquad \mathbf{F}_C = \left({}^G\check{\mathbf{R}}_{I_{k+1}}{}^I\check{\mathbf{R}}_C\right)^T \tag{25}$$
where $f_x$ and $f_y$ are the focal lengths, $c_x$ and $c_y$ are the principal point offsets in the image plane, and the $3\times 3$ matrices $\mathbf{M}_A$, $\mathbf{M}_B$, and $\mathbf{M}_C$ are:
$$\mathbf{M}_A = \left({}^I\check{\mathbf{R}}_C\right)^T\left[\left({}^G\check{\mathbf{R}}_{I_{k+1}}\right)^T{}^G\mathbf{P}_s\right]_\times,\quad \mathbf{M}_B = -\left({}^I\check{\mathbf{R}}_C\right)^T,\quad \mathbf{M}_C = \left[\left({}^G\check{\mathbf{R}}_{I_{k+1}}{}^I\check{\mathbf{R}}_C\right)^T{}^G\mathbf{P}_s\right]_\times - \left[\left({}^I\check{\mathbf{R}}_C\right)^T{}^G\check{\mathbf{p}}_{I_{k+1}}\right]_\times$$
For the detailed derivation of $\mathbf{H}_s^c$ and $\mathbf{F}_{\mathbf{P}_s}$, please refer to Section E of our supplementary material.
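A small numeric sanity check of the right Jacobian and its inverse (23), and of the perturbation identity used in the supplementary material below; this is an illustrative sketch (our own helper names), reusing `skew`, `so3_exp`, and `right_jacobian` from the earlier sketches.

```python
import numpy as np
# skew(), so3_exp(), and right_jacobian() as defined in the earlier sketches.

def inv_right_jacobian(r):
    """Inverse right Jacobian of SO(3), eq. (23)."""
    th = np.linalg.norm(r)
    K = skew(r)
    if th < 1e-7:
        return np.eye(3) + 0.5 * K
    return (np.eye(3) + 0.5 * K
            + (1.0 / th**2 - (1.0 + np.cos(th)) / (2.0 * th * np.sin(th))) * K @ K)

# Quick numerical checks of the identities used in the derivations:
rng = np.random.default_rng(0)
r, d = rng.normal(size=3), 1e-6 * rng.normal(size=3)
assert np.allclose(right_jacobian(r) @ inv_right_jacobian(r), np.eye(3), atol=1e-9)
# Exp(r + J_r^{-1}(r) d) ≈ Exp(r) Exp(d)   (Section A of the supplementary material)
assert np.allclose(so3_exp(r + inv_right_jacobian(r) @ d), so3_exp(r) @ so3_exp(d), atol=1e-9)
```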
Supplementary Material: R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping

A. Perturbation on $SO(3)$

In this appendix, we will use the following approximations of a perturbation $\delta\mathbf{r}\rightarrow\mathbf{0}$ on $SO(3)$ [25, 26]:
$$\mathrm{Exp}(\mathbf{r}+\delta\mathbf{r}) \approx \mathrm{Exp}(\mathbf{r})\,\mathrm{Exp}(\mathbf{J}_r(\mathbf{r})\,\delta\mathbf{r})$$
$$\mathrm{Exp}(\mathbf{r})\,\mathrm{Exp}(\delta\mathbf{r}) \approx \mathrm{Exp}\!\left(\mathbf{r}+\mathbf{J}_r^{-1}(\mathbf{r})\,\delta\mathbf{r}\right)$$
$$\mathbf{R}\cdot\mathrm{Exp}(\delta\mathbf{r})\cdot\mathbf{u} \approx \mathbf{R}\left(\mathbf{I}+[\delta\mathbf{r}]_\times\right)\mathbf{u} = \mathbf{R}\mathbf{u} - \mathbf{R}[\mathbf{u}]_\times\delta\mathbf{r}$$
where $\mathbf{u}\in\mathbb{R}^3$ and $[\cdot]_\times$ denotes the skew-symmetric matrix of a vector; $\mathbf{J}_r(\mathbf{r})$ and $\mathbf{J}_r^{-1}(\mathbf{r})$ are the right Jacobian and the inverse right Jacobian of $SO(3)$, respectively:
$$\mathbf{J}_r(\mathbf{r}) = \mathbf{I} - \frac{1-\cos\|\mathbf{r}\|}{\|\mathbf{r}\|^2}[\mathbf{r}]_\times + \frac{\|\mathbf{r}\|-\sin\|\mathbf{r}\|}{\|\mathbf{r}\|^3}[\mathbf{r}]_\times^2,\qquad \mathbf{J}_r^{-1}(\mathbf{r}) = \mathbf{I} + \frac{1}{2}[\mathbf{r}]_\times + \left(\frac{1}{\|\mathbf{r}\|^2} - \frac{1+\cos\|\mathbf{r}\|}{2\|\mathbf{r}\|\sin\|\mathbf{r}\|}\right)[\mathbf{r}]_\times^2$$

B. Computation of $\mathbf{F}_{\delta\hat{\mathbf{x}}}$ and $\mathbf{F}_{\mathbf{w}}$

Combining (4) and (6), we have $\delta\hat{\mathbf{x}}_{i+1} = \mathbf{x}_{i+1}\boxminus\hat{\mathbf{x}}_{i+1} = \left(\mathbf{x}_i\boxplus(\Delta t\cdot\mathbf{f}(\mathbf{x}_i,\mathbf{u}_i,\mathbf{w}_i))\right)\boxminus\left(\hat{\mathbf{x}}_i\boxplus(\Delta t\cdot\mathbf{f}(\hat{\mathbf{x}}_i,\mathbf{u}_i,\mathbf{0}))\right)$, with
$$\hat{\boldsymbol{\omega}}_i = \boldsymbol{\omega}_{m_i}-\mathbf{b}_{g_i},\qquad \boldsymbol{\omega}_i = \hat{\boldsymbol{\omega}}_i-\delta\mathbf{b}_{g_i}-\mathbf{n}_{g_i} \tag{S1}$$
$$\hat{\mathbf{a}}_i = \mathbf{a}_{m_i}-\mathbf{b}_{a_i},\qquad \mathbf{a}_i = \hat{\mathbf{a}}_i-\delta\mathbf{b}_{a_i}-\mathbf{n}_{a_i} \tag{S2}$$
For the attitude block, using the approximations of Section A:
$$\mathrm{Log}\!\left(\left({}^G\hat{\mathbf{R}}_{I_i}\mathrm{Exp}(\hat{\boldsymbol{\omega}}_i\Delta t)\right)^T\!\left({}^G\hat{\mathbf{R}}_{I_i}\mathrm{Exp}({}^G\delta\mathbf{r}_{I_i})\mathrm{Exp}(\boldsymbol{\omega}_i\Delta t)\right)\right) = \mathrm{Log}\!\left(\mathrm{Exp}(\hat{\boldsymbol{\omega}}_i\Delta t)^T\,\mathrm{Exp}({}^G\delta\mathbf{r}_{I_i})\,\mathrm{Exp}(\boldsymbol{\omega}_i\Delta t)\right)$$
$$\approx \mathrm{Log}\!\left(\mathrm{Exp}(\hat{\boldsymbol{\omega}}_i\Delta t)^T\,\mathrm{Exp}({}^G\delta\mathbf{r}_{I_i})\,\mathrm{Exp}(\hat{\boldsymbol{\omega}}_i\Delta t)\,\mathrm{Exp}\!\left(-\mathbf{J}_r(\hat{\boldsymbol{\omega}}_i\Delta t)(\delta\mathbf{b}_{g_i}+\mathbf{n}_{g_i})\Delta t\right)\right)$$
$$\approx \mathrm{Exp}(-\hat{\boldsymbol{\omega}}_i\Delta t)\,{}^G\delta\mathbf{r}_{I_i} - \mathbf{J}_r(\hat{\boldsymbol{\omega}}_i\Delta t)^T\delta\mathbf{b}_{g_i}\Delta t - \mathbf{J}_r(\hat{\boldsymbol{\omega}}_i\Delta t)^T\mathbf{n}_{g_i}\Delta t$$
For the velocity block:
$$\left({}^G\mathbf{R}_{I_i}\mathrm{Exp}({}^G\delta\mathbf{r}_{I_i})\right)\mathbf{a}_i\Delta t \approx \left({}^G\mathbf{R}_{I_i}\left(\mathbf{I}+[{}^G\delta\mathbf{r}_{I_i}]_\times\right)\right)\left(\hat{\mathbf{a}}_i-\delta\mathbf{b}_{a_i}-\mathbf{n}_{a_i}\right)\Delta t \approx {}^G\mathbf{R}_{I_i}\hat{\mathbf{a}}_i\Delta t - {}^G\mathbf{R}_{I_i}\delta\mathbf{b}_{a_i}\Delta t - {}^G\mathbf{R}_{I_i}\mathbf{n}_{a_i}\Delta t - {}^G\mathbf{R}_{I_i}[\hat{\mathbf{a}}_i]_\times{}^G\delta\mathbf{r}_{I_i}\Delta t$$
Collecting the first-order terms of these expressions (the extrinsic and bias blocks are trivial) yields the matrices $\mathbf{F}_{\delta\hat{\mathbf{x}}}$ and $\mathbf{F}_{\mathbf{w}}$ given in Appendix A.

C. The computation of $\boldsymbol{\mathcal{H}}$

Recalling (10), $\boldsymbol{\mathcal{H}} = \left.\partial\left((\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1})\boxminus\hat{\mathbf{x}}_{k+1}\right)/\partial\delta\check{\mathbf{x}}_{k+1}\right|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}}$ is the block-diagonal matrix given in Appendix B, with the $3\times 3$ blocks $\mathbf{A} = \mathbf{J}_r^{-1}\!\left(\mathrm{Log}\!\left({}^G\hat{\mathbf{R}}_{I_{k+1}}^T\,{}^G\check{\mathbf{R}}_{I_{k+1}}\right)\right)$ and $\mathbf{B} = \mathbf{J}_r^{-1}\!\left(\mathrm{Log}\!\left({}^I\hat{\mathbf{R}}_{C_{k+1}}^T\,{}^I\check{\mathbf{R}}_{C_{k+1}}\right)\right)$ acting on the two attitude error components and identity blocks elsewhere.

D. The computation of $\mathbf{H}_j^l$

Recalling (12) and (15), we have:
$$\mathbf{r}_l\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}, {}^L\mathbf{p}_j\right) = \mathbf{u}_j^T\left({}^G\check{\mathbf{p}}_{I_{k+1}}+{}^G\delta\check{\mathbf{p}}_{I_{k+1}}-\mathbf{q}_j+{}^G\check{\mathbf{R}}_{I_{k+1}}\mathrm{Exp}({}^G\delta\check{\mathbf{r}}_{I_{k+1}})\left({}^I\mathbf{R}_L{}^L\mathbf{p}_j+{}^I\mathbf{p}_L\right)\right) \tag{S3}$$
With the small-perturbation approximation, we get:
$${}^G\check{\mathbf{R}}_{I_{k+1}}\mathrm{Exp}({}^G\delta\check{\mathbf{r}}_{I_{k+1}})\mathbf{P}_a \approx {}^G\check{\mathbf{R}}_{I_{k+1}}\mathbf{P}_a - {}^G\check{\mathbf{R}}_{I_{k+1}}[\mathbf{P}_a]_\times{}^G\delta\check{\mathbf{r}}_{I_{k+1}} \tag{S4}$$
where $\mathbf{P}_a = {}^I\mathbf{R}_L{}^L\mathbf{p}_j+{}^I\mathbf{p}_L$. Combining (S3) and (S4), we obtain:
$$\mathbf{H}_j^l = \mathbf{u}_j^T\begin{bmatrix}-{}^G\check{\mathbf{R}}_{I_{k+1}}[\mathbf{P}_a]_\times & \mathbf{I} & \mathbf{0}_{3\times 15}\end{bmatrix}$$

E. The computation of $\mathbf{H}_s^c$ and $\mathbf{F}_{\mathbf{P}_s}$

Recalling (16), we have ${}^C\mathbf{P}_s = \mathbf{P}_C(\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s) = \begin{bmatrix}{}^C P_{s_x} & {}^C P_{s_y} & {}^C P_{s_z}\end{bmatrix}^T$, where
$$\mathbf{P}_C\left(\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s\right) = \left({}^G\check{\mathbf{R}}_{I_{k+1}}{}^I\check{\mathbf{R}}_{C_{k+1}}\right)^T{}^G\mathbf{P}_s - \left({}^I\check{\mathbf{R}}_{C_{k+1}}\right)^T{}^G\check{\mathbf{p}}_{I_{k+1}} - {}^I\check{\mathbf{p}}_{C_{k+1}} \tag{S5}$$
and the residual and projection model are
$$\mathbf{r}_c\left(\check{\mathbf{x}}_{k+1}, {}^C\mathbf{p}_s, {}^G\mathbf{P}_s\right) = {}^C\mathbf{p}_s - \boldsymbol{\pi}\left({}^C\mathbf{P}_s\right),\qquad \boldsymbol{\pi}\left({}^C\mathbf{P}_s\right) = \begin{bmatrix}f_x\dfrac{{}^C P_{s_x}}{{}^C P_{s_z}}+c_x & f_y\dfrac{{}^C P_{s_y}}{{}^C P_{s_z}}+c_y\end{bmatrix}^T \tag{S7}$$
where $f_x$ and $f_y$ are the focal lengths and $c_x$ and $c_y$ are the principal point offsets in the image plane. For convenience, we omit the evaluation point $(\cdot)|_{\delta\check{\mathbf{x}}_{k+1}=\mathbf{0}}$ in the following derivation, and we have:
$$\mathbf{H}_s^c = -\frac{\partial\boldsymbol{\pi}({}^C\mathbf{P}_s)}{\partial{}^C\mathbf{P}_s}\cdot\frac{\partial\mathbf{P}_C\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s\right)}{\partial\delta\check{\mathbf{x}}_{k+1}} \tag{S8}$$
$$\mathbf{F}_{\mathbf{P}_s} = -\frac{\partial\boldsymbol{\pi}({}^C\mathbf{P}_s)}{\partial{}^C\mathbf{P}_s}\cdot\frac{\partial\mathbf{P}_C\left(\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s\right)}{\partial{}^G\mathbf{P}_s} \tag{S9}$$
where:
$$\frac{\partial\boldsymbol{\pi}({}^C\mathbf{P}_s)}{\partial{}^C\mathbf{P}_s} = \frac{1}{{}^C P_{s_z}}\begin{bmatrix}f_x & 0 & -f_x\dfrac{{}^C P_{s_x}}{{}^C P_{s_z}}\\ 0 & f_y & -f_y\dfrac{{}^C P_{s_y}}{{}^C P_{s_z}}\end{bmatrix} \tag{S10}$$
$$\frac{\partial\mathbf{P}_C\left(\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s\right)}{\partial{}^G\mathbf{P}_s} = \left({}^G\check{\mathbf{R}}_{I_{k+1}}{}^I\check{\mathbf{R}}_C\right)^T \tag{S11}$$
According to Section A, we have the following approximation of $\mathbf{P}_C(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s)$:
$$\mathbf{P}_C\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s\right) = \left({}^G\check{\mathbf{R}}_{I_{k+1}}\mathrm{Exp}({}^G\delta\check{\mathbf{r}}_{I_{k+1}})\,{}^I\check{\mathbf{R}}_{C_{k+1}}\mathrm{Exp}({}^I\delta\check{\mathbf{r}}_{C_{k+1}})\right)^T{}^G\mathbf{P}_s - {}^I\check{\mathbf{p}}_C - {}^I\delta\check{\mathbf{p}}_C - \left({}^I\check{\mathbf{R}}_C\mathrm{Exp}({}^I\delta\check{\mathbf{r}}_C)\right)^T\left({}^G\check{\mathbf{p}}_{I_{k+1}}+{}^G\delta\check{\mathbf{p}}_{I_{k+1}}\right)$$
$$\approx \mathbf{P}_C\left(\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s\right) + \left[\left({}^G\check{\mathbf{R}}_{I_{k+1}}{}^I\check{\mathbf{R}}_C\right)^T{}^G\mathbf{P}_s\right]_\times{}^I\delta\check{\mathbf{r}}_C + \left({}^I\check{\mathbf{R}}_C\right)^T\left[\left({}^G\check{\mathbf{R}}_{I_{k+1}}\right)^T{}^G\mathbf{P}_s\right]_\times{}^G\delta\check{\mathbf{r}}_{I_{k+1}} - \left({}^I\check{\mathbf{R}}_C\right)^T{}^G\delta\check{\mathbf{p}}_{I_{k+1}} - \left[\left({}^I\check{\mathbf{R}}_C\right)^T{}^G\check{\mathbf{p}}_{I_{k+1}}\right]_\times{}^I\delta\check{\mathbf{r}}_C - {}^I\delta\check{\mathbf{p}}_C$$
With this, we can derive:
$$\frac{\partial\mathbf{P}_C\left(\check{\mathbf{x}}_{k+1}\boxplus\delta\check{\mathbf{x}}_{k+1}, {}^G\mathbf{P}_s\right)}{\partial\delta\check{\mathbf{x}}_{k+1}} = \begin{bmatrix}\mathbf{M}_A & \mathbf{M}_B & \mathbf{M}_C & -\mathbf{I} & \mathbf{0}_{3\times 9}\end{bmatrix} \tag{S12}$$
with $\mathbf{M}_A$, $\mathbf{M}_B$, and $\mathbf{M}_C$ as given in Appendix D. Substituting (S10), (S11), and (S12) into (S8) and (S9), we finish the computation of $\mathbf{H}_s^c$ and $\mathbf{F}_{\mathbf{P}_s}$.