Embedding Manifold Structures into Kalman Filters
Dongjiao He, Wei Xu, Fu Zhang

Abstract—The error-state Kalman filter is an elegant and effective filtering technique for robotic systems operating on manifolds. However, to implement an error-state Kalman filter for a particular robotic system, one usually needs to derive each step of the filter by switching between the original and the error state, a tedious and error-prone process. To avoid such repetitive derivations, this paper proposes a generic symbolic representation for error-state Kalman filters on manifolds. Utilizing the ⊞\⊟ operations and further defining a ⊕ operation on the respective manifold, we propose a canonical representation of the robotic system. Such a canonical form enables us to separate the manifold structures from the system descriptions in each step of the Kalman filter, ultimately leading to a generic, symbolic, and manifold-embedding Kalman filter framework. A major advantage of the proposed framework is that users need only cast the system model into the canonical form, without going through the cumbersome hand-derivation of the on-manifold Kalman filter. This is particularly useful when the robotic system is of high dimension (e.g., augmented internal states, multiple extrinsic parameters, swarms). Furthermore, the manifold-embedding Kalman filter is implemented as a toolkit in C++ packages, with which a user needs only to define the system and call the respective filter steps (e.g., propagation, update) according to events (e.g., reception of an input or a measurement). The existing implementation supports full iterated Kalman filtering for systems on the manifold S = R^m × SO(3) × ··· × SO(3) × S² × ··· × S², or any of its sub-manifolds, and is extendable to other types of manifolds when necessary.
The proposed symbolic Kalman filter and the developed toolkit are verified by implementing a tightly-coupled lidar-inertial navigation system. Results show that the developed toolkit leads to superior filtering performance and computation efficiency comparable to hand-engineered counterparts. Finally, the toolkit is open-sourced at https://github.com/hku-mars/IKFoM to help practitioners quickly deploy an on-manifold Kalman filter.

I. INTRODUCTION
A popular robotic filtering technique is the error-state extended Kalman filter (ESEKF), used in attitude estimation [1]–[3], online extrinsic calibration [4, 5], GPS/IMU navigation [6], visual-inertial navigation [7]–[12] and lidar-inertial navigation [13]–[15]. The basic idea of the ESEKF is to repeatedly parameterize the state trajectory x_τ ∈ S by an error-state trajectory δx_{τ|k} ∈ R^n around the current state prediction x_{τ|k}: x_τ = x_{τ|k} ⊞ δx_{τ|k}. A standard extended Kalman filter is then performed on the error-state trajectory δx_{τ|k} to update the error state, and the updated error state is added back to the original state on the manifold. Since this error is small, a minimal parameterization (e.g., rotation axis and angle) can be employed without concern for singularities (see Fig. 1). In addition, compared to other techniques such as the unscented Kalman filter (UKF), the extended Kalman filter is more efficient. With this accuracy, stability and efficiency, the ESEKF provides an elegant Kalman filter framework for nonlinear robotic systems.

All authors are with the Department of Mechanical Engineering, University of Hong Kong. {hdj65822, xuweii, fuzhang}@hku.hk

Fig. 1. Illustration of the error-state trajectory when S is a group with operation ·. R^n is the tangent space of S at the identity. δx_{τ|k} is a minimal parameterization of the error state x̃_{τ|k} = x_{τ|k}^{-1} · x_τ ∈ S.

Despite these advantages, deploying an ESEKF for a particular robotic system is usually more difficult than a standard EKF. Due to the lack of a canonical representation of systems on manifolds, existing ESEKFs are designed case by case, and usually require a user to fully understand the underlying principle (e.g., switching between the original state and the error state) and to manually derive each step (e.g., propagation, update, reset) from scratch for a customized system.
Although this may seem like a mere book-keeping issue, in practice it tends to be particularly cumbersome and error-prone, especially for systems of high dimension, such as robotic swarms and systems with augmented internal states [16] or multiple extrinsic parameters [17]. Besides the system dimension, the difficulty of hand-derivation also escalates rapidly when the error state is coupled with iteration (e.g., the iterated error-state Kalman filter), which has recently found more applications in visual-inertial [11] and lidar-inertial navigation [14, 15] to mitigate the linearization error in extended Kalman filters [18, 19].

In this paper, we address the above issues by embedding the manifold structures into the Kalman filter framework. Specifically, our contributions are as follows: 1) We propose a canonical and generic representation of robotic systems in discrete time, i.e., x_{k+1} = x_k ⊕ (Δt f(x_k, w_k)); 2) Based on the canonical system representation, we show that the manifold-specific structures are well separated from the system-specific descriptions in each step of a Kalman filter, enabling us to embed the manifold structures into the Kalman filter. We further derive a fully iterated, symbolic, error-state Kalman filter, termed IKFoM, on the canonical system representation; 3) We embed the manifold structures into the derived iterated Kalman filter and develop an open-source C++ package. Its main advantage is hiding all the Kalman filter derivations and manifold-specific operations, leaving the user to supply system-specific descriptions only and to call the respective filter steps (e.g., propagation, update) at run time; 4) We verify our formulation and implementation with a tightly-coupled lidar-inertial navigation system on various real-world datasets.

II. RELATED WORK
The Kalman filter and its variants are very effective techniques for robot state estimation. However, the Kalman filter operates on a Euclidean state space R^n, while robotic systems usually have their states on manifolds (e.g., the rotation group SO(3)). To overcome this discrepancy, one could use a parameterization of the state with minimal dimension [20]. This minimal parameterization unfortunately has singularities. For example, the Euler angle representation of SO(3) has singularities at ±90° rotations about the second rotation axis, and the axis-angle representation has singularities at 180° rotations [21]. Workarounds for these singularities exist: they either avoid the affected parts of the state space, as done in the Apollo Lunar Module [22], or switch between alternative orderings of the parameterization, each of which exhibits singularities in different areas of the state space.

Another approach to overcome the singularity is representing the system state using redundant parameters (i.e., over-parameterization). For example, a unit quaternion is often used to represent rotations on SO(3). Yet, the over-parameterization shifts the problem from the system representation to the filtering algorithm: viewing the over-parameterized state as a normal vector in Euclidean space and applying the Kalman filter (or its variants) will make the propagated state no longer lie on the manifold (i.e., the unit-quaternion constraint qᵀq = 1 is violated). One ad-hoc way to ensure the propagated state stays on the manifold is normalization. Since the normalization imposes a constraint on the state, the propagated covariance should be adjusted in parallel. For example, a unit quaternion qᵀq = 1 leads to an error satisfying qᵀδq = 0, which means that the error is zero along the direction q and the corresponding covariance should be adjusted to zero [23] too. The adjusted covariance propagation is therefore singular.
Although the Kalman filter still works with this singular covariance as long as the innovation covariance remains positive definite, it is unknown whether this phenomenon causes further problems; e.g., the zero-uncertainty direction could create overconfidence in other directions after a nonlinear update [24]. An alternative way to interpret the normalization is viewing 1 as a measurement of qᵀq, so that one more nonlinear measurement h(q) = qᵀq is added to the system. The augmented measurement then updates the covariance in the Kalman filter framework. This approach is somewhat equivalent to the first (viewing 1 as a measurement of qᵀq is equivalent, to first order, to viewing 0 as a measurement of qᵀδq) and hence suffers from the same problem.

A more elegant approach is transforming the original system that operates on a manifold to its equivalent error space (i.e., tangent space), defined by the difference between the ground-truth state and its most recent prediction. Since this error is small when the Kalman filter converges, it can be safely parameterized by a minimal set of parameters (e.g., axis-angle) without encountering singularities. A standard EKF is then used to update the minimally parameterized error state, which is finally added back to the original state on the manifold. Such an indirect way of updating the state estimate goes by different names, such as "error-state" EKF (ESEKF) [6], indirect EKF [2], or multiplicative EKF [1]. The ESEKF provides an elegant way to incorporate filtering techniques into systems on manifolds, and has been widely used in a variety of robotic applications [1]–[10, 12, 13]. To better describe the relation between the original state on the manifold and the error state, the ⊞\⊟ operations were introduced in [25] and widely adopted by unscented Kalman filters [24, 26] and, more recently, iterated Kalman filters [11, 14, 15].
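To make the over-parameterization issue concrete, the following minimal sketch (plain Python rather than the paper's C++ toolkit; all numbers are illustrative) takes one Euler step of the quaternion kinematics q̇ = ½ q ⊗ (0, ω), treated as ordinary vector arithmetic, and shows that the result already leaves the unit sphere:

```python
import math

def quat_mul(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return [pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw]

q = [1.0, 0.0, 0.0, 0.0]     # identity attitude
omega = [0.0, 0.0, 1.0]      # angular velocity (rad/s), illustrative
dt = 0.1

# Euler step of qdot = 0.5 * q (x) (0, omega), as plain vector arithmetic
qdot = [0.5 * c for c in quat_mul(q, [0.0] + omega)]
q_next = [q[i] + dt * qdot[i] for i in range(4)]

norm = math.sqrt(sum(c * c for c in q_next))
print(norm)  # larger than 1: the propagated state is no longer a unit quaternion
```

Renormalizing `q_next` restores the constraint, but, as discussed above, then requires the parallel (and singular) covariance adjustment.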
The ⊞\⊟ operations have also been widely used in manifold-based optimization [27, 28], such as calibration [29], graph-SLAM [30] and parameter identification [31].

This paper focuses on deriving a generic and symbolic Kalman filter framework for robotic systems naturally operating on manifolds. We propose a canonical representation of robotic systems, based on which a fully iterated and symbolic Kalman filter framework is derived. For the well-studied special orthogonal group SO(3), our work eventually leads to nearly the same Kalman filter as in [1]–[10, 12, 13] for a specific system (up to the discretization accuracy), but unifies all of them into one canonical form. Moreover, our work provides a general way to incorporate new manifold structures that are less studied, such as the 2-sphere S² for modeling the bearing vector of a visual landmark [11].

The rest of the paper is organized as follows: Section III introduces the ⊞\⊟ and ⊕ operations. Section IV presents the canonical representation of robotic systems, based on which Section V derives a fully iterated and symbolic Kalman filter. Section VI implements the symbolic error-state iterated Kalman filter as a C++ package. Experiment results are presented in Section VII. Finally, Section VIII concludes this paper.

III. OPERATIONS ON MANIFOLDS

A. ⊞\⊟ and ⊕ operations

Let S be an n-manifold on which the robot operates; then S is locally homeomorphic to R^n, its tangent space. Let the local map at x ∈ S be denoted φ_x : S → R^n, with inverse map φ_x^{-1}. Further assume that the map is centered at x (i.e., φ_x(x) = 0). Following [25], we establish a bijective map from a local neighborhood in S to R^n via two operators ⊞_S ("boxplus") and ⊟_S ("boxminus"):

⊞ : S × R^n → S,  x ⊞_S u = φ_x^{-1}(u)
⊟ : S × S → R^n,  y ⊟_S x = φ_x(y)    (1)

Fig. 2. Illustration of the ⊞_S operation on manifold S.
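As a concrete instance of (1), the sketch below (plain Python, rotation matrices as nested lists; Exp/Log via Rodrigues' formula) realizes the ⊞/⊟ pair for SO(3) — the Lie-group specialization given in (2) below — and numerically checks the bijectivity property (x ⊞ u) ⊟ x = u:

```python
import math

def hat(u):  # skew-symmetric matrix of u in R^3
    return [[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_T(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def Exp(u):  # Rodrigues' formula for exp(hat(u))
    t = math.sqrt(sum(c * c for c in u))
    K = hat(u)
    K2 = mat_mul(K, K)
    a = math.sin(t) / t if t > 1e-10 else 1.0
    b = (1.0 - math.cos(t)) / t ** 2 if t > 1e-10 else 0.5
    return [[(1.0 if i == j else 0.0) + a * K[i][j] + b * K2[i][j]
             for j in range(3)] for i in range(3)]

def Log(R):  # inverse of Exp, valid for rotation angles below pi
    c = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    t = math.acos(c)
    if t < 1e-10:
        return [0.0, 0.0, 0.0]
    s = 2.0 * math.sin(t)
    return [t * (R[2][1] - R[1][2]) / s,
            t * (R[0][2] - R[2][0]) / s,
            t * (R[1][0] - R[0][1]) / s]

def boxplus(x, u):   # x ⊞ u = x · Exp(u)
    return mat_mul(x, Exp(u))

def boxminus(y, x):  # y ⊟ x = Log(x^{-1} · y)
    return Log(mat_mul(mat_T(x), y))

x = Exp([0.3, -0.2, 0.5])   # an arbitrary rotation (illustrative)
u = [0.1, 0.2, -0.05]       # a small perturbation in R^3
y = boxplus(x, u)
print(boxminus(y, x))       # recovers u
```

The same two functions are all a user-level description of an SO(3) sub-state needs; everything else in the filter is expressed through them.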
It can be shown that x ⊞_S (y ⊟_S x) = y and (x ⊞_S u) ⊟_S x = u for all x, y ∈ S and u ∈ R^n. The physical interpretation of y = x ⊞_S u is adding a small perturbation u ∈ R^n to x ∈ S, as illustrated in Fig. 2. Conversely, u = y ⊟_S x determines the perturbation u which yields y ∈ S when ⊞_S-added to x. These two operators create a local, vectorized view of the globally more complex structure of the manifold.

In particular, when S is a Lie group (e.g., R^n, SO(3), SE(3)), the local map reduces to

x ⊞_S u = x · Exp(u),  y ⊟_S x = Log(x^{-1} · y)    (2)

where · is the binary operation on S such that (S, ·) forms a Lie group, Exp(·) is the exponential map [32], and x^{-1} is the inverse of x, which always exists for an element of a Lie group.

In addition to ⊞/⊟, we define a binary operation ⊕_S : S × R^l → S that drives the state in S according to an input in R^l. In particular, when S is a Lie group (e.g., R^n, SO(3), SE(3)), which is naturally driven by its Lie algebra through the exponential map, the binary operation ⊕ reduces to ⊞:

x ⊕_S v = x ⊞_S v = x · Exp(v)  (i.e., l = n)    (3)

For notational simplicity, in the following discussion we drop the subscript S of the operations ⊞, ⊟ and ⊕ when no ambiguity exists.

B. Differentiations
In the Kalman filter derived later in Section V, the partial derivatives of (((x ⊞ u) ⊕ v) ⊟ y) with respect to u and v will be used, where x, y ∈ S, u ∈ R^n and v ∈ R^l. They can be obtained from the chain rule as follows:

∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u = ∂φ_y(z)/∂z |_{z=(x⊞u)⊕v} · ∂(z ⊕ v)/∂z |_{z=x⊞u} · ∂φ_x^{-1}(z)/∂z |_{z=u}
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v = ∂φ_y(z)/∂z |_{z=(x⊞u)⊕v} · ∂((x ⊞ u) ⊕ z)/∂z |_{z=v}    (4)

For certain manifolds (e.g., SO(3)), it is often more convenient to compute the derivatives ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u and ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v directly instead of using the above chain rule.

C. Important manifolds in practice

Example 1:
Euclidean space S = R^n:

x ⊞ u = x + u
y ⊟ x = y − x
x ⊕ v = x + v
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u = I_{n×n}
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v = I_{n×n}    (5)

Example 2:
Special orthogonal group S = SO(3):

x ⊞ u = x · Exp(u)
y ⊟ x = Log(x^{-1} · y)
x ⊕ v = x · Exp(v)
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u = A(((x ⊞ u) ⊕ v) ⊟ y)^{-T} Exp(−v) A(u)^T
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v = A(((x ⊞ u) ⊕ v) ⊟ y)^{-T} A(v)^T    (6)

where

Exp(u) = exp(⌊u⌋)
A(u) = I + ((1 − cos‖u‖)/‖u‖) (⌊u⌋/‖u‖) + (1 − sin‖u‖/‖u‖) (⌊u⌋²/‖u‖²)
A(u)^{-1} = I − (1/2)⌊u⌋ + (1 − α(‖u‖)) (⌊u⌋²/‖u‖²),  α(m) = (m/2) cot(m/2)    (7)

The derivation of the above differentiation is shown in Lemma 1 in Appendix A, and the notation ⌊u⌋ denotes the skew-symmetric matrix mapping the cross product of u ∈ R³.

Example 3:
Special Euclidean group S = SE(3):

x ⊞ u = x · Exp(u),  y ⊟ x = Log(x^{-1} · y)    (8)

where u = [ρᵀ θᵀ]ᵀ ∈ R⁶ and Exp(u) = [exp(⌊θ⌋) ρ; 0 1]. One difficulty with SE(3) is that its Jacobian has no closed form, as shown in [33]; hence SE(3) is better avoided by viewing it as the compound manifold S = SO(3) × R³.

Example 4: S²(r) ≜ {x ∈ R³ | ‖x‖ = r, r > 0}. The 2-sphere manifold is usually used to describe vectors of fixed length r, such as the gravity vector with known magnitude and the bearing vector of a visual feature [11]. Referring to Fig. 3, one way to define x ⊞ u is rotating x about a vector u ∈ R² in the tangent plane; the result then remains on S²(r) as required. Assume b₁, b₂ are two orthonormal basis vectors of the tangent plane; recalling the definition of Exp(·) in (7), we have

x ⊞ u ≜ Exp([b₁ b₂] u) · x    (9)

In many practical robotic systems (see Section IV), the state on S²(r) usually represents a direction that may undergo certain angular motion. Hence, a suitable choice for the binary operation ⊕ is a rotation by an angle-axis vector v ∈ R³:

⊕ : S²(r) × R³ → S²(r),  x ⊕ v = Exp(v) x    (10)

Fig. 3. Illustration of the ⊞ operation on the S² space. Notice that b₁ and b₂ depend on x; denote B(x) = [b₁ b₂] ∈ R^{3×2}.
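The basis-dependent ⊞/⊟ pair on S²(r) can be sketched as follows (plain Python; B(x) is built by the canonical-axis rotation given below in (13), Exp is Rodrigues' formula, and the radius and vectors are illustrative). The check confirms that ⊞ keeps the point on the sphere and that ⊟ inverts it:

```python
import math

def hat(u):
    return [[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_vec(A, u):
    return [sum(A[i][k] * u[k] for k in range(3)) for i in range(3)]

def Exp(u):  # Rodrigues' formula
    t = math.sqrt(sum(c * c for c in u))
    K = hat(u)
    K2 = mat_mul(K, K)
    a = math.sin(t) / t if t > 1e-10 else 1.0
    b = (1.0 - math.cos(t)) / t ** 2 if t > 1e-10 else 0.5
    return [[(1.0 if i == j else 0.0) + a * K[i][j] + b * K2[i][j]
             for j in range(3)] for i in range(3)]

def B(x):  # 3x2 orthonormal tangent basis at x, built as in (13)
    i = max(range(3), key=lambda j: x[j])   # canonical axis e_i closest to x
    e = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    w = mat_vec(hat(e[i]), x)               # rotation axis e_i × x
    n = math.sqrt(sum(c * c for c in w))
    R = Exp([c / n * math.atan2(n, x[i]) for c in w]) if n > 1e-10 else e
    j, k = (i + 1) % 3, (i + 2) % 3
    return [[R[r][j], R[r][k]] for r in range(3)]   # columns R e_j, R e_k

def boxplus(x, u):   # x ⊞ u = Exp(B(x) u) · x
    Bx = B(x)
    return mat_vec(Exp([Bx[r][0] * u[0] + Bx[r][1] * u[1] for r in range(3)]), x)

def boxminus(y, x):  # y ⊟ x = B(x)^T (θ ⌊x⌋y / ‖⌊x⌋y‖)
    w = mat_vec(hat(x), y)
    n = math.sqrt(sum(c * c for c in w))
    th = math.atan2(n, sum(x[r] * y[r] for r in range(3)))
    Bx = B(x)
    return [sum(Bx[r][c] * th * w[r] / n for r in range(3)) for c in range(2)]

r = 2.0
x = [0.0, 0.0, r]            # a point on S^2(r), illustrative
u = [0.3, -0.2]              # a 2-D tangent perturbation
y = boxplus(x, u)
print(math.sqrt(sum(c * c for c in y)))  # stays on the sphere: equals r
print(boxminus(y, x))                    # recovers u
```

Note the perturbation lives in R², the true dimension of the manifold, even though the stored vector is in R³.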
Then

x ⊞ u = Exp(B(x) u) · x
y ⊟ x = B(x)ᵀ (θ ⌊x⌋y / ‖⌊x⌋y‖),  θ = atan2(‖⌊x⌋y‖, xᵀy)
x ⊕ v = Exp(v) x
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u = N((x ⊞ u) ⊕ v, y) Exp(v) M(x, u)
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v = −N((x ⊞ u) ⊕ v, y) Exp(v) ⌊x ⊞ u⌋ A(v)ᵀ    (11)

where N(x, y) and M(x, u) are defined as

N(x, y) = ∂(x ⊟ y)/∂x = B(y)ᵀ ( (θ/‖⌊y⌋x‖) ⌊y⌋ + ⌊y⌋x · P(x, y) )
M(x, u) = ∂(x ⊞ u)/∂u = −Exp(B(x) u) ⌊x⌋ A(B(x) u)ᵀ B(x)
P(x, y) = (1/r⁴) ( ( −(yᵀx)/‖⌊y⌋x‖² + r⁴θ/‖⌊y⌋x‖³ ) xᵀ⌊y⌋² − yᵀ )    (12)

where A(·) is defined in (7). Note that N(y, y) = (1/r²) B(y)ᵀ ⌊y⌋ for all y ∈ S².

The above results do not specify the basis B(x), which can be made arbitrary as long as it forms an orthonormal basis of the tangent plane at x. For example, we could adopt the method in [34] (see Fig. 4): rotate one of the three canonical basis vectors e_i, i = 1, 2, 3, to x (along the geodesic), and the remaining two e_i after the rotation form B(x). To avoid the singularity of the rotation when x = −r e_i, the index i is instantaneously chosen such that e_i has the largest distance to −x, i.e.,

i = argmax_j xᵀe_j,
R_i(x) = Exp( (⌊e_i⌋x / ‖⌊e_i⌋x‖) atan2(‖⌊e_i⌋x‖, e_iᵀx) ),
B(x) = R_i(x) [e_j e_k]    (13)

where j = i + 1, k = i + 2, wrapped below 3.

D.
⊞\⊟ and ⊕ operations for compound manifolds

By the Cartesian product of manifolds, the ⊞\⊟ and ⊕ operations on a compound manifold of two (and, by induction, arbitrarily many) sub-manifolds are defined as

[x₁; x₂] ⊞ [u₁; u₂] = [x₁ ⊞ u₁; x₂ ⊞ u₂],  [x₁; x₂] ⊕ [v₁; v₂] = [x₁ ⊕ v₁; x₂ ⊕ v₂]    (14)

Fig. 4. Method adopted in [34] to obtain the orthonormal basis of the tangent plane on the S² space.

As proved in Lemma 2 in Appendix B, the partial differentiation on the compound manifold is block-diagonal:

∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u = diag( ∂(((x₁ ⊞ u₁) ⊕ v₁) ⊟ y₁)/∂u₁, ∂(((x₂ ⊞ u₂) ⊕ v₂) ⊟ y₂)/∂u₂ )
∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v = diag( ∂(((x₁ ⊞ u₁) ⊕ v₁) ⊟ y₁)/∂v₁, ∂(((x₂ ⊞ u₂) ⊕ v₂) ⊟ y₂)/∂v₂ )    (15)

The ⊞\⊟ and ⊕ operations and their derivatives on a compound manifold are extremely useful, enabling us to define the ⊞\⊟ and ⊕ operations and their derivatives for primitive manifolds (e.g., SO(3), R^n, S²(r)) only, and then extend these definitions to more complicated compound manifolds.

IV. CANONICAL REPRESENTATION
Consider a robotic system in discrete time with sampling period Δt; we can cast it into the following canonical form by zero-order-hold discretization:

x_{k+1} = x_k ⊕_S (Δt f(x_k, u_k, w_k)),  x_k ∈ S,
z_k = h(x_k, v_k),  z_k ∈ M,
w_k ∼ N(0, Q_k),  v_k ∼ N(0, R_k)    (16)

where the measurement z_k is assumed to lie on a manifold M of dimension m. This is the case, for example, in loosely-coupled visual-inertial or lidar-inertial odometry, where the measurement is a pose, an element of SE(3). Compared to higher-order discretization methods (e.g., Runge-Kutta integration) used in prior work [8, 12], the zero-order-hold discretization is usually less accurate; however, the difference is negligible when the sampling period is small.

In the following, we show how to cast different state components into the canonical form (16). With the composition property (14), the complete state equation is then obtained by concatenating all components.

Example 1:
Vectors in Euclidean space (e.g., position and velocity). Assume x ∈ R^n subject to ẋ = f(x, u, w). Under zero-order-hold discretization, f(x, u, w) is assumed constant during the sampling period Δt; hence

x_{k+1} = x_k + Δt f(x_k, u_k, w_k) = x_k ⊕_{R^n} (Δt f(x_k, u_k, w_k))    (17)

Example 2:
Attitude kinematics in a global reference frame (e.g., the earth frame). Let x ∈ SO(3) be the body attitude relative to the global frame and ᴳω be the global angular velocity, held constant over one sampling period Δt. Then

ẋ = ⌊ᴳω⌋ · x  ⟹  x_{k+1} = Exp(Δt ᴳω_k) · x_k = x_k · Exp(Δt (x_kᵀ · ᴳω_k)) = x_k ⊕_{SO(3)} (Δt f(x_k, ᴳω_k)),
f(x_k, ᴳω_k) = x_kᵀ · ᴳω_k    (18)

Example 3:
Attitude kinematics in the body frame. Let x ∈ SO(3) be the body attitude relative to the global frame and ᴮω be the body angular velocity, held constant over one sampling period Δt. Then

ẋ = x · ⌊ᴮω⌋  ⟹  x_{k+1} = x_k · Exp(Δt ᴮω_k) = x_k ⊕_{SO(3)} (Δt f(ᴮω_k)),  f(ᴮω_k) = ᴮω_k    (19)

Example 4:
Vectors of known magnitude (e.g., gravity) in the global frame. Let x ∈ S²(g) be the gravity vector in the global frame with known magnitude g. Then

ẋ = 0  ⟹  x_{k+1} = x_k = x_k ⊕_{S²(g)} (Δt f(x_k)),  f(x_k) = 0    (20)

Example 5:
Vectors of known magnitude (e.g., gravity) in the body frame. Let x ∈ S²(g) be the gravity vector in the body frame and ᴮω be the body angular velocity, held constant over one sampling period Δt. Then

ẋ = −⌊ᴮω⌋ x  ⟹  x_{k+1} = Exp(−Δt ᴮω_k) x_k = x_k ⊕_{S²(g)} (Δt f(ᴮω_k)),  f(ᴮω_k) = −ᴮω_k    (21)

Example 6:
Bearing-distance parameterization of visual landmarks [11]. Let x ∈ S²(1) and d(ρ) ∈ R be the bearing vector and depth (with parameter ρ), respectively, of a visual landmark, and let ᴳR_C, ᴳp_C be the attitude and position of the camera. Then the landmark position in the global frame is ᴳR_C (x d(ρ)) + ᴳp_C, which is constant over time:

d(ᴳR_C (x d(ρ)) + ᴳp_C)/dt = 0  ⟹  ⌊ᶜω⌋(x d(ρ)) + ẋ d(ρ) + x d′(ρ) ρ̇ + ᶜv = 0    (22)

Left-multiplying (22) by xᵀ and using xᵀẋ = 0 yields ρ̇ = −xᵀ · ᶜv / d′(ρ). Substituting this into (22) leads to

ẋ = −⌊ᶜω + (1/d(ρ)) ⌊x⌋ · ᶜv⌋ · x  ⟹  x_{k+1} = Exp(−Δt (ᶜω_k + (1/d(ρ)) ⌊x_k⌋ · ᶜv_k)) x_k = x_k ⊕_{S²(1)} (Δt f(x_k, ᶜω_k, ᶜv_k)),
f(x_k, ᶜω_k, ᶜv_k) = −ᶜω_k − (1/d(ρ)) ⌊x_k⌋ · ᶜv_k    (23)

where ᶜω + (1/d(ρ)) ⌊x⌋ · ᶜv is assumed constant over one sampling period Δt due to the zero-order-hold assumption.

V. ERROR-STATE KALMAN FILTERS ON MANIFOLDS
In this section, we derive a symbolic Kalman filter based on the canonical system representation (16). To avoid singularities of a minimal parameterization of the original state, which lies on a manifold, we employ the error-state idea that has been studied in prior work such as [6] and [16]. The presented derivation is abstract, yet more concise, compact and generic. Moreover, for a complete treatment, we derive the full multi-rate iterated Kalman filter. Readers may refer to [6] for more detailed derivations and explanations, or to [16] for a brief derivation on a concrete example.

Fig. 5. The covariance matrix (P_{k|k}) of the error state (δx_{k|k}).

In the following presentation, we use these notations:
(i) S denotes the manifold on which the state x lies, and M denotes the manifold on which the measurement z lies. For notational simplicity, we drop the subscripts S, M of ⊞ and ⊟ when the context is clear.
(ii) The subscript k denotes the time index; e.g., x_k is the ground truth of the state x at step k.
(iii) The subscript τ|k denotes the estimate of a quantity at step τ based on all measurements up to step k; e.g., x_{τ|k} is the estimate of x_τ based on measurements up to step k. Filtering requires τ ≥ k: more specifically, τ > k for the state prediction (i.e., prior estimate) and τ = k for the state update (i.e., posterior estimate).
(iv) δx_{τ|k} = x_τ ⊟ x_{τ|k} denotes the estimation error in the tangent space of x_{τ|k}. It is a random vector in R^n since the ground-truth state x_τ is random.
(v) P_{τ|k} denotes the covariance of the estimation error δx_{τ|k}.
(vi) The superscript j denotes the j-th iteration of the iterated Kalman filter; e.g., x^j_{k|k} denotes the estimate of x_k at the j-th iteration based on measurements up to step k.

A. Initialization

Assume we have received measurements up to step k and updated the state at that time step as x_{k|k}, along with the updated covariance matrix P_{k|k}. According to the notation conventions above, P_{k|k} denotes the covariance of δx_{k|k}, an error in the tangent space of the state update x_{k|k}. The relation between δx_{k|k} and P_{k|k} is shown in Fig. 5.

B. State propagation
The state propagation from step k follows directly from the system model (16) by setting w = 0:

x_{τ+1|k} = x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, 0)),  τ ≥ k    (24)

If only one step needs to be propagated, which is usually the case when measurements arrive at the same rate as the input, then τ = k. Otherwise, the propagation proceeds at each input and stops when a measurement arrives.

C. The error-state system
The error-state Kalman filter propagates the covariance matrix of the error state in order to avoid the over-parameterization in x. The error state is defined for τ ≥ k as

δx_{τ|k} = x_τ ⊟ x_{τ|k},  τ ≥ k    (25)

Substituting (16) and (24) into (25) leads to

δx_{τ+1|k} = x_{τ+1} ⊟ x_{τ+1|k} = (x_τ ⊕ (Δt f(x_τ, u_τ, w_τ))) ⊟ (x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, 0)))    (26)

Then substituting x_τ = x_{τ|k} ⊞ δx_{τ|k} (from (25)) into the above equation leads to

δx_{τ+1|k} = ((x_{τ|k} ⊞ δx_{τ|k}) ⊕ (Δt f(x_{τ|k} ⊞ δx_{τ|k}, u_τ, w_τ))) ⊟ (x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, 0)))    (27)

which defines a new system starting from τ = k. This system describes the time evolution of the error state δx_{τ|k} and is hence referred to as the error-state system. Since the new error-state system originates from the current measurement time k, it is re-defined once a new measurement is received to update the state estimate. Such a repeating process effectively restricts the error trajectory to a neighborhood of the identity, validating the minimal parameterization δx_{τ|k}. In case S is a Lie group, the error state in the tangent space of x_{τ|k} is δx_{τ|k} = Log(x_{τ|k}^{-1} · x_τ). Defining x̃_{τ|k} = x_{τ|k}^{-1} · x_τ as the error state on the original manifold S, the relation between the two trajectories δx_{τ|k} and x̃_{τ|k} is shown in Fig. 1.

Since the error-state system (27) has a minimal parameterization, standard Kalman filter variants can be employed. Accordingly, the two Kalman filter steps, propagation and update, are referred to as "error-state propagation" and "error-state update", respectively, to distinguish them from the original state space (16). In the following, we detail the error-state propagation and error-state update.
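To illustrate (25)–(27) on a concrete manifold, the sketch below (plain Python; the body-frame attitude kinematics of Example 3, i.e., f(x, u, w) = u + w with gyro input u; all numbers are illustrative) propagates a "true" state and its prediction, and checks that the error obtained from the states via (26) coincides with the error-state recursion (27) written out for SO(3), and that it vanishes when the initial error and noise are zero:

```python
import math

def hat(u):
    return [[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_T(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def Exp(u):  # Rodrigues' formula
    t = math.sqrt(sum(c * c for c in u))
    K = hat(u)
    K2 = mat_mul(K, K)
    a = math.sin(t) / t if t > 1e-10 else 1.0
    b = (1.0 - math.cos(t)) / t ** 2 if t > 1e-10 else 0.5
    return [[(1.0 if i == j else 0.0) + a * K[i][j] + b * K2[i][j]
             for j in range(3)] for i in range(3)]

def Log(R):  # inverse of Exp for angles below pi
    c = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    t = math.acos(c)
    if t < 1e-10:
        return [0.0, 0.0, 0.0]
    s = 2.0 * math.sin(t)
    return [t * (R[2][1] - R[1][2]) / s,
            t * (R[0][2] - R[2][0]) / s,
            t * (R[1][0] - R[0][1]) / s]

dt = 0.01
u  = [0.4, 0.1, -0.3]        # gyro input (illustrative)
w  = [0.002, -0.001, 0.003]  # process-noise sample (illustrative)
d0 = [0.01, -0.02, 0.015]    # initial error state

x_hat  = Exp([0.2, -0.1, 0.3])    # state prediction
x_true = mat_mul(x_hat, Exp(d0))  # ground truth x = x_hat ⊞ d0, cf. (25)

# one step of the true system (16) and of the prediction (24), f(x, u, w) = u + w
x_true1 = mat_mul(x_true, Exp([dt * (u[i] + w[i]) for i in range(3)]))
x_hat1  = mat_mul(x_hat,  Exp([dt * c for c in u]))

# error state after the step, computed two ways
d1_def = Log(mat_mul(mat_T(x_hat1), x_true1))   # definition (26): x_{τ+1} ⊟ x_{τ+1|k}
d1_rec = Log(mat_mul(mat_mul(Exp([-dt * c for c in u]), Exp(d0)),
                     Exp([dt * (u[i] + w[i]) for i in range(3)])))  # recursion (27) on SO(3)
print(d1_def, d1_rec)  # the two agree

# with zero initial error and zero noise, the error stays zero, cf. (30)
d1_zero = Log(mat_mul(Exp([-dt * c for c in u]), Exp([dt * c for c in u])))
```

Note that the error-state recursion never touches x̂ itself: the manifold structure has been factored out, which is precisely what the symbolic framework exploits.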
1) Initial condition:
The error-state system (27) starts from τ = k. The initial estimate is

δx_{(k|k)|k} = (x_k ⊟ x_{k|k})_{|k} = x_{k|k} ⊟ x_{k|k} = 0    (28)

Here, the notation δx_{(k|k)|k} denotes the estimate of the random vector δx_{k|k} (recall that this is indeed random due to its definition in (25) and the fact that the ground-truth state x_k is random) based on measurements up to k. The result in (28) is not surprising, as δx_{k|k} is the error after conditioning on the measurements (up to k) already, so conditioning on the same measurements again gives no more information.

Fig. 6. Propagation of the state (red arrows on the manifold) and its covariance (green arrows on the tangent planes).
2) Error state propagation:
The error-state propagation follows directly from the error-state system model (27) by setting w = 0:

δx_{(τ+1|k)|k} = ((x_{τ|k} ⊞ δx_{(τ|k)|k}) ⊕ (Δt f(x_{τ|k} ⊞ δx_{(τ|k)|k}, u_τ, 0))) ⊟ (x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, 0))),  τ ≥ k    (29)

Starting from the initial condition (28), we obtain

δx_{(τ|k)|k} = 0,  ∀τ ≥ k    (30)

Next, to propagate the error covariance, we linearize the system (27):

δx_{τ+1|k} ≈ F_{x_τ} δx_{τ|k} + F_{w_τ} w_τ    (31)

where F_{x_τ} is the partial derivative of (27) with respect to δx_{τ|k} at the point δx_{(τ|k)|k} = 0:

F_{x_τ} = ∂(((x_{τ|k} ⊞ δx_{τ|k}) ⊕ (Δt f(x_{τ|k}, u_τ, 0))) ⊟ (x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, 0)))) / ∂δx_{τ|k}
        + ∂((x_{τ|k} ⊕ (Δt f(x_{τ|k} ⊞ δx_{τ|k}, u_τ, 0))) ⊟ (x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, 0)))) / ∂δx_{τ|k}
        = G_{x_τ} + Δt G_{f_τ} ∂f(x_{τ|k} ⊞ δx, u_τ, 0)/∂δx |_{δx=0}    (32)

and F_{w_τ} is the partial derivative of (27) with respect to w_τ at the point w_τ = 0:

F_{w_τ} = ∂((x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, w_τ))) ⊟ (x_{τ|k} ⊕ (Δt f(x_{τ|k}, u_τ, 0)))) / ∂w_τ = Δt G_{f_τ} ∂f(x_{τ|k}, u_τ, w)/∂w |_{w=0}    (33)

where

G_{x_τ} = ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u |_{x=x_{τ|k}; u=0; v=Δt f(x_{τ|k},u_τ,0); y=x_{τ|k} ⊕ Δt f(x_{τ|k},u_τ,0)}
G_{f_τ} = ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v |_{x=x_{τ|k}; u=0; v=Δt f(x_{τ|k},u_τ,0); y=x_{τ|k} ⊕ Δt f(x_{τ|k},u_τ,0)}    (34)

Finally, the covariance is propagated as

P_{τ+1|k} = F_{x_τ} P_{τ|k} F_{x_τ}ᵀ + F_{w_τ} Q_τ F_{w_τ}ᵀ    (35)

The propagation of the state (24) and of the covariance (35) is illustrated in Fig. 6.
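For the body-frame attitude system of Example 3 (f(x, u, w) = u + w), the manifold-specific parts listed in TABLE I give F_x = Exp(−Δt u) and F_w = Δt A(Δt u)ᵀ. The sketch below (plain Python, illustrative numbers) checks this linearization against the exact error-state map and then propagates the covariance per (35):

```python
import math

def hat(u):
    return [[0.0, -u[2], u[1]], [u[2], 0.0, -u[0]], [-u[1], u[0], 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_T(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def mat_vec(A, u):
    return [sum(A[i][k] * u[k] for k in range(3)) for i in range(3)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def Exp(u):  # Rodrigues' formula
    t = math.sqrt(sum(c * c for c in u))
    K = hat(u)
    K2 = mat_mul(K, K)
    a = math.sin(t) / t if t > 1e-10 else 1.0
    b = (1.0 - math.cos(t)) / t ** 2 if t > 1e-10 else 0.5
    return [[(1.0 if i == j else 0.0) + a * K[i][j] + b * K2[i][j]
             for j in range(3)] for i in range(3)]

def Log(R):
    c = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    t = math.acos(c)
    if t < 1e-10:
        return [0.0, 0.0, 0.0]
    s = 2.0 * math.sin(t)
    return [t * (R[2][1] - R[1][2]) / s,
            t * (R[0][2] - R[2][0]) / s,
            t * (R[1][0] - R[0][1]) / s]

def A_mat(u):  # A(u) of (7): I + (1-cos t)/t^2 ⌊u⌋ + (t - sin t)/t^3 ⌊u⌋^2
    t = math.sqrt(sum(c * c for c in u))
    K = hat(u)
    K2 = mat_mul(K, K)
    a = (1.0 - math.cos(t)) / t ** 2 if t > 1e-10 else 0.5
    b = (t - math.sin(t)) / t ** 3 if t > 1e-10 else 1.0 / 6.0
    return [[(1.0 if i == j else 0.0) + a * K[i][j] + b * K2[i][j]
             for j in range(3)] for i in range(3)]

dt, u = 0.01, [0.4, 0.1, -0.3]   # sampling period, gyro input (illustrative)
v = [dt * c for c in u]          # Δt f(x̂, u, 0) with f = u

F_x = Exp([-c for c in v])                                # G_x = Exp(-Δt f); ∂f/∂δx = 0
F_w = [[dt * e for e in row] for row in mat_T(A_mat(v))]  # Δt G_f = Δt A(Δt f)^T

# first-order check against the exact error-state map of (27)
d = [1e-3, -2e-3, 1.5e-3]   # small error state
w = [2e-3, -1e-3, 1e-3]     # small noise sample
exact = Log(mat_mul(mat_mul(Exp([-c for c in v]), Exp(d)),
                    Exp([v[i] + dt * w[i] for i in range(3)])))
lin = [mat_vec(F_x, d)[i] + mat_vec(F_w, w)[i] for i in range(3)]
err = max(abs(exact[i] - lin[i]) for i in range(3))
print(err)  # second-order small in (d, w)

# covariance propagation (35)
P = [[1e-4 if i == j else 0.0 for j in range(3)] for i in range(3)]
Q = [[1e-5 if i == j else 0.0 for j in range(3)] for i in range(3)]
P1 = mat_add(mat_mul(mat_mul(F_x, P), mat_T(F_x)),
             mat_mul(mat_mul(F_w, Q), mat_T(F_w)))
```

Only `F_x` and `F_w` change from system to system; the covariance update (35) itself is manifold-agnostic, which is what allows it to live inside the generic filter.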
3) Isolation of manifold structures:
As shown by (32) and (33), the two system matrices F_{x_τ}, F_{w_τ} are well separated into manifold-specific parts G_{x_τ}, G_{f_τ} and system-specific parts ∂f(x_{τ|k} ⊞ δx, u_τ, 0)/∂δx |_{δx=0} and ∂f(x_{τ|k}, u_τ, w)/∂w |_{w=0}. The manifold-specific parts for commonly used manifolds are listed in TABLE I. Moreover, based on (15), the manifold-specific parts for any compound manifold are the concatenation of those of its primitive manifolds.

TABLE I
MANIFOLD-SPECIFIC PARTS G_{x_τ} AND G_{f_τ}

S      | G_{x_τ}
R^n    | I_{n×n}
SO(3)  | Exp(−Δt f(x_{τ|k}, u_τ, 0))
S²(r)  | −(1/r²) B(x_{τ+1|k})ᵀ Exp(Δt f(x_{τ|k}, u_τ, 0)) ⌊x_{τ|k}⌋² B(x_{τ|k})

S      | G_{f_τ}
R^n    | I_{n×n}
SO(3)  | A(Δt f(x_{τ|k}, u_τ, 0))ᵀ
S²(r)  | −(1/r²) B(x_{τ+1|k})ᵀ Exp(Δt f(x_{τ|k}, u_τ, 0)) ⌊x_{τ|k}⌋² A(Δt f(x_{τ|k}, u_τ, 0))ᵀ

D. State update

1) Prior distribution:
Assume a measurement arrives at step τ > k. Without loss of generality, we assume τ = k + 1, i.e., the measurement rate is equal to the input rate. The propagated error state δx_{k+1|k} and its covariance P_{k+1|k} define a prior distribution for x_{k+1}:

δx_{k+1|k} = x_{k+1} ⊟ x_{k+1|k} ∼ N(0, P_{k+1|k}) (36)
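The error-state distribution (36) lives in the tangent space at x_{k+1|k}: samples of the state itself are x_{k+1|k} ⊞ δ with δ ∼ N(0, P_{k+1|k}). A quick illustrative sketch (Python/NumPy; all numerical values are arbitrary) of this parameterization for SO(3), checking that ⊟ inverts ⊞ and that sampled tangent-space errors reproduce P:

```python
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def Exp(v):
    # SO(3) exponential map (Rodrigues' formula).
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3) + hat(v)
    K = hat(v / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def Log(R):
    # SO(3) logarithm (principal branch).
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    th = np.arccos(c)
    a = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return 0.5 * a if th < 1e-12 else (th / (2.0 * np.sin(th))) * a

def boxplus(R, d):      # x ⊞ d = x · Exp(d)
    return R @ Exp(d)

def boxminus(Ra, Rb):   # a ⊟ b = Log(b^{-1} a)
    return Log(Rb.T @ Ra)

rng = np.random.default_rng(0)
R0 = Exp(np.array([0.4, -0.2, 0.1]))   # nominal estimate x_{k+1|k}
P = 0.01 * np.eye(3)                   # error covariance, tangent space at R0
# Sample states on the manifold, then recover their tangent-space errors.
d = rng.multivariate_normal(np.zeros(3), P, size=20000)
rec = np.array([boxminus(boxplus(R0, di), R0) for di in d])
emp = rec.T @ rec / len(rec)           # empirical second moment of the errors
```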
2) Iterated update:
Now assume the new measurement at step k + 1 is z_{k+1}. In the j-th iteration, the state estimate is x^j_{k+1|k+1}, where x^0_{k+1|k+1} = x_{k+1|k} (i.e., the prior estimate). Define the residual

r^j_{k+1} ≜ z_{k+1} ⊟ h(x^j_{k+1|k+1}, 0)
         = h(x_{k+1}, v_{k+1}) ⊟ h(x^j_{k+1|k+1}, 0)
         = h(x^j_{k+1|k+1} ⊞ δx^j, v_{k+1}) ⊟ h(x^j_{k+1|k+1}, 0)
         ≈ D^j_{k+1} v_{k+1} + H^j_{k+1} δx^j (37)

where δx^j ≜ x_{k+1} ⊟ x^j_{k+1|k+1} is the error between the ground-truth state x_{k+1} and its most recent estimate x^j_{k+1|k+1}, and

H^j_{k+1} = ∂(h(x^j_{k+1|k+1} ⊞ δx, 0) ⊟ h(x^j_{k+1|k+1}, 0))/∂δx |_{δx=0}, which reduces to ∂h(x^j_{k+1|k+1} ⊞ δx, 0)/∂δx |_{δx=0} for M = R^m;
D^j_{k+1} = ∂(h(x^j_{k+1|k+1}, v) ⊟ h(x^j_{k+1|k+1}, 0))/∂v |_{v=0}, which reduces to ∂h(x^j_{k+1|k+1}, v)/∂v |_{v=0} for M = R^m. (38)

Equation (37) defines a posteriori distribution for δx^j:

(D^j_{k+1} v_{k+1}) | δx^j = r^j_{k+1} − H^j_{k+1} δx^j ∼ N(0, R̄_{k+1}); R̄_{k+1} = D^j_{k+1} R_{k+1} (D^j_{k+1})^T (39)

Fig. 7. Prior distribution N(0, P_{k+1|k}), its projection N(μ₁, Σ₁), and posteriori distribution N(μ₂, Σ₂), where μ₁ = −J^j_{k+1}(x^j_{k+1|k+1} ⊟ x_{k+1|k}), Σ₁ = J^j_{k+1} P_{k+1|k} (J^j_{k+1})^T and μ₂ = (H^j_{k+1})⁻¹ r^j_{k+1}, Σ₂ = (H^j_{k+1})⁻¹ R̄_{k+1} (H^j_{k+1})⁻ᵀ.

On the other hand, (36) defines a distribution for the prior estimation error δx_{k+1|k} ∼ N(0, P_{k+1|k}), which lies in the tangent space of x_{k+1|k}. As shown in Fig. 7, projecting δx_{k+1|k} to the tangent space of x^j_{k+1|k+1} leads to

δx_{k+1|k} = x_{k+1} ⊟ x_{k+1|k} = (x^j_{k+1|k+1} ⊞ δx^j) ⊟ x_{k+1|k} = (x^j_{k+1|k+1} ⊟ x_{k+1|k}) + (J^j_{k+1})⁻¹ δx^j (40)

where

J^j_{k+1} = ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u |_{x=x_{k+1|k}; u=x^j_{k+1|k+1} ⊟ x_{k+1|k}; v=0; y=x^j_{k+1|k+1}} (41)

i.e., (J^j_{k+1})⁻¹ is the Jacobian of δx_{k+1|k} with respect to (w.r.t.) δx^j evaluated at zero. Then, the equivalent prior distribution for δx^j is

δx^j ∼ N(−J^j_{k+1}(x^j_{k+1|k+1} ⊟ x_{k+1|k}), J^j_{k+1} P_{k+1|k} (J^j_{k+1})^T) (42)

Combining the prior distribution (42) and the posteriori distribution (39) leads to the maximum a-posteriori (MAP) estimate of δx^j (see Fig. 7):

arg max_{δx^j} log(N(δx^j) · N((D^j_{k+1} v_{k+1}) | δx^j)) = arg min_{δx^j} g(δx^j);
g(δx^j) = ‖r^j_{k+1} − H^j_{k+1} δx^j‖²_{R̄⁻¹_{k+1}} + ‖(x^j_{k+1|k+1} ⊟ x_{k+1|k}) + (J^j_{k+1})⁻¹ δx^j‖²_{P⁻¹_{k+1|k}} (43)

where ‖x‖²_M ≜ x^T M x.
The optimal solution δx^j_o of (43) leads to the Kalman update [35]:

δx^j_o = −J^j_{k+1}(x^j_{k+1|k+1} ⊟ x_{k+1|k}) + K^j_{k+1}(r^j_{k+1} + H^j_{k+1} J^j_{k+1}(x^j_{k+1|k+1} ⊟ x_{k+1|k}))
K^j_{k+1} = (Q^j_{k+1})⁻¹ (H^j_{k+1})^T R̄⁻¹_{k+1} = J^j_{k+1} P_{k+1|k} (J^j_{k+1})^T (H^j_{k+1})^T (S^j_{k+1})⁻¹
Q^j_{k+1} = (H^j_{k+1})^T R̄⁻¹_{k+1} H^j_{k+1} + (J^j_{k+1})⁻ᵀ P⁻¹_{k+1|k} (J^j_{k+1})⁻¹
S^j_{k+1} = H^j_{k+1} J^j_{k+1} P_{k+1|k} (J^j_{k+1})^T (H^j_{k+1})^T + R̄_{k+1} (44)

where Q^j_{k+1} is the Hessian matrix of (43), whose inverse represents the covariance of δx^j − δx^j_o and can furthermore be written in the form below [35]:

P^j_{k+1} = (Q^j_{k+1})⁻¹ = (I − K^j_{k+1} H^j_{k+1}) J^j_{k+1} P_{k+1|k} (J^j_{k+1})^T (45)

With the optimal δx^j_o, the estimate of x_{k+1} is updated as

x^{j+1}_{k+1|k+1} = x^j_{k+1|k+1} ⊞ δx^j_o (46)

The above process iterates until convergence or until the maximum number of iterations is exceeded.
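To make (37)–(46) concrete, here is a small illustrative sketch (Python/NumPy; not the toolkit's C++ API) of the iterated update for the special case S = R^n, where ⊞ is ordinary addition and J^j = I in (41): a Gaussian prior on a 2D position is fused with a nonlinear range measurement h(x) = ‖x‖. The toy problem and its numbers are hypothetical.

```python
import numpy as np

def iterated_update(x_prior, P, z, h, H_of, R, n_iter=10):
    """Iterated error-state update of (37)-(46), specialized to S = R^n,
    where boxplus is '+' and the projection Jacobian J^j in (41) is I."""
    x = x_prior.copy()
    for _ in range(n_iter):
        H = H_of(x)                        # H^j, (38)
        r = z - h(x)                       # r^j, (37)
        dxp = x - x_prior                  # x^j boxminus x_{k+1|k}
        S = H @ P @ H.T + R                # S^j, (44)
        K = P @ H.T @ np.linalg.inv(S)     # K^j (covariance form), (44)
        dxo = -dxp + K @ (r + H @ dxp)     # optimal error delta-x_o^j, (44)
        x = x + dxo                        # state refinement, (46)
    P_post = (np.eye(len(x)) - K @ H) @ P  # (45) with J = I
    return x, P_post

# Toy problem: Gaussian prior on position, range-to-origin measurement.
h = lambda x: np.array([np.linalg.norm(x)])
H_of = lambda x: (x / np.linalg.norm(x)).reshape(1, -1)
x_prior, P = np.array([1.0, 0.5]), 0.1 * np.eye(2)
z, R = np.array([1.5]), np.array([[0.01]])
x_map, P_post = iterated_update(x_prior, P, z, h, H_of, R)
```

At convergence the MAP stationarity condition of (43) holds, and the information-form gain (Q^j)⁻¹Hᵀ R̄⁻¹ coincides with the covariance-form gain used above, which the test checks.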
3) Covariance reset:
Assume the iterated update stops after κ ≥ 0 iterations, resulting in a MAP estimate x^{κ+1}_{k+1|k+1} and covariance matrix P^κ_{k+1}. Then x^{κ+1}_{k+1|k+1} becomes the Kalman update of x_{k+1}:

x_{k+1|k+1} = x^{κ+1}_{k+1|k+1} (47)

which is passed to the next step of the Kalman filter. As for P^κ_{k+1}, note that it describes the covariance of δx^κ − δx^κ_o, which lies in the tangent space of x^κ_{k+1|k+1}, while what is required at the next step of the Kalman filter is the covariance P_{k+1|k+1} describing the error δx_{k+1|k+1}, which lies in the tangent space of x_{k+1|k+1} (see Section V-A). This discrepancy necessitates a projection step, as shown in Fig. 8. According to the definition of the error state in (25), we have

δx_{k+1|k+1} = x_{k+1} ⊟ x_{k+1|k+1} = x_{k+1} ⊟ x^{κ+1}_{k+1|k+1};  δx^κ = x_{k+1} ⊟ x^κ_{k+1|k+1} (48)

which leads to

δx_{k+1|k+1} = (x^κ_{k+1|k+1} ⊞ δx^κ) ⊟ x^{κ+1}_{k+1|k+1} = L_{k+1}(δx^κ − δx^κ_o) (49)

where

L_{k+1} = ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u |_{x=x^κ_{k+1|k+1}; u=δx^κ_o; v=0; y=x^{κ+1}_{k+1|k+1}} (50)

is the Jacobian of δx_{k+1|k+1} w.r.t. δx^κ evaluated at δx^κ_o. Finally, the covariance for δx_{k+1|k+1} is

P_{k+1|k+1} = L_{k+1} P^κ_{k+1} L^T_{k+1} (51)

Fig. 8. Reset of covariance. The red and green R^n spaces are tangent to the S surface at the points x^κ_{k+1|k+1} and x^{κ+1}_{k+1|k+1}, respectively.
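For S = SO(3), the reset Jacobian (50) evaluates to L_{k+1} = A(δx^κ_o)^T (see TABLE II), i.e., the right Jacobian of SO(3) at the converged correction. The following illustrative sketch (Python/NumPy; values are hypothetical) performs the reset (51) and checks L against a central finite difference of the projection (49):

```python
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def Exp(v):
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3) + hat(v)
    K = hat(v / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def Log(R):
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    th = np.arccos(c)
    a = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return 0.5 * a if th < 1e-12 else (th / (2.0 * np.sin(th))) * a

def A_T(u):
    # A(u)^T: the right Jacobian of SO(3).
    th = np.linalg.norm(u)
    if th < 1e-9:
        return np.eye(3) - 0.5 * hat(u)
    U = hat(u)
    return (np.eye(3)
            - (1.0 - np.cos(th)) / th**2 * U
            + (th - np.sin(th)) / th**3 * (U @ U))

def reset_covariance(P_kappa, dxo):
    """Covariance reset (49)-(51) for S = SO(3): L = A(dxo)^T."""
    L = A_T(dxo)
    return L, L @ P_kappa @ L.T

dxo = np.array([0.2, -0.1, 0.15])            # converged correction delta-x_o^kappa
L, P_new = reset_covariance(0.01 * np.eye(3), dxo)

# Finite-difference the projection (49): the map
#   dx -> (x boxplus dx) boxminus (x boxplus dxo) = Log(Exp(-dxo) Exp(dx))
# should have Jacobian L at dx = dxo (the attitude x itself cancels).
eps = 1e-6
L_fd = np.zeros((3, 3))
for k in range(3):
    e = np.zeros(3); e[k] = eps
    fp = Log(Exp(dxo).T @ Exp(dxo + e))
    fm = Log(Exp(dxo).T @ Exp(dxo - e))
    L_fd[:, k] = (fp - fm) / (2.0 * eps)
```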
4) Isolation of manifold structures:
Notice that the two matrices J^j_{k+1} and L_{k+1} required in the Kalman update depend only on the manifold S and are thus manifold-specific matrices. Their values for commonly used manifolds are summarized in TABLE II. Again, the manifold-specific parts for any compound manifold are the concatenation of those of its primitive manifolds. In particular, for an extended Kalman filter (i.e., κ = 0), J^κ_{k+1} = I while L_{k+1} ≠ I; for a fully converged iterated Kalman filter (i.e., κ sufficiently large), J^κ_{k+1} ≠ I while L_{k+1} = I.

TABLE II
MANIFOLD-SPECIFIC PARTS J^j_{k+1} AND L_{k+1}

S      | J^j_{k+1}
R^n    | I_{n×n}
SO(3)  | A(δx^j_{k+1|k+1})^T
S²(r)  | −(1/r²) B(x^j_{k+1|k+1})^T Exp(B(x_{k+1|k}) δx^j_{k+1|k+1}) ⌊x_{k+1|k}⌋ A(B(x_{k+1|k}) δx^j_{k+1|k+1})^T B(x_{k+1|k})

S      | L_{k+1}
R^n    | I_{n×n}
SO(3)  | A(δx^κ_o)^T
S²(r)  | −(1/r²) B(x^{κ+1}_{k+1|k+1})^T Exp(B(x^κ_{k+1|k+1}) δx^κ_o) ⌊x^κ_{k+1|k+1}⌋ A(B(x^κ_{k+1|k+1}) δx^κ_o)^T B(x^κ_{k+1|k+1})

where δx^j_{k+1|k+1} = x^j_{k+1|k+1} ⊟ x_{k+1|k}.

E. Error-state iterated Kalman filter on Manifolds
Summarizing all the above procedures in Sections V-A, V-B, V-C, and V-D leads to the full error-state iterated Kalman filter operating on manifolds (see Algorithm 1). Setting the maximum number of iterations N_max to zero leads to the error-state extended Kalman filter used in [6, 16].

Algorithm 1: Iterated error-state Kalman filter on manifolds
Input: x_{k|k}, P_{k|k}, u_k, z_{k+1}
Output: state update x_{k+1|k+1} and covariance P_{k+1|k+1}
Prediction:
  x_{k+1|k} = x_{k|k} ⊕ Δt f(x_{k|k}, u_k, 0);
  P_{k+1|k} = F_{x_k} P_{k|k} F^T_{x_k} + F_{w_k} Q_k F^T_{w_k};
Update:
  j = −1; x^0_{k+1|k+1} = x_{k+1|k};
  while not converged and j ≤ N_max − 2 do
    j = j + 1;
    calculate r^j_{k+1}, D^j_{k+1}, H^j_{k+1} as in (37) and (38);
    calculate J^j_{k+1} as in (41);
    calculate K^j_{k+1} and δx^j_o as in (44);
    x^{j+1}_{k+1|k+1} = x^j_{k+1|k+1} ⊞ δx^j_o;
  end while
  P^j_{k+1} = (I − K^j_{k+1} H^j_{k+1}) J^j_{k+1} P_{k+1|k} (J^j_{k+1})^T;
  x_{k+1|k+1} = x^{j+1}_{k+1|k+1};
  P_{k+1|k+1} = L_{k+1} P^j_{k+1} L^T_{k+1};

VI. EMBEDDING MANIFOLD STRUCTURES INTO KALMAN FILTERS AND TOOLKIT DEVELOPMENT
As shown in Section V, the derived Kalman filter is formulated symbolically, and each of its steps separates cleanly into manifold-specific parts and system-specific parts. More specifically, the state propagation (24) breaks into the manifold-specific operation ⊕ and the system-specific part Δt f(x, u, w); the two matrices F_x and F_w used in the covariance propagation (35) break into the manifold-specific parts G_x, G_f and the system-specific parts ∂f(x ⊞ δx, u, 0)/∂δx |_{δx=0} and ∂f(x, u, w)/∂w |_{w=0}; the state update (44) breaks into the manifold-specific operation ⊞, the manifold-specific part J^j_{k+1}, and the system-specific parts h(x, v), ∂(h(x ⊞ δx, 0) ⊟ h(x, 0))/∂δx |_{δx=0}, and ∂(h(x, v) ⊟ h(x, 0))/∂v |_{v=0}; and the covariance reset involves only the manifold-specific part L_{k+1}. Note that these system-specific descriptions are often easy to derive even for robotic systems of high dimension (see Section VII).

This clean separation between manifold-specific parts and system-specific descriptions allows the manifold structures to be embedded into the Kalman filter framework, leaving only the system-specific parts to be filled in for a particular system. Moreover, enabled by the manifold composition in (14) and (15), we only need to do so for simple primitive manifolds, while those for larger compound manifolds can be constructed automatically. These two properties enabled us to develop a C++ toolkit that encapsulates the manifold-specific operations with a Kalman filter.
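To illustrate the encapsulation idea in code, here is a minimal, hypothetical sketch (Python; the actual toolkit is C++ and its API differs) of a filter object that owns the manifold-specific operations while the user supplies only the system-specific f, h, and their Jacobians. It mirrors the structure, not the implementation, of IKFoM, and is shown for the trivial manifold S = R², where G_x = G_f = I.

```python
import numpy as np

class ManifoldEKF:
    """Illustrative encapsulation: manifold operations are fixed at
    construction; system-specific f, h and Jacobians are user-supplied."""
    def __init__(self, boxplus, boxminus, x0, P0):
        self.boxplus, self.boxminus = boxplus, boxminus
        self.x, self.P = x0, P0

    def predict(self, f, df_dx, df_dw, u, Q, dt):
        # x <- x (+) dt*f(x, u, 0);  P <- Fx P Fx^T + Fw Q Fw^T  (on R^n: G = I)
        Fx = np.eye(len(self.P)) + dt * df_dx(self.x, u)
        Fw = dt * df_dw(self.x, u)
        self.x = self.boxplus(self.x, dt * f(self.x, u))
        self.P = Fx @ self.P @ Fx.T + Fw @ Q @ Fw.T

    def update(self, h, dh_dx, z, R):
        H = dh_dx(self.x)
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.boxplus(self.x, K @ (z - h(self.x)))
        self.P = (np.eye(len(self.P)) - K @ H) @ self.P

# Demo on S = R^2: boxplus is '+', dynamics dx/dt = u driven by noise.
ekf = ManifoldEKF(lambda x, d: x + d, lambda a, b: a - b,
                  np.zeros(2), np.eye(2))
f = lambda x, u: u
df_dx = lambda x, u: np.zeros((2, 2))
df_dw = lambda x, u: np.eye(2)
ekf.predict(f, df_dx, df_dw, np.array([1.0, 0.0]), 0.1 * np.eye(2), dt=0.5)
tr_before = np.trace(ekf.P)
ekf.update(lambda x: x, lambda x: np.eye(2), np.array([0.6, 0.1]), 0.2 * np.eye(2))
```

Swapping in the SO(3) or S²(r) operations and the corresponding G, J, L matrices, instead of the R^n identities hard-coded here, is exactly what the toolkit automates.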
With this toolkit, users need only specify the manifold of the state S, the measurement M, and the system-specific descriptions (i.e., the functions f, h and their derivatives), and then call the respective Kalman filter operations (i.e., propagation and update) according to the current event (e.g., reception of an input or of a measurement).

The current toolkit implementation is a full multi-rate iterated Kalman filter naturally operating on manifolds and is thus termed IKFoM. Furthermore, it supports three primitive manifolds, R^n, SO(3), and S²(r), and is extendable to other types of primitive manifolds given proper definitions of the operations ⊞, ⊕, and the differentiations ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u and ∂(((x ⊞ u) ⊕ v) ⊟ y)/∂v. The toolkit is open sourced, and more details about the implementation can be found at https://github.com/hku-mars/IKFoM.

Fig. 9. Configuration of the lidar-inertial system from [15]: a small-scale ( mm wheelbase) quadrotor UAV carrying a Livox AVIA lidar and a DJI Manifold 2C computer. The onboard camera is for visualization only.

VII. EXPERIMENTS
In this section, we apply the developed Kalman filter framework and toolkit implementation to a tightly-coupled lidar-inertial navigation system taken from [15]. The overall system, shown in Fig. 9, consists of a solid-state lidar (Livox AVIA) with a built-in IMU and an onboard computer. The lidar provides point-cloud scans together with gyroscope and accelerometer measurements at fixed rates. Unlike conventional spinning lidars (e.g., Velodyne lidars), the Livox AVIA has only a 70° field of view (FoV), making lidar-inertial odometry rather challenging. The onboard computer is configured with a quad-core Intel i7-8550U CPU. Besides the original state estimation problem considered in [15], we further consider the online estimation of the extrinsic between the lidar and the IMU.
A. System modeling
The global frame is denoted G (i.e., the initial frame of the IMU), the IMU frame is taken as the body frame (denoted I), and the lidar frame is denoted L. Assuming the lidar is rigidly attached to the IMU with an unknown extrinsic ᴵT_L = (ᴵR_L, ᴵp_L), the objectives of this system are to 1) estimate the kinematic states of the IMU, including its position (ᴳp_I), velocity (ᴳv_I), and rotation (ᴳR_I ∈ SO(3)) in the global frame; 2) estimate the biases of the IMU (i.e., b_a and b_ω); 3) estimate the gravity vector (ᴳg) in the global frame; 4) estimate the extrinsic ᴵT_L = (ᴵR_L, ᴵp_L) online; and 5) build a global point cloud map of the observed environment. Augmenting the state formulation in [15] with the lidar-IMU extrinsic, we have:

ᴳṗ_I = ᴳv_I,  ᴳv̇_I = ᴳR_I(a_m − b_a − n_a) + ᴳg,
ᴳṘ_I = ᴳR_I ⌊ω_m − b_ω − n_ω⌋,  ḃ_ω = n_{b_ω},  ḃ_a = n_{b_a},
ᴳġ = 0,  ᴵṘ_L = 0,  ᴵṗ_L = 0 (52)

where a_m, ω_m are the IMU measurements, n_a, n_ω are the IMU noises, and n_{b_ω}, n_{b_a} are zero-mean Gaussian white noises that drive the IMU biases b_ω and b_a, respectively. The gravity vector ᴳg is of fixed, known magnitude (9.81 m/s²).

The measurement model is identical to [15]: for each new scan of raw lidar points, we extract the plane and edge points (i.e., feature points) based on the local curvature [36]. Then, for a measured feature point ᴸp_{f_i}, i = 1, ..., m, its true location in the global frame should lie on the corresponding plane (or edge) in the map built so far. More specifically, we represent the corresponding plane (or edge) in the map by its normal direction (or the direction of the edge) u_i and a point ᴳq_i lying on the plane (or edge). Since the point ᴸp_{f_i} is measured in the lidar local frame (hence the superscript L) and contaminated by measurement noise n_i, the true point location in the global frame is ᴳT_I ᴵT_L (ᴸp_{f_i} − n_i).
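As an illustrative sketch (Python/NumPy; the actual system is implemented in C++ through the toolkit, and all numerical values below are hypothetical), the discrete kinematic propagation of (52) and the point-to-plane residual of this global-frame feature location (cf. (53)) can be written as follows, together with a finite-difference check of the residual's sensitivity to the extrinsic rotation, matching the corresponding block of the measurement Jacobian in (56):

```python
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def Exp(v):
    # SO(3) exponential map (Rodrigues' formula).
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3) + hat(v)
    K = hat(v / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def f_step(p, v, R, ba, bw, a_m, w_m, g, dt):
    # One zero-order-hold step of the kinematic model (52), noises set to zero.
    # Gravity and the lidar-IMU extrinsics are constant states (zero dynamics).
    p_n = p + dt * v
    v_n = v + dt * (R @ (a_m - ba) + g)
    R_n = R @ Exp(dt * (w_m - bw))
    return p_n, v_n, R_n, ba, bw

def residual(u, q, R_I, p_I, R_L, p_L, pf):
    # Point-to-plane distance of the noise-free global-frame feature point:
    # u^T ( G_R_I (I_R_L pf + I_p_L) + G_p_I - q ).
    return u @ (R_I @ (R_L @ pf + p_L) + p_I - q)

# Sensitivity to a right-perturbation of the extrinsic rotation I_R_L:
# analytically  -u^T G_R_I I_R_L hat(pf).
u = np.array([0.3, -0.5, 0.8]); u = u / np.linalg.norm(u)
q, pf = np.array([1.0, 2.0, 0.5]), np.array([2.0, -1.0, 0.5])
R_I, p_I = Exp(np.array([0.2, -0.1, 0.3])), np.array([0.5, -1.0, 2.0])
R_L, p_L = Exp(np.array([0.05, 0.1, -0.04])), np.array([0.1, 0.02, -0.03])
J_ana = -(u @ R_I @ R_L @ hat(pf))
J_fd, eps = np.zeros(3), 1e-6
for k in range(3):
    e = np.zeros(3); e[k] = eps
    rp = residual(u, q, R_I, p_I, R_L @ Exp(e), p_L, pf)
    rm = residual(u, q, R_I, p_I, R_L @ Exp(-e), p_L, pf)
    J_fd[k] = (rp - rm) / (2.0 * eps)
```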
Since this true location lies on the plane (or edge) defined by u_i and ᴳq_i, its distance to the plane (or edge) should be zero, i.e.,

G_i (ᴳT_I ᴵT_L (ᴸp_{f_i} − n_i) − ᴳq_i) = 0, i = 1, ..., m (53)

where G_i = u_i^T for a planar feature and G_i = ⌊u_i⌋ for an edge feature. This equation defines an implicit measurement model relating the measurement ᴸp_{f_i}, the measurement noise n_i, and the ground-truth states ᴳT_I and ᴵT_L.

To obtain u_i and ᴳq_i of the corresponding plane (or edge) in the map, we use the state estimated at the current iteration to project the feature point ᴸp_{f_i} to the global frame and find the closest five feature points (of the same type) in the map built so far. After convergence of the iterated Kalman filter, the optimal state update is used to project ᴸp_{f_i} to the global frame and append it to the map. The updated map is then used at the next step to register new scans.

B. Canonical representation
Using the zero-order-hold discretization described in Section IV, the system with state model (52) and measurement model (53) can be discretized and cast into the canonical form as follows:

S = R³ × R³ × SO(3) × R³ × R³ × S²(r) × SO(3) × R³ (with r the known gravity magnitude), M = R × ⋯ × R (m components),
x^T = [ᴳp_I  ᴳv_I  ᴳR_I  b_a  b_ω  ᴳg  ᴵR_L  ᴵp_L], u^T = [a_m  ω_m],
f(x, u, w)^T = [ᴳv_I  ᴳR_I(a_m − b_a − n_a) + ᴳg  ω_m − b_ω − n_ω  n_{b_a}  n_{b_ω}  0  0  0],
h_i(x, v) = G_i (ᴳT_I ᴵT_L (ᴸp_{f_i} − n_i) − ᴳq_i),
w^T = [n_a  n_ω  n_{b_a}  n_{b_ω}], v^T = [⋯ n_i ⋯], i = 1, ..., m, (54)

with the equivalent measurement z being constantly zero.

The system-specific partial differentiations are therefore as follows. For f(x, u, w), with block columns ordered as (δp, δv, δθ, δb_a, δb_ω, δg, δθ_L, δp_L):

∂f(x ⊞ δx, u, 0)/∂δx |_{δx=0} =
[ 0  I  0     0      0   0    0  0 ]
[ 0  0  F₂₃  −ᴳR_I   0   F₂₆  0  0 ]
[ 0  0  0     0     −I   0    0  0 ]
[ 0  0  0     0      0   0    0  0 ]
[ 0  0  0     0      0   0    0  0 ]
[ 0  0  0     0      0   0    0  0 ]
[ 0  0  0     0      0   0    0  0 ]
[ 0  0  0     0      0   0    0  0 ]

∂f(x, u, w)/∂w |_{w=0} =
[ 0      0   0  0 ]
[ −ᴳR_I  0   0  0 ]
[ 0     −I   0  0 ]
[ 0      0   I  0 ]
[ 0      0   0  I ]
[ 0      0   0  0 ]
[ 0      0   0  0 ]
[ 0      0   0  0 ] (55)

where F₂₃ = −ᴳR_I ⌊a_m − b_a⌋ and F₂₆ = −⌊ᴳg⌋ B(ᴳg), with B(·) defined in equation (13). For h(x, v), the i-th block row of

∂(h(x ⊞ δx, 0) ⊟ h(x, 0))/∂δx |_{δx=0} is [ G_i  0  H_{i1}  0  0  0  H_{i2}  G_i ᴳR_I ],

and

∂(h(x, v) ⊟ h(x, 0))/∂v |_{v=0} = diag(⋯, −G_i ᴳR_I ᴵR_L, ⋯) (56)

where H_{i1} = −G_i ᴳR_I ⌊ᴵT_L ᴸp_{f_i}⌋ = −G_i ᴳR_I ⌊ᴵR_L ᴸp_{f_i} + ᴵp_L⌋ and H_{i2} = −G_i ᴳR_I ᴵR_L ⌊ᴸp_{f_i}⌋.

Supplying the canonical representation of the system (54) and the respective partial differentiations in (55) and (56) to our toolkit leads to a tightly-coupled lidar-inertial navigation system.

C. Experiment results
We verify the tightly-coupled lidar-inertial navigation system implemented with our toolkit in three different scenarios, i.e., indoor UAV flight, indoor quick-shake, and outdoor random walk, denoted V1, V2, and V3, respectively. For each scenario, we test the implementation on two trials of data, one collected by ourselves and the other from the original paper [15]. The six datasets are denoted V1-01, V1-02, V2-01, V2-02, V3-01, and V3-02, respectively. In all experiments, the maximal number of iterations of the iterated Kalman filter (see Algorithm 1) is set to 4, i.e., N_max = 4.
1) Indoor UAV flight:
For the UAV flight experiment, we only show the data collected in this work (i.e., V1-01). The experiment is conducted in an indoor environment (see Fig. 10 (A)) where the UAV took off from the ground and flew along a circular path. During the path following, the UAV constantly faced a cluttered office area behind a safety net (see Fig. 10 (B)). After the path following, a human pilot took over the UAV and landed it manually so that the landing position coincides with the take-off point.

Fig. 10 (C) shows the real-time mapping results overlaid with the 3D trajectory estimated by our system. It can be seen that our system achieves consistent mapping even in the cluttered indoor environment. The position drift is small (see Fig. 10 (C1)). This drift is caused partly by the accumulation of odometry error, which is common in SLAM systems, and partly by the inaccurate manual landing.

We show the estimated trajectories of position (ᴳp_I), rotation (ᴳR_I), and velocity (ᴳv_I) in Fig. 11, Fig. 12, and Fig. 13, respectively. Our system achieves smooth state estimation that is suitable for onboard feedback control. All the estimated state variables agree well with the actual motions.

Fig. 10. Real-time mapping result of dataset V1-01, an indoor UAV flight experiment of the lidar-inertial navigation system. A: Photo from ground view; B: Snapshot of onboard FPV video; C: Map result, (C1) Trajectory and poses of the UAV at the beginning and end of the experiment.

Fig. 11. Estimation of the position of the UAV for dataset V1-01, an indoor UAV flight experiment of the lidar-inertial navigation system.

Fig. 12. Estimation of the rotation of the UAV for dataset V1-01, an indoor UAV flight experiment of the lidar-inertial navigation system.

Fig. 13. Estimation of the velocity of the UAV for dataset V1-01, an indoor UAV flight experiment of the lidar-inertial navigation system.
The V1-02 dataset shows similar performance, and its results are not included in this paper due to the space limit. For further experiment demonstration, we refer readers to the videos at https://youtu.be/sz ZlDkl6fA.
2) Indoor quick shake:
The second experiment is conducted in a cluttered office area (see Fig. 14 (A)). In the experiment, the UAV carrying the lidar sensor and onboard computer is handheld (see Fig. 14 (B)) and quickly shaken, creating large rotation rates (see the onboard FPV images in Fig. 14 (A) and the raw IMU measurements in Fig. 15). The UAV ends at the starting position to enable the computation of odometry drift.

Fig. 14 (C) shows the real-time mapping result on dataset V2-01. It is seen that our system achieves consistent mapping even under fast rotational movements, which are usually challenging for visual-inertial odometry due to image defocus and/or motion blur (see Fig. 14 (A4) and (A5)). As shown in Fig. 14 (C3), the estimated final position of the UAV coincides with the starting position, leading to a small position drift.

Fig. 16, Fig. 17, and Fig. 18 show the estimates of the position (ᴳp_I), the Euler angles of the rotation (ᴳR_I), and the velocity (ᴳv_I) of the UAV. These estimates change at a high frequency, which is consistent with the actual motions of the UAV. The noticeable translation partway through the dataset is actual UAV motion. We refer readers to the videos at https://youtu.be/sz ZlDkl6fA for further experiment demonstration.

Fig. 14. Real-time mapping result of dataset V2-01, an indoor quick-shake experiment of the lidar-inertial navigation system. A: Snapshots of onboard FPV video; B: The UAV carrying the lidar sensor and onboard computer is handheld to create large rotations; C: Map result, (C1) Local zoom-in map, (C2) Side view of the map, (C3) Poses of the UAV at the beginning and end of the experiment.

Fig. 15. Magnitude of gyroscope measurements for dataset V2-01, an indoor quick-shake experiment of the lidar-inertial navigation system.
3) Outdoor random walk:
The third experiment is conducted in a structured outdoor environment, a corridor between a slope and the Haking Wong building of the University of Hong Kong. In the experiment, the UAV is handheld, moved along the road, and then returned to the starting position (see Fig. 19 (A)).

The real-time mapping result of dataset V3-01 estimated by our toolkit is shown in Fig. 19 (B), which clearly shows the building on one side and the cars and bricks on the slope. The position drift is very small (see Fig. 19 (B3)). This extremely small drift, although it seemingly supports the efficacy of our system, should not be interpreted as the ground-truth drift, since the actual landing cannot be made this accurate in practice. A more indicative result is obtained from V3-02. The remaining results of V3-02 are very similar to V3-01 and are hence omitted in this paper.

Fig. 16. Estimation of position on dataset V2-01, an indoor quick-shake experiment of the lidar-inertial navigation system.

Fig. 17. Estimation of rotation on dataset V2-01, an indoor quick-shake experiment of the lidar-inertial navigation system.
The estimates of the kinematic quantities are shown in Fig. 20, Fig. 21, and Fig. 22. The trajectory is approximately symmetric about the middle of the experiment in the X and Z directions, which agrees with the actual motion profile, where the sensor is moved back along the same road. For further experiment demonstration, we refer readers to the videos at https://youtu.be/sz ZlDkl6fA.
4) Online estimation of extrinsic, gravity, and IMU bias:
To verify that the developed method functions properly as a filter, the online calibration parameters, which comprise the gravity in the global frame, the IMU biases, and the lidar-IMU extrinsics, have to converge. Moreover, the extrinsic estimates should be close across different datasets with the same sensor setup; we can thus evaluate the extrinsics on multiple datasets and compare the values to which they converge. Fig. 23 shows the final estimates of the rotational and translational parts of the extrinsics obtained by running the proposed toolkit on all six datasets. The initial values of the extrinsics were read from the manufacturer datasheet. As seen in Fig. 23, the extrinsic estimates (both rotation and translation) show close agreement across the different datasets. The uncertainty in translation is at the centimeter level, while that in rotation is below a few degrees. In particular, the variance of the rotational extrinsic on dataset V1 is significantly larger than on the others. This is because of the slow and smooth movement in the flight experiment, which provides insufficient excitation for parameter estimation. In contrast, the motion profiles of the two handheld experiments V2 and V3 contain much more excitation, as shown previously. Another possible reason is the enlarged IMU noise due to the constant propeller rotation during UAV flight. Moreover, as indicated by the blue lines in Fig. 23, we notice a larger variance in V2-02 than in V2-01. This results from the fact that V2-01 has constant excitation over a longer period, while V2-02 ran for a shorter time over which the Kalman filter had not fully converged (e.g., see Fig. 24).

We further inspect the convergence of the gravity estimation. Due to the space limit, we show the result on dataset V2-01 only. Fig. 24 shows the gravity estimation error u = ᴳḡ ⊟ ᴳĝ_k ∈ R², where ᴳḡ is the ground-truth gravity vector and ᴳĝ_k is the estimate at step k. Since the ground-truth gravity vector is unknown, we use the converged gravity estimate as ᴳḡ. Fig. 24 further shows the σ bounds for u estimated by the Kalman filter. The error constantly falls within the σ bounds, which indicates the consistency of the Kalman filter.

Finally, we investigate the convergence of the IMU bias estimation, again on dataset V2-01 only. The results are depicted in Fig. 25 and Fig. 26, where the estimates over time are plotted together with the σ bounds. In particular, the gyroscope biases converge rapidly due to the large rotational movement. The accelerometer biases also converge given sufficient excitation of the system; they typically converge faster along the gravity direction due to the large vertical movement at the beginning of the dataset (see Fig. 14).

Fig. 18. Estimation of velocity on dataset V2-01, an indoor quick-shake experiment of the lidar-inertial navigation system.

Fig. 19. Real-time mapping result of dataset V3-01, an outdoor random walk experiment of the lidar-inertial navigation system. A: Photos of the environment of this experiment; B: Mapping result, (B1) Local zoom-in map of one side of the road, (B2) Local zoom-in map of the other side of the road, (B3) Poses of the UAV at the beginning and end of the experiment.

Fig. 20. Estimation of position on dataset V3-01, an outdoor random walk experiment of the lidar-inertial navigation system.

Fig. 21. Estimation of rotation on dataset V3-01, an outdoor random walk experiment of the lidar-inertial navigation system.

Fig. 22. Estimation of velocity on dataset V3-01, an outdoor random walk experiment of the lidar-inertial navigation system.

Fig. 23. Estimation of the extrinsics between lidar and IMU for the datasets of the lidar-inertial system. The length of each line corresponds to the σ bounds the Kalman filter converged to. Red, blue, and yellow lines indicate datasets V1 (indoor UAV flight), V2 (indoor quick-shake), and V3 (outdoor random walk), respectively. For each dataset, the solid and dashed lines respectively indicate trial 01 (data collected in this work) and trial 02 (data from [15]).

Fig. 24. Gravity estimation error (dark blue lines) and its σ bounds (bright green lines) on dataset V2-01.

Fig. 25. Estimated gyroscope biases on dataset V2-01. Estimates (dark blue lines) together with the σ bounds (bright green lines) are depicted. The estimates converge very quickly due to the large rotational movement.
5) Running time:
To further evaluate the practicality of the developed toolkit, its running time on the three datasets V1-02, V2-02, and V3-02 is evaluated and compared against [15]. Note that the work in [15] also used an iterated Kalman filter but differs from our implementation in two aspects: (1) the iterated Kalman filter in [15] is manually derived, and the respective matrices (e.g., F_{x_τ}, F_{w_τ} in (35)) used in the Kalman filter are directly coded, with matrix sparsity carefully exploited for computational efficiency, whereas our implementation directly uses the toolkit, which separates the computation of manifold-specific parts and system-specific parts; (2) the original work in [15] did not consider the estimation of the extrinsic between the lidar and the IMU, and hence has six fewer state variables. Other than these two aspects, the implementations are identical. Both implementations are tested on the UAV onboard computer (see Fig. 9).

The running time comparison is shown in TABLE III, which reports the average time for completing one step of the Kalman filter (both propagation and update). As expected, the toolkit-based implementation takes more computation time due to the higher state dimension and the toolkit overhead. However, this time overhead is acceptable, and both implementations run sufficiently fast in real time.

Fig. 26. Estimated accelerometer biases on dataset V2-01. Estimates (dark blue lines) together with the σ bounds (bright green lines) are depicted. The accelerometer bias converges faster along the gravity direction, which is mostly along the z-axis.

TABLE III
COMPARISON OF THE AVERAGE RUNNING TIME

Dataset | IKFoM-based implementation | Hand-derived implementation in [15]
V1-02   | . ms                       | . ms
V2-02   | . ms                       | . ms
V3-02   | . ms                       | . ms

Running time is recorded as the time consumed by one propagation step and one update step.
VIII. CONCLUSION
This paper proposed a canonical representation of robotic systems and developed a symbolic error-state iterated Kalman filter based on it. The canonical representation employs manifolds to represent the system states and uses the ⊞\⊟ and ⊕ operations to describe the system model. Based on the canonical representation of a robotic system, we showed the separation principle between the manifold-specific descriptions and the system-specific descriptions in a Kalman filter framework. This separation enables us to encapsulate manifold structures into Kalman filters by developing a C++ toolkit, facilitating the quick deployment of Kalman filters for generic robotic systems operating on manifolds. The proposed method and the developed toolkit are verified on a tightly-coupled lidar-inertial navigation system in three different scenarios.

APPENDIX
A. Partial differentiation on SO(3)

Lemma 1. If x, y ∈ SO(3) and u, v ∈ R³, then

∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u = A(((x ⊞ u) ⊕ v) ⊟ y)⁻ᵀ Exp(−v) A(u)ᵀ

where the operations ⊞, ⊟, and ⊕ are defined in (6) and A(·) is defined in (7).

Proof. Denote w = ((x ⊞ u) ⊕ v) ⊟ y; then

Exp(w) = y⁻¹ · x · Exp(u) · Exp(v).

Hence a small variation Δu in u causes a small variation Δw in w, subject to

Exp(w + Δw) = y⁻¹ · x · Exp(u + Δu) · Exp(v). (57)

Using the first-order fact Exp(u + Δu) = Exp(u) · (I + ⌊A(u)ᵀ Δu⌋), as shown in [33], the left-hand side of (57) becomes

Exp(w + Δw) = Exp(w) · (I + ⌊A(w)ᵀ Δw⌋)

and the right-hand side of (57) becomes

y⁻¹ · x · Exp(u + Δu) · Exp(v) = y⁻¹ · x · Exp(u) · (I + ⌊A(u)ᵀ Δu⌋) · Exp(v) = Exp(w) Exp(−v) · (I + ⌊A(u)ᵀ Δu⌋) · Exp(v).

Equating the two sides of (57) and using R⌊s⌋Rᵀ = ⌊Rs⌋ leads to

A(w)ᵀ Δw = Exp(−v) · A(u)ᵀ Δu

and, as a result,

∂(((x ⊞ u) ⊕ v) ⊟ y)/∂u = Δw/Δu = A(w)⁻ᵀ Exp(−v) A(u)ᵀ. ∎

B. Partial differentiation of compound manifolds
Lemma 2. Let x₁, y₁ ∈ S₁; x₂, y₂ ∈ S₂; u₁, v₁ ∈ R^{n₁} and u₂, v₂ ∈ R^{n₂}, where n₁, n₂ are the dimensions of S₁, S₂, respectively. Define the compound manifold S = S₁ × S₂ and its elements x = [x₁ᵀ x₂ᵀ]ᵀ ∈ S; y = [y₁ᵀ y₂ᵀ]ᵀ ∈ S; u = [u₁ᵀ u₂ᵀ]ᵀ ∈ R^{n₁+n₂} and v = [v₁ᵀ v₂ᵀ]ᵀ ∈ R^{l₁+l₂}, where l₁ = n₁ when S₁ is a Lie group and l₂ = n₂ when S₂ is a Lie group. Then

∂(((x ⊞_S u) ⊕_S v) ⊟_S y)/∂u = diag( ∂(((x₁ ⊞_{S₁} u₁) ⊕_{S₁} v₁) ⊟_{S₁} y₁)/∂u₁ , ∂(((x₂ ⊞_{S₂} u₂) ⊕_{S₂} v₂) ⊟_{S₂} y₂)/∂u₂ ).

Proof.
Define w = (( x (cid:1) S u ) ⊕ S v ) (cid:12) S y , then according tothe composition of operation (cid:1) , (cid:12) and ⊕ in (14), we have w = (( x (cid:1) S u ) ⊕ S v ) (cid:12) S y = (cid:18)(cid:18)(cid:20) x x (cid:21) (cid:1) S (cid:20) u u (cid:21)(cid:19) ⊕ S v (cid:19) (cid:12) S y = (cid:18)(cid:20) x (cid:1) S u x (cid:1) S u (cid:21) ⊕ S (cid:20) v v (cid:21)(cid:19) (cid:12) S y = (cid:20) ( x (cid:1) S u ) ⊕ S v ( x (cid:1) S u ) ⊕ S v (cid:21) (cid:12) S (cid:20) y y (cid:21) = (cid:20) (( x (cid:1) S u ) ⊕ S v ) (cid:12) S y (( x (cid:1) S u ) ⊕ S v ) (cid:12) S y (cid:21) (cid:44) (cid:20) w w (cid:21) As a result, the differentiation is ∂ w ∂ u = (cid:34) ∂ w ∂ u ∂ w ∂ u ∂ w ∂ u ∂ w ∂ u (cid:35) = (cid:34) ∂ w ∂ u ∂ w ∂ u (cid:35) = ∂ ((( x (cid:1) S u ) ⊕ S v ) (cid:12) S y ) ∂ u ∂ ((( x (cid:1) S u ) ⊕ S v ) (cid:12) S y ) ∂ u (cid:4) R EFERENCES[1] F. L. Markley, “Attitude error representations for kalman filtering,”
Journal of Guidance, Control, and Dynamics, vol. 26, no. 2, pp. 311–317, 2003.
[2] N. Trawny and S. I. Roumeliotis, “Indirect Kalman filter for 3D attitude estimation,” University of Minnesota, Dept. of Comp. Sci. & Eng., Tech. Rep., 2005.
[3] F. L. Markley and J. L. Crassidis, Fundamentals of Spacecraft Attitude Determination and Control. Springer, 2014, vol. 33.
[4] F. M. Mirzaei and S. I. Roumeliotis, “A Kalman filter-based algorithm for IMU-camera calibration: Observability analysis and performance evaluation,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1143–1156, 2008.
[5] J. Kelly and G. S. Sukhatme, “Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration,” The International Journal of Robotics Research, vol. 30, no. 1, pp. 56–79, 2011.
[6] J. Sola, “Quaternion kinematics for the error-state Kalman filter,” arXiv preprint arXiv:1711.02508, 2017.
[7] M. Kleinert and S. Schleith, “Inertial aided monocular SLAM for GPS-denied navigation,” 2010, pp. 20–25.
[8] A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007, pp. 3565–3572.
[9] M. Li and A. Mourikis, “High-precision, consistent EKF-based visual-inertial odometry,” The International Journal of Robotics Research, vol. 32, pp. 690–711, 2013.
[10] S. Lynen, M. W. Achtelik, S. Weiss, M. Chli, and R. Siegwart, “A robust and modular multi-sensor fusion approach applied to MAV navigation,” 2013, pp. 3923–3929.
[11] M. Bloesch, M. Burri, S. Omari, M. Hutter, and R. Siegwart, “Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback,” The International Journal of Robotics Research, vol. 36, no. 10, pp. 1053–1072, 2017.
[12] Z. Huai and G. Huang, “Robocentric visual-inertial odometry,” IEEE, 2018, pp. 6319–6326.
[13] J. A. Hesch, F. M. Mirzaei, G. L. Mariottini, and S. I. Roumeliotis, “A laser-aided inertial navigation system (L-INS) for human localization in unknown indoor environments,” IEEE, 2010, pp. 5376–5382.
[14] C. Qin, H. Ye, C. E. Pranata, J. Han, S. Zhang, and M. Liu, “LINS: A lidar-inertial state estimator for robust and efficient navigation,” IEEE, 2020, pp. 8899–8906.
[15] W. Xu and F. Zhang, “FAST-LIO: A fast, robust lidar-inertial odometry package by tightly-coupled iterated Kalman filter,” arXiv preprint arXiv:2010.08196, 2020.
[16] G. Lu and F. Zhang, “IMU-based attitude estimation in the presence of narrow-band noise,” IEEE/ASME Transactions on Mechatronics, vol. 24, no. 2, pp. 841–852, 2019.
[17] J. Lin, X. Liu, and F. Zhang, “A decentralized framework for simultaneous calibration, localization and mapping with multiple lidars,” 2020.
[18] G. P. Huang, A. I. Mourikis, and S. I. Roumeliotis, “A quadratic-complexity observability-constrained unscented Kalman filter for SLAM,” IEEE Transactions on Robotics, vol. 29, no. 5, pp. 1226–1243, 2013.
[19] S. Huang and G. Dissanayake, “Convergence and consistency analysis for extended Kalman filter based SLAM,” IEEE Transactions on Robotics, vol. 23, no. 5, pp. 1036–1049, 2007.
[20] M. S. Grewal, L. R. Weill, and A. P. Andrews, Global Positioning Systems, Inertial Navigation, and Integration. John Wiley & Sons, 2007.
[21] K. M. Lynch and F. C. Park, Modern Robotics. Cambridge University Press, 2017.
[22] A. R. Klumpp, “Apollo lunar descent guidance,” Automatica.
[23] IEEE, 2003, p. 1403.
[24] C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, “Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds,” Information Fusion, vol. 14, no. 1, pp. 57–77, 2013.
[25] C. Hertzberg, “A framework for sparse, non-linear least squares problems on manifolds,” 2008.
[26] V. Wüest, V. Kumar, and G. Loianno, “Online estimation of geometric and inertia parameters for multirotor aerial vehicles,” IEEE, 2019, pp. 1884–1890.
[27] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” IEEE, 2011, pp. 3607–3613.
[28] S. Agarwal, K. Mierle, and Others, “Ceres solver,” http://ceres-solver.org.
[29] R. Wagner, O. Birbach, and U. Frese, “Rapid development of manifold-based graph optimization systems for multi-sensor calibration and SLAM,” IEEE, 2011, pp. 3305–3312.
[30] G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31–43, 2010.
[31] M. Burri, M. Bloesch, Z. Taylor, R. Siegwart, and J. Nieto, “A framework for maximum likelihood parameter identification applied on MAVs,” Journal of Field Robotics, vol. 35, no. 1, pp. 5–22, 2018. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.21729
[32] R. M. Murray, Z. Li, and S. S. Sastry, A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
[33] F. Bullo and R. M. Murray, “Proportional derivative (PD) control on the Euclidean group,” in European Control Conference, vol. 2, 1995, pp. 1091–1097.
[34] C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, “Integrating generic sensor fusion algorithms with sound state representations through encapsulation of manifolds,” CoRR, vol. abs/1107.1119, 2011. [Online]. Available: http://arxiv.org/abs/1107.1119
[35] B. M. Bell and F. W. Cathey, “The iterated Kalman filter update as a Gauss-Newton method,” IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 294–297, 1993.
[36] J. Lin and F. Zhang, “Loam livox: A fast, robust, high-precision lidar odometry and mapping package for lidars of small FoV,” in 2020 IEEE International Conference on Robotics and Automation (ICRA)