Multi-Sensor State Estimation Fusion on Quadruped Robot Locomotion
Chen Yao, Zhenzhong Jia
Abstract—In this paper, we present an effective state estimation algorithm that combines information from various sensors (inertial measurement unit, joint encoders, camera and LIDAR) to balance the different frequencies and accuracies of these sensors during the locomotion of our quadruped robot StarDog. Of specific interest is the approach of preprocessing the series of sensor measurements, which yields robust data and improves system performance in both time and accuracy. In the IMU-centric leg odometry, which fuses inertial and kinematic data, the detection of the foot contact state and of lateral slip is considered to further improve the accuracy of the robot's base-link position and velocity. Before using NDT registration to align the point clouds, we meet its requirements by preprocessing the data and obtaining an initial localization in the map. A modular filter is also proposed, which fuses the leg odometry, based on the inertial-kinematic state estimator of the quadruped robot, with the LIDAR-based localization method. The algorithm has been tested on experimental data collected in an indoor building, with a motion capture system used to evaluate the performance; the experimental results demonstrate that our quadruped robot system achieves low error at a high output frequency.
Index Terms—quadruped robot, state estimation, data preprocessing, modular filter
I. INTRODUCTION

Quadruped robots are widely used in exploration, rescue, transportation, etc. [10]. Our purpose is to achieve autonomous navigation and provide rescue assistance in disaster-stricken building environments with a quadruped robot that can reliably cross obstacles and explore harsh unknown environments. In order to complete such tasks autonomously and effectively, the robot must localize itself accurately with the estimation algorithm; it must also be able to perceive the environment, build an accurate environmental map, and execute reliable operations. At the same time, increasing attention is paid to accurate and reliable state estimation, which supports the control system in the feedback loop by regulating parameters, planning trajectories, balancing the body, compensating lateral slip [15], etc. However, there still exist random errors and solution drift in the inertial sensor, impulsive shocks and vibrations caused by locomotion, and low-frequency sensors paired with the high-frequency demands of the control system, all of which make accurate and reliable state estimation a key challenge. We take advantage of a series of sensors, including inertial, kinematic, vision and LIDAR measurements, to present a modular state estimator for a quadruped robot in locomotion. To
The authors are with the Robotics Institute, Southern University of Science and Technology, Shenzhen, China. [email protected], [email protected]
Fig. 1. Left: the quadruped robot with 18 degrees of freedom in a test environment, equipped with different sensors. Right: the StarDog robot platform shown in RVIZ, sensing its surroundings by LIDAR.

the best of our knowledge, this paper is the first to combine leg odometry and NDT localization on a quadruped robot, as well as to preprocess a variety of sensors to improve the performance. The main contributions of this paper are as follows:
• A calibration between the camera and the robot base link, based only on a laser tracker and the camera, is proposed in this paper. The laser tracker provides high-accuracy data, is not affected by sight occlusion, and the procedure does not require the robot to move at all.
• We present a foot-contact detection method that mainly takes early or late contact into consideration. Lateral drift or slip is considered to compensate the velocity for the robot control system. The data fusion is based on an EKF and yields the leg odometry.
• By preprocessing the point clouds and the initial localization information, we improve the effectiveness of matching/alignment and can use the NDT method to obtain the transformation during the movement of the quadruped robot.
• We experimentally establish that fusing the NDT approach with leg odometry provides state estimation with low error and high frequency in locomotion.
The structure of this article is as follows: Sec. II reviews the state-of-the-art research in state estimation. The quadruped robot platform and sensor scheme are described in Sec. III. Our main algorithm is detailed in Sec. IV, which discusses the different modules and the processing methods for the sensors. The experimental evaluation is presented in Sec. V to demonstrate the performance, and the conclusion and future plans are given in Sec. VI.

II. RELATED WORK

Over the past few years, a good deal of remarkable work has been presented, and these quadruped robots reveal excellent mobility — such as the Boston Dynamics BigDog [18] and LS3 [14] robots, MIT Cheetah 3 [4], ETH ANYmal [1], and IIT HyQ [19] — along with a number of useful applications in complex environments.
State estimation has been implemented on quadruped robots in a series of significant publications.

Proprioceptive approaches to state estimation mainly focus on the estimator itself. Notably, ETH [6] proposed a full base state estimation based on the fusion of leg encoders and IMU data through an error-state EKF method [21]; joint kinematics was used to correct the state update, and the observability of the filter was analyzed. Although the Cheetah 3 work [4] mainly focuses on control and planning, it presents a robust estimator through a two-stage sensor fusion that decouples the orientation from the estimation of position and velocity.

State estimation relying only on proprioceptive sensors suffers from their sensor models, which lead to cumulative error growth and drift, and in turn to inaccurate maps. Typically, an estimate obtained by fusing only proprioceptive sensors cannot be used for navigation, so exteroceptive sensors are introduced for lower drift.

Exteroceptive sensors are widely used in Simultaneous Localization And Mapping (SLAM) for state estimation [7]. Specifically, cameras and visual odometry are not used in our proposed state estimator, since they often fail under dynamic vibration and varying light conditions. Instead, LIDAR has become popular and can sense precisely over long distances. One of the most popular approaches for point cloud registration is the Iterative Closest Point (ICP) algorithm [2], but it requires sufficient overlap between the two point clouds. The Normal Distributions Transform (NDT) [3] compares probabilities between the point cloud and the built map to obtain the transformation matrix. The LIDAR Odometry and Mapping (LOAM [26]) approach shows excellent, effective performance, and its two-way method robustly obtains both the odometry and the mapping. Lightweight and Ground-Optimized Lidar Odometry and Mapping (LeGO-LOAM [20]) extends LOAM with loop closure and a two-stage optimization.

Fusing different sensors can reliably improve the performance of state estimation. Chilian et al.
in [9] mainly use inertial, leg odometry and visual odometry on a slowly crawling six-legged robot. As Ma et al. describe in [14], [13], their system fuses stereo camera, inertial, kinematic and optional intermittent GPS measurements to maintain an accurate, modular six-degree-of-freedom pose estimate of the LS3 robot, with extensive tests showing about 1% error per distance traveled. In Fallon's paper [11], inertial, kinematic, LIDAR and stereo vision measurements are probabilistically fused to produce a single consistent estimate, named Pronto, for the Atlas robot; a particle filter is used for localization, and this estimator can achieve 2 cm of drift per 10 steps in quasi-static experiments. Pronto has also been implemented on HyQ and WALK-MAN [23], where the combination of the Pronto kinematic-inertial state estimator and LOAM performs low-drift, high-frequency estimation on the full-size humanoid robot WALK-MAN. Nobili et al. in [15], [16] combine inertial, kinematic, stereo vision and LIDAR measurements to produce a modular inertial-driven state estimator whose output can be fed directly to the controller; they also compare the performance of a MEMS IMU and a fiber-optic IMU. The recently proposed COCLO (Contact-Centric Leg Odometry) method [25] uses a Square Root Unscented Kalman Filter to fuse multiple proprioceptive sensors, and this contact-centric approach estimates velocity more precisely. A factor graph optimization method [24] has been presented that tightly fuses and smooths inertial, leg odometry and visual odometry measurements, while reducing the dependency on foot contact classification.

Inspired by the outstanding works mentioned above, our approach aims to present a feasible loosely-coupled EKF fusion scheme and provide robust state estimation for a quadruped robot.

Fig. 2. The hardware/software framework of StarDog, listing all the sensor specifications of the system; communication is sent and received through WiFi or the local network.

III.
SYSTEM OVERVIEW

In this chapter, a robust multi-sensor fusion system framework is built to meet our requirements. We then pay particular attention to the sensor scheme design, which includes reasonable sensor selection, notations, calibration and data processing.
A. System architecture
The StarDog is a modular electric quadruped robot, which supports torque, position and speed control. Fig. 1 shows a view of our StarDog robot. The platform weighs about kg while it can carry a 10 kg payload, and its overall size is about m × . m × . m. Each leg has 3 degrees of freedom, and at the same time achieves a large range of motion and high mobility; in particular, the knee joints can rotate 360 degrees.

The hardware and software framework of our StarDog is analyzed here and shown in Fig. 2. In order to facilitate real-time autonomous movement of the robot, our framework is divided into a remote-control side and a robot-control side. The remote side communicates with the robot through WiFi; we can observe the simulation model while using an Xbox controller to send commands to the real robot. In other words, the on-board NUC computer and the Nvidia TX2 communicate over a local network via ROS. The Nvidia TX2 acquires data from the ZED camera and the foot sensors for sensing, detection, local mapping, etc. The data from the IMU, LIDAR and joint encoders are mainly processed on the NUC computer, which handles locomotion control, state estimation and path planning.

Fig. 3. Sensor configuration for StarDog. Left: sensor mounting positions and the sensors' fields of view. Right: the sensor coordinate systems as shown in RVIZ.
B. Sensor Scheme Design
In order to obtain accurate information about the quadruped robot, a series of relatively stable sensors should be considered, including proprioceptive sensors (encoders, IMU and force sensors) and exteroceptive sensors (cameras and LIDARs).

All 12 leg joints of the quadruped robot are equipped with absolute encoders (RLS encoders), which are connected through EtherCAT and offer a high update frequency of 400 Hz and high resolution. Considering cost and practical use, our robot adopts the Xsens MTi-G-710 IMU to measure the base-link acceleration and angular velocity. Air pressure sensors detect the contact state of the robot's feet while also providing shock absorption and anti-skid performance. We use a VLP-16 LIDAR sensor to obtain high-precision distance point clouds for localization; its frequency is only about 10 Hz, though it can generate 30,000 points per second. To further enhance the performance and the fixed integration of the sensor system, the sensor coordinate systems are configured as shown in Fig. 3.
C. Sensor Notations
In this paper, the following mathematical notation is used. Matrices are represented by bold letters, and coordinate frames are denoted by lower-case italic letters. For example, T^w_b represents the homogeneous transformation of frame b with respect to frame w. The absolute orientation is represented as a quaternion q_w. The coordinate axes follow the right-hand rule. We set the robot base-link coordinate system at the center of mass; hence the z-axis points upward, perpendicular to the upper plane, and the x-axis points forward relative to the robot body, denoted T_b. The world frame is represented as T_w, and T_lidar and T_cam denote the LIDAR and camera frames.

D. Sensor Calibration
Measurement noise exists in every sensor, and the accumulated error is difficult to correct completely. Although the transformations between coordinate systems can easily be obtained from the CAD model and installation design, relatively accurate sensor information still cannot be obtained, due to factors such as assembly tolerances of the parts, material properties, and the impact of motion. In order to ensure the reliability of the raw input data, an appropriate calibration method for the intrinsic and extrinsic parameters of each type of sensor deserves serious consideration.
Fig. 4. Calibration method between the camera and robot base-link coordinates.
When building the local map or obtaining localization data, we can acquire precise environmental information through the camera and convert it to the global world coordinate system for further processing. The most important step is to know the transformation between the camera and the robot base link [8]. The Radian laser tracker, the camera and a purpose-designed calibration board are used here to reduce the influence of various possible noises and errors; at the same time, we can achieve a highly accurate coordinate relationship, as shown in Fig. 4.

A certain number of target balls are placed on the carefully designed calibration board and test board, so the relative pose relationship between the target balls and these boards can be seen in Fig. 5. The coordinates of the 6 target balls at different postures can be accurately obtained with the laser tracker, and we then determine the directions by the three-point criterion. Visual recognition is used to obtain the transformation between the camera and the calibration board through PnP matching on the input images of the left camera. When photos at different postures are input, the relative pose transformation is finally obtained through iterative calculation:

T^base_cam = T^base_bar · T^bar_laser · T^laser_bar · T^bar_board · T^board_cam    (1)

E. Sensor Data-Processing
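As a standalone illustration of the time-stamped message pairing described in this section (a simplified sketch of the behavior that ROS `message_filters` provides, not the actual ROS API), an approximate-time synchronization of several sensor streams can be written as:

```python
def approx_sync(streams, tol):
    """Greedy approximate-time pairing: for each message of the first
    (reference) stream, pick the closest-in-time message from every other
    stream and emit a tuple only when all partners lie within tol seconds.
    Each stream is a list of (timestamp, payload) sorted by timestamp."""
    groups = []
    for t0, m0 in streams[0]:
        group = [(t0, m0)]
        for s in streams[1:]:
            best = min(s, key=lambda tm: abs(tm[0] - t0))
            if abs(best[0] - t0) > tol:
                break
            group.append(best)
        else:  # every stream produced a partner within tolerance
            groups.append(tuple(group))
    return groups
```

In practice the IMU stream (400 Hz) is the reference and the slower foot-sensor and encoder streams are matched against it; the tolerance plays the role of the "slop" a software synchronizer allows.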
Data processing mainly involves sensor synchronization, made necessary by the large differences in frequency, bandwidth and interface signals. Here we mainly discuss the synchronization of the foot sensors, IMU and encoders to the same time event, which provides consistent data for state estimation. A software synchronization scheme is adopted, using the time synchronization that comes with ROS (Robot Operating System). With this method, we can realize synchronous output of messages from different sources under the same timestamp; it is also more flexible and inexpensive compared with hardware synchronization methods such as an FPGA.

IV. APPROACH

The autonomous navigation of the quadruped robot requires low-latency and accurate estimation of the state elements, especially the accuracy of the velocity estimate used by the controller, so care should be taken to limit drift error while maintaining a high output frequency. Our approach can realize a state estimator based only on the measurements of the proprioceptive sensors, i.e., the IMU
Fig. 5. Left: the actual target ball and calibration board; we record and randomly change the ball postures. Right: the corresponding specifications of the designed coordinates between the first ball and the corner of the calibration board.

and kinematics, at a high frequency. The external sensors, LIDAR and camera, can further improve the accuracy of the state measurement and ensure redundancy, but their frequencies can be very low. Therefore, it is necessary to weigh the trade-off between frequency and accuracy and to design a complementary framework combining these sensors.
A. Leg Odometry
In this section, we describe an inertial-kinematic estimator developed from [6]. The state elements of the quadruped robot's base link are defined as follows:

X = [ P^w_b   V^w_b   θ^w_b   b_a   b_ω ]

The 15 state elements (3D position P^w_b, velocity V^w_b and orientation θ^w_b) are all expressed in the world coordinate frame. The velocity can also be expressed in the base-link frame (when the robot does not move and the foot velocity is zero, the foot velocity V^w_f is equivalent to V^w_b). The orientation is given as Euler angles (a quaternion can be obtained through the exponential coordinates). At the same time, the EKF performs online calibration of the acceleration and angular velocity biases b_a and b_ω, so the acceleration and angular velocity data can be corrected in every update loop.

IMU estimation:
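A minimal numerical sketch of the IMU prediction step of Eq. (2) below, assuming NumPy and a standard SO(3) exponential map (the function names are ours, not from the paper):

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: rotation matrix for a rotation vector w (rad)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def imu_predict(C, v, p, w_mean, a_mean, dt):
    """One prediction step: attitude via the exponential map, then velocity
    and position integrated with the averaged (mean-value) rates."""
    C_new = C @ so3_exp(w_mean * dt)
    v_new = v + a_mean * dt
    p_new = p + v * dt + 0.5 * a_mean * dt ** 2
    return C_new, v_new, p_new
```

Here `w_mean` and `a_mean` stand for the averaged angular velocity and linear acceleration of two consecutive IMU frames, gravity compensation and bias subtraction being assumed to have already been applied.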
We use the mean value theorem of integrals [17] to predict the current pose from the angular velocity and acceleration of two consecutive IMU frames:

C^w_b(t + Δt) = C^w_b(t) · Exp(ω'(t) Δt)
V^w_b(t + Δt) = V^w_b(t) + a'(t) Δt
P^w_b(t + Δt) = P^w_b(t) + V^w_b(t) Δt + 0.5 a'(t) Δt²    (2)

where V^w_b and P^w_b are the IMU velocity and position, respectively, and C^w_b represents the attitude (rotation) matrix, which can be obtained from the quaternion by the exponential operation. Δt is the timestamp interval given by the IMU frequency. ω'(t) and a'(t) are taken as the average angular velocity and linear acceleration at time t, obtained as the mean of the values at times t and t + Δt.

Fig. 6. Contact state modes of the finite state machine (FSM).

Foot Estimation:
The foot sensors use AD sampling to obtain the corresponding air pressure values. Once a pressure value reaches the threshold for the vertical force component, the corresponding sensor state is identified.

Considering poor real surface conditions, the locomotion always needs reliable contact with the ground. The force-sensor-based method alone is too noisy, and the accuracy of the threshold cannot be guaranteed; likewise, the gait phase is only an idealized foot state generated from timing. Early or late contact will also lead to instability in walking locomotion. Therefore, it is necessary to detect the contact state more precisely by adding a contact event trigger [5].

Combining the contact detection of the foot sensors with the gait phase, a finite state machine (FSM) for each leg is defined in Fig. 6. We divide the contact state into 6 modes: contact and early swing, contact and swing, contact and late swing, swing and early contact, swing and contact, and swing and late contact. The black line indicates the planned gait phase, and the other modes are obtained from the comparison of the foot sensors against the gait phase. By fusing these data, we obtain accurate estimates of which feet are in contact with the ground.

Using the trot gait, the robot can walk forward on flat ground. The swing and support times are both set to 0.3 s in the actual experiment. Fig. 7 shows the relevant parameters (force, z-axis position and state) of the left front leg.
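A simplified sketch of the per-leg classification behind the Fig. 6 FSM, assuming a normalized phase variable and an early/late split parameter of our own choosing (the paper does not specify these details):

```python
def classify_contact(planned, measured, phase, split=0.5):
    """Classify one leg against the planned gait.
    planned/measured: booleans from the gait plan and the thresholded
    foot sensor; phase in [0, 1) is progress through the planned phase;
    split is an assumed early/late boundary.
    Returns (mode name, effective contact flag used by the estimator)."""
    if planned == measured:
        return ("contact", True) if measured else ("swing", False)
    if measured:
        # plan says swing but the foot is on the ground
        mode = "swing and late contact" if phase < split else "swing and early contact"
        return (mode, True)   # trust the sensor: the foot supports the body
    # plan says contact but the foot is in the air
    mode = "contact and late swing" if phase < split else "contact and early swing"
    return (mode, False)
```

The effective contact flag is what the kinematic estimator consumes, so a disagreement is always resolved in favor of the physical sensor rather than the timed plan.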
Fig. 7. Relevant states of the left front leg in the trot gait.

Leg Kinematics:
The robot base position can be computed from the joint angles by averaging over the legs that are in reliable contact with the ground. According to the characteristics of quadruped locomotion, we propose the following leg kinematic methods, which select the change between the joint kinematic data in a given frame to calculate the base position (see Alg. 1, Alg. 2). In Algorithm 1, the last frame is taken as the given frame at each timestamp, so we can obtain the change amount and transform the data from the base-link coordinate frame to the world frame. Adding it to the initial position yields the final position of the base link in world coordinates.
Algorithm 1 Kinematic estimation from each timestamp's data

Input: joint data α, foot sensor data s_fj, planned gait p_fj, IMU orientation I_w, timestamp i, leg j.
Set the initial startup position as P^w_b(K).
if ContactDetect(s_fj, p_fj) = true then
    mark the leg as in contact: FC_j = 1.
end if
Get each leg's position change at each timestamp: ΔP_bfj = F_kin(α_{i+1}) − F_kin(α_i).
Combine the contact states and each leg's change to obtain the position change of the robot base link: ΔP_bi = −Σ_j FC_j ΔP_bfj.
Transform the position change into the world frame at each timestamp: ΔP_Ki = C^w_b ΔP_bi = Exp(ω Δt) ΔP_bi.
Accumulate the position of the base link: P^w_b(K) += ΔP_Ki.
return P^w_b(K).

In Algorithm 2, the initial frame of each leg's contact with the ground is taken as the given frame at each timestamp. The base-link position is updated by treating the switch of each leg as a conversion volume.

Algorithm 2
Kinematic estimation from each leg-switch frame

Input: joint data α, foot sensor data s_fj, planned gait p_fj, IMU orientation I_w, timestamp i, leg j.
Set the initial startup position as P^w_b(K).
if ContactDetect(s_fj, p_fj) = true then
    mark the leg as in contact: FC_j = 1, and store the switch joint angle α_f = α_i.
end if
Get each leg's position change at each timestamp: ΔP_bfj = F_kin(α_{i+1}) − F_kin(α_f).
Obtain the position change of the robot base link: ΔP_bi = −Σ_j FC_j ΔP_bfj.
Transform the position change into the world frame at each timestamp: ΔP_Ki = C^w_b ΔP_bi.
if the foot state of the last frame is a switch then
    update the conversion volume: f_K += P_Ki.
end if
Compute the position of the base link: P^w_b(K) = f_K + P_Ki.
return P^w_b(K).

Alg. 1 mainly considers the change between the 2 nearest frames, so the error only exists between these 2 frames. In contrast, Alg. 2 uses the new frame and the initial frame related to the switched foot, so the error accumulates over a gait cycle. For low error and a reliable change amount, we choose Alg. 1 as the kinematic estimation method to obtain the robot base-link position.

Differentiating the Jacobian-based kinematics would amplify the base-link velocity error due to movement impacts; friction resistance, joint installation clearances, and sensor reading errors are also very influential factors. Here we propose a technique to correct the base-link velocity. The linear velocity of the robot base link takes into consideration not only the positions at different timestamps [22], but also the lateral slide in dynamic locomotion, so the accurate and smooth estimate is as follows:

V^w_b,K(t) = ( P^w_b,K(t) − P^w_b,K(t − nT) ) / (nT) + ω̄_I R_l    (3)

where R_l is the distance between the base link and the tip of the foot given by the kinematics, T represents the timestamp interval, and we choose n = 15 as the experimental parameter.

Leg Odometry:
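The corrected kinematic velocity of Eq. (3), which this leg-odometry fusion consumes, can be sketched as follows; the slip-compensation term is passed in as a precomputed vector, since its exact construction is not fully specified in the extracted text:

```python
import numpy as np

def base_velocity(positions, t_idx, n, T, slip_comp=None):
    """Eq. (3) sketch: finite difference of the kinematic base position over
    n timestamps of interval T, plus an optional lateral-slip compensation
    term (the IMU-rotated leg vector of the paper, precomputed here)."""
    v = (positions[t_idx] - positions[t_idx - n]) / (n * T)
    if slip_comp is not None:
        v = v + slip_comp
    return v
```

Averaging over n = 15 samples (as in the paper) smooths the differentiation noise at the cost of a short delay of n·T in the velocity estimate.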
The leg odometry is affected by the proprioceptive sensors: there are calculation errors, significant kinematic drift, and noise in the IMU readings and their integration. By making use of the reliable sensor readings and fusing this information to improve the performance, we design an EKF framework, inspired by [20], [21], to estimate the robot base-link state. Because the magnitudes of the state parameters differ greatly and tend to cause divergence or failure, the error values are used as the state variables, ensuring that the filter state remains a relatively small quantity. The error parameters are set as [δP δV δθ δb_a δb_ω]. Through the establishment of the prediction and update model equations, the data fusion framework is constructed to achieve a more accurate estimate. The global foot contact position cannot be obtained reliably, since the legs are torque-controlled rather than position-controlled, and we cannot get this data from the motion capture system; as a result, the errors of the joint kinematics and IMU provide the update of the state vector rather than the foot placement [6].

As shown in Fig. 8, a feedback correction scheme is selected to obtain the leg odometry: the correction calculated by the EKF is fed back directly into the IMU prediction model to compensate the state. This method further reduces drift and makes the calculated result more accurate.

Fig. 8. Flow chart of the fusion method for leg odometry.
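A minimal sketch of the feedback error-state idea in Fig. 8, reduced to the velocity component with scalar noise magnitudes of our own choosing; the real filter carries the full 15-dimensional error state:

```python
import numpy as np

class ErrorStateFilter:
    """Error-state feedback on velocity only: the IMU integrates the nominal
    state, the filter estimates a small error dv from the kinematic velocity
    measurement, and the correction is folded back into the nominal state at
    each update (so the error estimate resets to zero, staying small)."""
    def __init__(self, q=1e-3, r=1e-2):
        self.P = np.eye(3) * 1e-2   # error covariance
        self.Q = np.eye(3) * q      # process noise (IMU integration), assumed
        self.R = np.eye(3) * r      # measurement noise (leg kinematics), assumed

    def predict(self):
        self.P = self.P + self.Q

    def update(self, v_nominal, v_kinematic):
        y = v_kinematic - v_nominal          # innovation
        K = self.P @ np.linalg.inv(self.P + self.R)
        dv = K @ y
        self.P = (np.eye(3) - K) @ self.P
        return v_nominal + dv                # corrected nominal state
```

Because the correction is applied immediately, the filter never has to linearize around a large error, which is exactly the divergence problem the paper cites as the motivation for the error-state formulation.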
B. LIDAR-based Estimation
The leg odometry is obtained only in its initial motion coordinate frame (which we refer to as the odom coordinate system), but it drifts severely over time and leads to relatively large errors if no exteroceptive sensor is used to improve the performance; moreover, the rotation around the z-axis is not observable [6]. The camera and visual odometry are not used in our proposed state estimator, since they often fail in dynamic locomotion and low-light environments; the visual module only provides the initial localization and detection here. In the end, most of the effort is focused on the LIDAR.

In the following we define the LIDAR-based localization approach for estimating the robot base-link state with respect to a map frame. To deal with the information from the LIDAR sensor, we analyzed various 3D laser point cloud matching methods and chose the Normal Distributions Transform (NDT) registration to achieve pose estimation. The inputs include the laser point cloud, a motion prediction, the initial localization and the global map; these are processed by the NDT scan-alignment algorithm, which finally outputs the global state.

Laser point cloud matching:
The laser point cloud has almost 30,000 raw points per scan; in addition, point cloud distortions and cluttered points always exist in the scanning process. These interferences cause excessively long matching times and weak matches, and need to be handled by pre-filtering, which reduces the number of false alignments and balances matching efficiency against accuracy. The processing of the laser point cloud is as follows:
Fig. 9. Left: the raw point cloud in a simulation environment. Right: the point cloud after outlier removal.

A. Remove outlier point clouds
The raw point clouds can be divided into ground, cluster and outlier points according to environmental characteristics [20]. To compute the angle change among the extracted point clouds, points with a small angle are set as the ground point cloud [12]. After removing the ground, we cluster the point cloud by judging the angle of the vertical distance between two points to allocate labels. Furthermore, clusters with only a small number of points are defined as outlier point clouds and ignored. A comparison of the different processing results is shown in Fig. 9.

B.
Downsample point cloud
A large number of laser points also leads to matching delay and error, so it is necessary to carry out reasonable downsampling to reduce the number of processed points. After a series of tests and comparisons, we decided to keep nearly half of the processed point cloud, which retains the shape and details of the point cloud well.

C. Limit the point cloud to a fixed range
A point cloud that is too far away will lead to large distortion and false recognition, so only the laser points within the range of . − m are retained here.

Initial localization:
The global map is built through the mapping method using the pre-processed point clouds. Initial localization determines the initial pose of the robot base link relative to this global map after the robot starts, and there are different methods for obtaining this transformation matrix. In the simulation environment, the initial pose can be obtained directly by subscribing to the model state topic during the initialization process. In order to ensure that the initial pose can be obtained reliably and stably in both the simulation and the indoor real environment, we choose
AprilTag as an effective method. During mapping initialization, the pose of the tag image in the map coordinate system, T^map_cam, can be obtained. Similarly, we can obtain the pose of the tag image during motion initialization, T^odom_cam. Through the corresponding coordinate transformations, we can measure the relative initial pose from the odom to the map coordinate system, T^map_odom; the corresponding coordinate frames are shown in Fig. 10.

Fig. 10. The relationships between the coordinate systems.
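The map-to-odom pose follows by composing homogeneous transforms, T^map_odom = T^map_cam · (T^odom_cam)⁻¹; a sketch with hypothetical identity-rotation tag observations (the numbers are illustrative, not measured):

```python
import numpy as np

def inv_h(T):
    """Inverse of a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical AprilTag observations (identity rotations for brevity):
T_map_cam = np.eye(4)
T_map_cam[:3, 3] = [2.0, 1.0, 0.5]    # tag camera pose in the map frame
T_odom_cam = np.eye(4)
T_odom_cam[:3, 3] = [0.5, 0.0, 0.5]   # same observation in the odom frame

# map -> odom from the two observations of the same tag:
T_map_odom = T_map_cam @ inv_h(T_odom_cam)
```

The analytic inverse avoids a general matrix inversion and stays exactly orthonormal in the rotation block, which matters when the result seeds the NDT registration.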
From the initial localization we obtain the transformation T^map_odom, and the transformation T^odom_base can be obtained directly from the leg odometry in the odom coordinate system. Therefore, the leg odometry can be expressed in the map coordinate system as T^map_base for later processing and comparison.

NDT localization:
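A compact 2D sketch of the NDT idea used in this section: pre-filter the scan (range limiting and voxel downsampling, as in steps B and C), fit per-cell Gaussians to the map, and evaluate the alignment score that the registration maximizes, seeded by the leg-odometry prediction. The cell size, range bounds and voxel size are assumed values:

```python
import numpy as np

def prefilter(points, r_min=0.5, r_max=30.0, voxel=0.2):
    """Range-limit, then voxel-downsample (one point per occupied cell)."""
    r = np.linalg.norm(points, axis=1)
    pts = points[(r >= r_min) & (r <= r_max)]
    keys = np.floor(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]

def ndt_cells(map_pts, cell=1.0):
    """Fit a Gaussian (mean, inverse covariance) to map points per grid cell."""
    cells = {}
    keys = np.floor(map_pts / cell).astype(np.int64)
    for k in np.unique(keys, axis=0):
        pts = map_pts[(keys == k).all(axis=1)]
        if len(pts) >= 3:
            cov = np.cov(pts.T) + np.eye(2) * 1e-3   # regularized
            cells[tuple(k)] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(scan, pose, cells, cell=1.0):
    """Score a 2D scan under pose (x, y, yaw): sum of Gaussian likelihoods
    of each transformed point in its map cell (higher is better). Full NDT
    runs a Newton iteration over this score; here we only evaluate it."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    pts = scan @ R.T + np.array([x, y])
    score = 0.0
    for p in pts:
        c = cells.get(tuple(np.floor(p / cell).astype(np.int64)))
        if c is not None:
            mu, inv_cov = c
            d = p - mu
            score += np.exp(-0.5 * d @ inv_cov @ d)
    return score
```

Because the score is smooth in the pose, a good leg-odometry seed keeps the optimizer inside the correct basin of attraction, which is why the initial localization and motion prediction matter so much in practice.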
After selecting the pre-processed point cloud and obtaining the initial pose in the map, we need to provide a good motion prediction between two consecutive laser scans. The leg odometry measurement T^map_base,lo is used to calculate the increment and gradually accumulate the previous pose when we run the NDT matching method with the corresponding parameters.

C. Sensor Fusion
In order to balance accuracy and real-time performance for the state estimation, it is necessary to integrate the proprioceptive and exteroceptive approaches, ensuring a relatively high-frequency, high-precision output through the modular multi-sensor fusion method. This fusion framework also needs to further optimize the trajectory and provide smoothness to prevent matching failures. Gathering the sensor data from the measurement systems during robot locomotion, we correct the state estimate of the kinematic-inertial odometry with the LIDAR data association in a loosely coupled style. A summary of the sensor data and the state components each module contributes to the EKF fusion can be found in Table I, so we can clearly track the flow of information among these modules.
Fig. 11. Block diagram of the state estimation structure; the input data all come from the sensors, and the position, velocity and orientation of the robot base link in the map coordinate frame are obtained.
The multi-sensor fusion framework is shown in Fig. 11; the red boxes indicate the preprocessing of the different sensor data, and the flow of information among the different modules represents the transmission of the 6-DoF state estimate. At each iteration of the leg odometry, the IMU prediction is sent to the control system to minimize latency, and it also provides a relatively stable measurement. The LIDAR registration and the visual initial localization both run at much lower frequencies, so they are used to correct the global localization. Using the EKF filtering method, we finally obtain a relatively stable and accurate state estimation system.

In the state estimation architecture of our quadruped robot, we use different sensors and consider both frequency and accuracy. The overall output trajectory built from the sensor data is illustrated in Fig. 12. When higher-precision sensor data arrive, the data in the filter are updated and the subsequent measurement trajectory is recalculated; otherwise, the sensor data of relatively high accuracy continue to be used. In our loosely-coupled EKF method, the update step uses the data with the highest accuracy and the slowest frequency, through appropriate data combination and ordering.
Fig. 12. The input sensor measurements in the same time frame; the different sensor data can be incorporated into the fusion framework as shown.
V. EXPERIMENTAL EVALUATION

In this section we apply the above-mentioned methods to the quadruped robot. Through the built experimental system, which provides suitable ground-truth reference data, we can achieve an adequate experimental evaluation and reasonably reliable verification, and demonstrate that the performance of the algorithm meets our requirements.

TABLE I
SENSOR DATA AND STATE COMPONENTS

Sensor      | Frequency (Hz) | Position   | Velocity   | Orientation
IMU         | 400            | prediction | prediction | prediction
Leg encoder | 100            |            | update     |
Foot sensor | 100            |            | update     |
LIDAR       | 10             | update     |            | update
Camera      | 30             | update     |            |
A. Experimental System
The robot simulation platform and the complex environment are mainly built with ROS and Gazebo, and visualization is carried out in RVIZ. The Gazebo robot-state parameters are taken as the ground-truth data. In order to provide ground-truth data accurately and in real time for the physical robot, as well as to further tune and optimize the parameters of our algorithms, a NOKOV motion capture system is used, with a few markers attached to the upper deck of the robot to record the actual pose information. The simulation and actual experimental systems are shown in detail in Fig. 13.
Fig. 13. The simulation and actual experiment environments.
B. Experiment Error Evaluation
In order to measure the accuracy of the state estimation, it is necessary to calculate the dispersion of the relative error between the reference values and the calculated values. The root mean square error and the drift per unit distance are used for the experimental evaluation. The root mean square error (RMSE) is a widely used evaluation index for velocity and attitude. It is defined as the root mean square of the absolute error between the estimates and the real values over the given sampling times:
RMSE = \sqrt{\frac{1}{N}\sum_{k=1}^{N} |X_k - X'_k|^2}   (4)

Drift per Distance Traveled (DDT) is the evaluation index for the change of the displacement parameters. It is defined as the accumulated absolute position error normalized by the distance traveled over the given sampling times. The DDT error in a single direction is:

DDT_x = 100 \cdot \frac{\sum_{k=1}^{N} |x_k - x'_k|}{\sum_{k=1}^{N-1} |x_{k+1} - x_k|} = \frac{100\, e_x}{\sum_{k=1}^{N-1} \Delta x_k}   (5)

C. Experiment and Analysis

1)
Simulation Experiment for Sensor Fusion:
A keyboard is used to control the quadruped robot with a trot gait in the built simulation box; we then record the Gazebo measurements and obtain the fusion data for the performance evaluation. The sensor fusion algorithm is tested, and Fig. 14 shows the details during walking, which include the global map, the simulation box environment, the corresponding base-link trajectories of the different localization methods, as well as the comparisons of the output frequency and accuracy.
Fig. 14. Different evaluations of the state estimation, comparing accuracy and frequency in simulation.
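As a minimal sketch of the evaluation metrics defined in Eqs. (4) and (5), the helpers below (our own illustration, including the sample trajectory with the 0.2 m step used in the experiments) compute the RMSE and the DDT for a one-dimensional track:

```python
import numpy as np

# Illustrative helpers for the two evaluation metrics:
# RMSE between estimates and ground truth, and drift per
# distance traveled (DDT) as a percentage of path length.
def rmse(estimates, truth):
    e = np.asarray(estimates) - np.asarray(truth)
    return np.sqrt(np.mean(e ** 2))

def ddt(estimates, truth):
    est, gt = np.asarray(estimates), np.asarray(truth)
    error = np.sum(np.abs(est - gt))    # accumulated absolute error
    path = np.sum(np.abs(np.diff(gt)))  # total distance traveled
    return 100.0 * error / path         # percent drift per distance

# Sample 1-D trajectory, 0.2 m per cycle as in the trot experiment.
truth = [0.0, 0.20, 0.40, 0.60, 0.80]
est = [0.0, 0.21, 0.39, 0.62, 0.79]
rmse_val = rmse(est, truth)
ddt_val = ddt(est, truth)
```

Normalizing by path length is what makes DDT comparable across runs of different duration, unlike a raw accumulated error.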
By analyzing the tracking accuracy of the different localization methods, we can see that the moving trails are basically identical, but the data noise of the leg odometry is relatively large, while the sensor fusion achieves a satisfactory accuracy. The fusion method also further improves and maintains the output frequency. Therefore, both accuracy and frequency are effectively balanced by the sensor fusion approach.

2) Actual Experiment for Sensor Fusion:
The real experiment with the multi-sensor fusion approach is carried out using the trot gait of our quadruped robot. The remote control moves the robot in steps of 0.2 m of the mass center per cycle, with the leg order 1-4-2-3.

Fig. 15. Different localization trajectories compared in RVIZ; the details show the various experiment visualizations.

We transform the different localization data into the map coordinate system and then output the various trajectory comparisons during the robot motion experiment. The tracks include the IMU estimation, the leg kinematics, the leg odometry, the laser-based localization and the sensor fusion, as shown in Fig. 15. The transformation relationships between the different sensor data and coordinate systems are obtained as:

{}^{map}_{base}T = {}^{map}_{odom}T \cdot {}^{odom}_{base}T
{}^{map}_{imu}T = {}^{map}_{base}T \cdot {}^{base}_{imu}T   (6)

where {}^{map}_{odom}T is obtained directly from the initial localization through the camera, and {}^{base}_{imu}T represents the relationship between the IMU and the robot base-link. Similarly, combining the IMU and the leg kinematics in the map coordinate system, we can obtain the leg odometry in the map coordinate system.

We can also see the visualization result and the important frequency optimization. The leg odometry has the highest frequency but an obvious drift, while the LIDAR-based localization shows a reliable track with discontinuous lines. By combining the different data, we can further maintain the accuracy, promote the output frequency of the sensor fusion, and ensure a smooth trajectory.

After a reliable output frequency is obtained through fusion, more effort has to be focused on the accuracy of the state estimation. The position data obtained from the markers are only given in the motion capture coordinate system; to further compare the accuracy of the robot walking trajectory and conduct the error analysis, we project the marker data and the fusion data into a fixed coordinate system. In order to ensure the unity of the data processing and start from zero in the visualization, the coordinate frame of marker0 when the robot is in the initial state is chosen, and these state quantities are then converted into this initial marker0 coordinate system.
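The frame compositions of Eq. (6) can be sketched with 4x4 homogeneous transforms. The numeric yaws and translations below are made-up placeholders, not measured values:

```python
import numpy as np

# Sketch of Eq. (6): T_map_base = T_map_odom @ T_odom_base,
# then T_map_imu = T_map_base @ T_base_imu.
def make_T(yaw, t):
    """Homogeneous transform from a planar yaw rotation and a translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

T_map_odom = make_T(0.10, [1.0, 0.5, 0.0])   # e.g. camera initial localization
T_odom_base = make_T(0.05, [2.0, 0.0, 0.3])  # e.g. leg odometry
T_base_imu = make_T(0.00, [0.0, 0.0, 0.1])   # fixed IMU mounting offset

# Chain the frames exactly as in Eq. (6).
T_map_base = T_map_odom @ T_odom_base
T_map_imu = T_map_base @ T_base_imu
```

Composing left-to-right in this order keeps every pose expressed in the map frame, which is what allows the different localization sources to be plotted against each other.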
The corresponding coordinate systems are shown in Fig. 16.

Fig. 16. The relationship between the different transformation coordinate systems during the experiment.

We project the measurement information into the target marker coordinate system for the subsequent processing:

{}^{bar0}_{bar\text{-}i}T = ({}^{mc}_{bar0}T)^{-1} \cdot {}^{mc}_{bar\text{-}i}T
{}^{mc}_{odom}T = {}^{mc}_{bar0}T \cdot {}^{bar0}_{base}T \cdot {}^{base}_{odom}T
{}^{bar0}_{base}T = ({}^{mc}_{bar0}T)^{-1} \cdot {}^{mc}_{odom}T \cdot {}^{odom}_{base}T
{}^{bar0}_{imu}T = {}^{bar0}_{base}T \cdot {}^{base}_{imu}T
{}^{bar0}_{laser}T = ({}^{mc}_{bar0}T)^{-1} \cdot {}^{mc}_{odom}T \cdot {}^{odom}_{map}T \cdot {}^{map}_{laser}T   (7)

where {}^{mc}_{bar\text{-}i}T denotes the transformation of marker-i relative to the motion capture system in the different frames, {}^{bar0}_{base}T represents the relationship between marker0 on the designed board and the robot base-link, and the fixed transformation between the robot odom frame and the motion capture system is represented as {}^{mc}_{odom}T.

Fig. 18 shows the measurement results of the robot position and orientation relative to the marker0 coordinate system. Both the position and the orientation match the ground truth data closely when compared with our sensor fusion approach. The position along the z-axis and the rotation around the z-axis always exhibit large noise and non-negligible drift, due to the kinematic accumulation error as well as the unobservability of the yaw direction. The position accuracy along the x-axis and y-axis has been greatly improved, but it is still affected by the computation error of the leg kinematics, as well as the inevitable internal kinematic error from the whole-body locomotion, which can be dealt with by an essential movement calibration. Another major improvement lies in pitch and roll, which are mainly taken from the preprocessed IMU. As a result, our fusion algorithm matches the ground truth.

We compute the DDT for the position error and the RMSE for the orientation error while the quadruped robot trots for some time, including the comparison of the leg kinematics (Kin), the leg odometry (Kin+EKF), the LIDAR-based localization (LIDAR) and the sensor fusion (LIDAR+EKF).
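The key operation in Eq. (7) is inverting {}^{mc}_{bar0}T before composing, which re-expresses every pose in the marker0 frame. A sketch under assumed identity rotations and placeholder translations:

```python
import numpy as np

# Sketch of the Eq. (7) projection, e.g.
#   T_bar0_base = inv(T_mc_bar0) @ T_mc_odom @ T_odom_base.
def inv_T(T):
    """Invert a rigid homogeneous transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Placeholder transforms (identity rotations, made-up translations).
T_mc_bar0 = np.eye(4); T_mc_bar0[:3, 3] = [0.3, 0.2, 0.0]   # marker0 in mocap frame
T_mc_odom = np.eye(4); T_mc_odom[:3, 3] = [0.1, 0.0, 0.0]   # odom origin in mocap frame
T_odom_base = np.eye(4); T_odom_base[:3, 3] = [1.0, 0.0, 0.4]

# Base-link pose expressed in the marker0 frame.
T_bar0_base = inv_T(T_mc_bar0) @ T_mc_odom @ T_odom_base
```

Using the closed-form rigid inverse (transpose the rotation, rotate and negate the translation) avoids a numerically more expensive general 4x4 inverse.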
The error accuracy of the above different methods relative to the motion capture measurements is also shown in Fig. 17. The results show that the maximum position error is less than cm and the angle error is less than 1 degree; these errors are below the performance indices of the requirement. Therefore, the sensor fusion approach can effectively provide state estimation information.

Fig. 17. Left: DDT of the different robot position estimates for the trot logs. Right: RMSE of the different robot velocity estimates for the trot logs.

VI. CONCLUSION
In this paper, we introduced a novel and modular state estimation algorithm that balances frequency and accuracy for a quadruped robot. It fuses inertial, kinematic, visual and LIDAR data with an Extended Kalman Filter, and a series of robust and effective preprocessing approaches for the different sensor measurements is proposed. We tested the fusion algorithm on the experimental data of our StarDog robot, and a motion capture system is used to evaluate the performance. The proposed algorithm meets the requirements with consistently low-error and high-frequency estimation.

For future work, we will port the system to the onboard computer; we also intend to implement factor graph optimization for the state estimation, as well as evaluate our algorithm in outdoor environments with dynamic shocks.

ACKNOWLEDGMENT
This work is supported by the Legged Robotics Group (LRG) of Harbin Institute of Technology, Shenzhen, and the Robot Mobility and Manipulation (ROMA) Lab at Southern University of Science and Technology.

REFERENCES

[1] C. Dario Bellicoso, Marko Bjelonic, Lorenz Wellhausen, Kai Holtmann, Fabian Gunther, Marco Tranzatto, Peter Fankhauser, and Marco Hutter. Advances in real-world applications for legged robots. Journal of Field Robotics, 35(8):1311-1326, 2018.
[2] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.
[3] P. Biber and W. Strasser. The normal distributions transform: a new approach to laser scan matching. In Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), volume 3, pages 2743-2748, 2003.
[4] Gerardo Bledt, Matthew J. Powell, Benjamin Katz, Jared Di Carlo, Patrick M. Wensing, and Sangbae Kim. MIT Cheetah 3: Design and Control of a Robust, Dynamic Quadruped Robot. In , pages 2245-2252, 2018.
[5] Gerardo Bledt, Patrick M. Wensing, Sam Ingersoll, and Sangbae Kim. Contact Model Fusion for Event-Based Locomotion in Unstructured Terrains. In , pages 4399-4406, 2018.
[6] Michael Bloesch, Marco Hutter, M. H. Hoepflinger, Stefan Leutenegger, and Roland Siegwart. State estimation for legged robots - consistent fusion of leg kinematics and IMU. In Robotics: Science and Systems, 2012.
[7] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard. Past, present, and future of simultaneous localization and mapping: toward the robust-perception age. IEEE Transactions on Robotics, 32(6):1309-1332, 2016.

Fig. 18. Different track comparison results for robot position and velocity.

[8] Marco Camurri, Stephane Bazeille, Darwin G. Caldwell, and Claudio Semini. Real-time depth and inertial fusion for local SLAM on dynamic legged robots. In , pages 259-264, San Diego, CA, USA, 2015. IEEE.
[9] Annett Chilian, Heiko Hirschmüller, and Martin Görner. Multisensor data fusion for robust pose estimation of a six-legged walking robot. In , pages 2497-2504, 2011.
[10] Jeffrey Delmerico, Stefano Mintchev, Alessandro Giusti, Boris Gromov, and Davide Scaramuzza. The current state and future outlook of rescue robotics. Journal of Field Robotics, 36(7), 2019.
[11] Maurice F. Fallon, Matthew Antone, Nicholas Roy, and Seth Teller. Drift-free humanoid state estimation fusing kinematic, inertial and LIDAR sensing. In , pages 112-119, 2014.
[12] Shinpei Kato, Eijiro Takeuchi, Yoshio Ishiguro, Yoshiki Ninomiya, Kazuya Takeda, and Tsuyoshi Hamada. An Open Approach to Autonomous Vehicles. IEEE Micro, 35(6):60-68, 2015.
[13] Jeremy Ma, Max Bajracharya, Sara Susca, Larry Matthies, and Matt Malchano. Real-time pose estimation of a dynamic quadruped in GPS-denied environments for 24-hour operation. The International Journal of Robotics Research, 35(6):631-653, 2016.
[14] Jeremy Ma, Sara Susca, Max Bajracharya, Larry Matthies, Matt Malchano, and Dave Wooden. Robust multi-sensor, day/night 6-DOF pose estimation for a dynamic legged vehicle in GPS-denied environments. In , pages 619-626, St Paul, MN, USA, 2012. IEEE.
[15] Simona Nobili, Marco Camurri, Victor Barasuol, Michele Focchi, and Maurice Fallon. Heterogeneous Sensor Fusion for Accurate State Estimation of Dynamic Legged Robots. In Robotics: Science and Systems, 2017.
[16] Simona Nobili, Raluca Scona, Marco Caravagna, and Maurice Fallon. Overlap-based ICP tuning for robust localization of a humanoid robot. In , pages 4721-4728, Singapore, 2017. IEEE.
[17] Tong Qin, Peiliang Li, and Shaojie Shen. VINS-Mono: a robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics, 34(4):1004-1020, 2018.
[18] Marc Raibert, Kevin Blankespoor, Gabriel Nelson, and Rob Playter. BigDog, the Rough-Terrain Quadruped Robot. IFAC Proceedings Volumes, 41(2):10822-10825, 2008.
[19] C. Semini, J. Buchli, M. Frigerio, T. Boaventura, and D. G. Caldwell. HyQ - a dynamic locomotion research platform. In Int'l Workshop on Bio-Inspired Robots, 2011.
[20] Tixiao Shan and Brendan Englot. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4758-4765. IEEE, 2018.
[21] Joan Solà. Quaternion kinematics for the error-state Kalman filter. arXiv preprint arXiv:1711.02508, 2017.
[22] Vignesh Sushrutha Raghavan, Dimitrios Kanoulas, Chengxu Zhou, Darwin G. Caldwell, and Nikos G. Tsagarakis. A Study on Low-Drift State Estimation for Humanoid Locomotion, Using LiDAR and Kinematic-Inertial Data Fusion. In , pages 1-8, Beijing, China, 2018. IEEE.
[23] Paweł Wawrzyński, Jakub Możaryn, and Jan Klimaszewski. Robust estimation of walking robots velocity and tilt using proprioceptive sensors data fusion. Robotics and Autonomous Systems, 66:44-54, 2015.
[24] David Wisth, Marco Camurri, and Maurice Fallon. Robust Legged Robot State Estimation Using Factor Graph Optimization. IEEE Robotics and Automation Letters, 4(4):4507-4514, 2019.
[25] Shuo Yang, Hans Kumar, Zhaoyuan Gu, Xiangyuan Zhang, Matthew Travers, and Howie Choset. State Estimation for Legged Robots Using Contact-Centric Leg Odometry. arXiv:1911.05176 [cs, eess], 2019.
[26] Ji Zhang and Sanjiv Singh. Low-drift and real-time lidar odometry and mapping.