Sensor-Movement-Robust Angle Estimation for 3-DoF Lower Limb Joints Without Calibration
Chunzhi Yi, Feng Jiang, Zhiyuan Chen, Baichun Wei, Hao Guo, Xunfeng Yin, Fangzhuo Li, Chifu Yang
Abstract—Inertial measurement unit (IMU)-based 3-DoF angle estimation methods for lower limb joints have been studied for decades; however, calibration motions and/or careful sensor placement are still necessary due to the challenges of real-time application. This study proposes a novel sensor-movement-robust 3-DoF method for lower-limb joint angle estimation without calibration. A real-time optimization process, based on a feedback iteration progress that identifies the three joint axes of a 3-DoF joint, is presented together with a reference frame calibration algorithm, and a safeguarded strategy is proposed to detect and compensate for the errors caused by sensor movements. The experimental results obtained from a 3-DoF gimbal and ten healthy subjects demonstrate promising performance on 3-DoF angle estimation. Specifically, the experiments on ten subjects were performed with three gait modes and a 2-min level walking trial. The root mean square error is below 2 deg for level walking and 5 deg for the other two gait modes. The result of the 2-min level walking trial shows our algorithm's stability over a long run. The robustness against sensor movement is demonstrated through data from multiple sets of IMUs. In addition, results from the 3-DoF gimbal indicate that the accuracy of 3-DoF angle estimation can be improved by 84.9% with our reference frame calibration algorithm. In conclusion, our study proposes and validates a sensor-movement-robust 3-DoF angle estimation method for lower-limb joints based on IMUs. To the best of our knowledge, our approach is the first experimental implementation of IMU-based 3-DoF angle estimation for lower-limb joints without calibration.
Index Terms—Analytical-based calibration, absolute orientation estimation error, biomedical measurement, error compensation, inertial measurement unit (IMU), lower-limb joint angle estimation, root mean square (RMS), self-aligned, sensor-movement-robust, three degrees of freedom (3-DoF), 3-DoF gimbal.
I. INTRODUCTION

The need for real-time human motion measurement in pathological human movement analysis [1], stability evaluation of locomotion [2], virtual reality systems [3], and human-robot interaction [4] has driven researchers to develop novel tracking techniques. Indoor motion capture
C. Yi, H. Guo and C. Yang are with the School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, 150001 China (e-mail: [email protected] (C. Yi)). Z. Chen is with the School of Computer Science, University of Nottingham, Malaysia. X. Yin and F. Li are with the School of Mechanical Engineering, Harbin Engineering University, Harbin, Heilongjiang, China. B. Wei and F. Jiang are with the School of Computer, Harbin Institute of Technology, Harbin, Heilongjiang, 150001 China, and Pengcheng Laboratory, Shenzhen, Guangdong, China (e-mail: [email protected] (F. Jiang)).

systems (e.g. optical motion capture systems [5] and magnetic resonance systems based on imaging methods [6], [7]), which utilize image processing techniques, are accurate enough to serve as a gold standard. However, they require controlled laboratory settings, trained staff and costly facilities; such drawbacks limit their use in real-time application scenarios. For real-time application, inertial measurement units (IMUs) with multi-axis gyroscopes, accelerometers and magnetometers have been widely used to estimate hip, knee and ankle angles [8], [9], [10]. To estimate 3-degree-of-freedom (3-DoF) angles for lower-limb joints, IMU-based angle estimation algorithms can be decoupled into two steps. The first step is to estimate the absolute orientation of the IMUs. The data measured by an IMU are described as local vectors in a sensor-fixed frame, [s_i]. Herein, orientation tracking techniques were developed to transform the vectors from [s_i] to the earth frame in order to describe all the measurements in the same coordinate frame. Having obtained the absolute orientations of the IMUs placed on two adjacent segments, the second step is to infer joint angles by developing body-fixed frames using biomechanical constraints. In this research, in order to develop a novel sensor-movement-robust 3-DoF method for lower limb joint angle estimation without calibration, challenges in both steps need to be solved.
Firstly, due to the diverse characteristics of the measurements from each IMU, the absolute orientations of IMUs estimated by orientation tracking techniques are actually described in different reference frames [11], [12], rather than in the earth frame. This inevitably leads to a large error in the resulting angle estimation. Currently used methods for calibrating reference frames suffer from either a linear approximation of "the time-varying deviation" between reference frames [11] or rough calibration metrics [12]. Thus, a calibration method capable of providing a more accurate compensation for the reference frame deviation needs to be developed with comprehensive metrics. In addition, most works on sensor-to-body alignment for 3-DoF lower-limb joints depend on functional calibration postures. Other than being cumbersome, such postures can introduce additional errors if subjects cannot perform them accurately. Thus, it is necessary to develop a method for estimating 3-DoF joint angles without using functional calibration postures. Moreover, when the muscles on which the IMUs are mounted oscillate severely, or even when the sensors are moved with respect to their mounted segments, how to detect the movements and re-align the IMUs is still a challenge under real-application scenarios. Online detection and a safeguarded strategy against sensor movements should be included in the algorithm. Aiming at solving the problems above, we developed a sensor-movement-robust algorithm for estimating 3-DoF angles of lower-limb joints without using calibration postures. The main contributions of this paper are summarized as follows:
• To the best of our knowledge, this is the first study to realize real-time detection and correction of sensor movements during the progress of estimating lower limb joint angles.
• A novel 3-DoF geometric constraint of lower-limb joints is proposed, with which joint axes can be further estimated without employing functional calibration postures.
• A pointwise calibration method, which utilizes the fused calibrations of magnetic field and gravity vectors, is proposed to effectively overcome the problem of time-varying deviation between reference frames and ensure the accuracy of the whole algorithm.

This paper is organized as follows: related works are presented in Section II. Section III details the methodology, and the experiments are explained in Section IV. Results and discussions of the entire research work are given in Section V. Section VI concludes the whole study.

II. RELATED WORK
Although the IMU-based angle estimation technique has been studied for decades, severe challenges remain in estimating 3-DoF lower-limb joint angles for real-time application. One challenge is that the second step of currently used techniques, developing body-fixed frames, still suffers from either a set of cumbersome predefined functional calibration procedures or a carefully aligned body-to-sensor relationship, usually associated with body landmarks defined by the International Society of Biomechanics (ISB) recommendations [13]. In [8] and [9], the coordinate axes of the sensor-fixed frames were assumed to be collinear with the joint axes; such an assumption makes the accuracy depend heavily on IMU alignment. Picerno et al. [10] proposed an analytical-based calibration method using a calibration device to identify body landmarks. Similarly, several calibration procedures were proposed to transform sensor measurements into orientations of bone-embedded anatomical frames [14–17]. However, it was reported that the repeatability of analytical-based calibration methods was noticeably worse than that of functional calibration [18, 19]. Other works [20–22] proposed various functional calibration procedures, in which subjects were asked to perform a set of pre-determined tasks to define the orientation of each biological axis with respect to the sensor-fixed frames. Functional calibration showed robustness towards the execution of calibration movements [15] and the load applied to the joint [19, 23]. A functional calibration with passive movements was introduced by Favre et al. [24] in order to improve robustness. Recently, Valencia et al. [25] proposed an IMU-to-body alignment method where an initial upright posture needs to be performed by subjects to align a coordinate axis with gravity.
However, some additional strategies, such as accurate sensor placement or standing along the direction of the geomagnetic field, still need to be adopted to align the other two coordinate axes. In order to avoid the need for such calibration procedures for 3-DoF joint angle estimation, related work on estimating angles of joints with fewer DoFs is encouraging. Seel et al. [26] proposed a 1-DoF joint angle estimation method for the knee and ankle, which fuses the integration of the angular rate along the joint axis with the inclination of the acceleration, based on exploiting the kinematic constraints of a hinge joint. In [27], Muller et al. extended Seel's work towards 2-DoF joints, in which the absolute orientations of the IMUs are required. It should be noted that among all the previous work on estimating lower-limb joint angles, none of the published work was proven to be robust against sensor movements after the calibration procedures. Although Muller's and Seel's methods were reported to be robust against skin-artifact movements, no published work has experimentally validated how to detect and compensate for large sensor movements. Another challenge is that the accuracy of estimating the IMUs' absolute orientations has a huge effect on the performance of angle estimation. The error of estimating the IMUs' absolute orientations with motion tracking algorithms causes the vectors described in sensor-fixed frames to be transformed into reference frames rather than into the earth frame. The reference frames of IMUs mounted on different segments vary due to the IMUs' individual signal corruptions. Brennan et al. [11] validated to what extent the absolute orientation estimation error can reduce the accuracy of 3-DoF angle estimation. Therefore, in our research, in order to reduce the error caused by the deviation between the two reference frames, a reference frame calibration algorithm should be included in the 3-DoF joint angle estimation. Favre et al. [28] defined gravity as the Z axis of each reference frame.
The deviation between the two reference frames was calibrated by unifying the projected angular rates of each IMU in the horizontal plane while the subjects performed a hip abduction and adduction. However, due to changing magnetic distortion and movement-caused acceleration, the deviation between the two reference frames is time-varying, and the calibration in that work cannot be updated over time because of the calibration posture. Based on Favre's single calibration method, Brennan et al. [11] performed the calibration procedure before and after data collection in order to linearly interpolate the calibration angle over time. The accuracy, represented by root mean square errors, indicated relatively large errors for such an approximation. Vitali et al. [12] proposed a calibration algorithm for estimating the 3-DoF knee angle that dynamically calculates a "correction" direction cosine matrix, assuming that the flexion/extension axis estimated by Seel's 1-DoF joint angle estimation method [26] should be the same in both reference frames. Although Vitali's work provides some encouragement, the 1-DoF joint axis estimated by [26] is based on the hinge-joint approximation, which is not accurate enough for 3-DoF lower-limb joints. Thus, a comprehensive metric of the deviation between the reference frames should be given. Motivated by the challenges identified in the literature review, we show how accurately the 3-DoF angles of lower-limb joints can be estimated without performing pre-defined postures or careful alignment. A novel numerical optimization-based estimation strategy is proposed, in which an algorithm is embedded to detect sensor movements and compensate for the consequent errors. In our experiments, IMU signals recorded from ten healthy subjects and a 3-DoF gimbal were used to evaluate the potential of the designed 3-DoF angle estimation algorithm, and the errors from each step of the algorithm were analyzed separately.
The promising results of this study will aid the future development of IMU-based motion measurement for gait-related applications.

III. METHODOLOGY
In this research, a sensor-movement-free estimation of the orientation relationship between the two segments on either side of a 3-DoF joint, without introducing calibration movements, is proposed. More specifically, a point-wise reference frame calibration combined with an orientation tracking algorithm is first used to estimate the IMUs' absolute orientations in the same reference frame. Then, the biological axes of the lower-limb joints are estimated through the geometric constraints of each joint to develop the alignment between body-fixed frames and sensor-fixed frames. In this section, the principles of our algorithm are presented separately in III-A and III-B. The implementation is concretely described in III-C.
A. Calibration for Two Reference Frames
Firstly, the absolute orientation of each IMU is calculated by an improved complementary filter proposed in our previous work [29] to dynamically fuse the readings from the gyroscope, accelerometer and magnetometer. As stated above, due to the different data characteristics of the signals of the two IMUs mounted on each segment, the absolute orientations of the two IMUs are described in two different reference frames, [g1] and [g2], which can be represented by the quaternions q_{g1s1} and q_{g2s2}. To compensate for the errors caused by the difference between the two reference frames, a common reference frame should be employed, which amounts to estimating a correction quaternion from [g1] to [g2]. Without movement-caused acceleration, the acceleration vectors a_1 and a_2 should be identical in the common reference frame, which contributes to the construction of the correction quaternion, given by:

\[ q_{acc} = \cos\!\left(\frac{\theta_{acc}}{2}\right) + \sin\!\left(\frac{\theta_{acc}}{2}\right)\cdot\begin{bmatrix} 0 \\ W_{acc} \end{bmatrix}, \quad W_{acc} = \frac{(q_{g_1 s_1} \otimes a_1) \times (q_{g_2 s_2} \otimes a_2)}{\left\| (q_{g_1 s_1} \otimes a_1) \times (q_{g_2 s_2} \otimes a_2) \right\|}, \quad \theta_{acc} = \arcsin\frac{\left\| (q_{g_1 s_1} \otimes a_1) \times (q_{g_2 s_2} \otimes a_2) \right\|}{\| a_1 \| \, \| a_2 \|} \tag{1} \]

where W_{acc} is the rotation axis, θ_{acc} is the rotation angle and q_{acc} is the correction quaternion calculated from gravity. Similarly, the identical magnetic field vectors m_1 and m_2, in the absence of magnetic distortion, contribute to another correction quaternion, given by:

\[ q_{mag} = \cos\!\left(\frac{\theta_{mag}}{2}\right) + \sin\!\left(\frac{\theta_{mag}}{2}\right)\cdot\begin{bmatrix} 0 \\ W_{mag} \end{bmatrix}, \quad W_{mag} = \frac{(q_{g_1 s_1} \otimes m_1) \times (q_{g_2 s_2} \otimes m_2)}{\left\| (q_{g_1 s_1} \otimes m_1) \times (q_{g_2 s_2} \otimes m_2) \right\|}, \quad \theta_{mag} = \arcsin\frac{\left\| (q_{g_1 s_1} \otimes m_1) \times (q_{g_2 s_2} \otimes m_2) \right\|}{\| m_1 \| \, \| m_2 \|} \tag{2} \]

However, due to the corruption of the acceleration and magnetometer readings, neither of the correction quaternions calculated above is accurate enough to represent the rotational relationship between [g1] and [g2]. Herein, a weighted sum of these two correction quaternions is used as a fusion.
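The correction-quaternion construction above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the function names are ours, the fused weights are fixed placeholders (in the full algorithm they are optimized, as described later in Section III-C), and the fused quaternion is renormalized, which the paper does not state explicitly.

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    # Rotate vector v by unit quaternion q: q ⊗ [0, v] ⊗ q*.
    qv = np.concatenate(([0.0], v))
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), qc)[1:]

def correction_quat(v_ref, v_obs):
    # Axis-angle quaternion rotating v_obs onto v_ref (the Eq. (1)/(2) pattern):
    # rotation axis = normalized cross product, angle = angle between vectors.
    axis = np.cross(v_obs, v_ref)
    n = np.linalg.norm(axis)
    if n < 1e-12:                       # already aligned
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis /= n
    cosang = np.dot(v_ref, v_obs) / (np.linalg.norm(v_ref) * np.linalg.norm(v_obs))
    theta = np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * axis))

def fuse(q_acc, q_mag, k_acc=0.5, k_mag=0.5):
    # Weighted sum of the two correction quaternions (Eq. (3)), renormalized.
    q = k_acc * q_acc + k_mag * q_mag
    return q / np.linalg.norm(q)
```

For example, if the two frames disagree by a 0.2 rad tilt about the x axis, `correction_quat` recovers a quaternion that maps the tilted gravity observation back onto the reference one.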
\[ q_{corr} = k_{mag} \cdot q_{mag} + k_{acc} \cdot q_{acc} \tag{3} \]

To calculate the angles of a 3-DoF joint, a detailed description of the geometric constraints is presented next, starting from a 1-DoF joint and then extending to the 3-DoF case.

B. Sensor-to-Body Alignment Based on Geometric Constraints

1) Geometric Constraints of a 1-DoF Joint:
For lower-limb joints, there is always a main axis around which the joint rotates most of the time during gait cycles. The main axis usually corresponds to flexion/extension for the hip and knee, and dorsiflexion/plantarflexion for the ankle. Simplifying a lower-limb joint as a hinge joint, the main axis can be estimated through its geometric constraint based on the method introduced by Seel et al. [26], which is given by:

\[ \| \omega_1 \times j_{D1} \| - \| \omega_2 \times j_{D2} \| = 0 \tag{4} \]

where ω_1, ω_2 are the angular rate vectors of each segment with respect to the sensor-fixed frames [s1] and [s2] respectively, and j_{D1}, j_{D2} are the same unit joint axis described in [s1] and [s2] respectively. According to [26], the geometric constraint of the joint position vectors can be exploited as:

\[ \left\| a_1 - \dot{\omega}_1 \times o_1 - \omega_1 \times (\omega_1 \times o_1) \right\| - \left\| a_2 - \dot{\omega}_2 \times o_2 - \omega_2 \times (\omega_2 \times o_2) \right\| = 0 \tag{5} \]

where a_1, a_2 and ω_1, ω_2 are the readings of the accelerometers and gyroscopes mounted on each segment, respectively, and o_1, o_2 are the vectors from the origin of each sensor frame to the rotation center. The rotation matrix R_{b_i s_i} between the body-fixed frame [b_i] and the sensor-fixed frame [s_i] is constructed by letting x_{b_i s_i} coincide with the joint axis, which is assumed to be perpendicular to the sagittal plane:

\[ x_{b_i s_i} = j_{Di}, \quad i = 1, 2 \tag{6} \]
\[ z_{b_i s_i} = \frac{x_{b_i s_i} \times o_i}{\| x_{b_i s_i} \times o_i \|}, \quad y_{b_i s_i} = z_{b_i s_i} \times x_{b_i s_i} \tag{7} \]
\[ R_{b_i s_i} = \left[ \, x_{b_i s_i} \;\; y_{b_i s_i} \;\; z_{b_i s_i} \, \right] \tag{8} \]
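The hinge constraint of Eq. (4) can be minimized over the two axis directions in a least-squares sense. The sketch below is a simplified stand-in for the method of [26]: the axes are parameterized by spherical coordinates so they stay unit-length, and a forward-difference Jacobian is used instead of the analytic one. The function names and the synthetic-data test are ours.

```python
import numpy as np

def sph_to_vec(phi, theta):
    # Unit vector from spherical coordinates (inclination phi, azimuth theta).
    return np.array([np.cos(phi) * np.cos(theta),
                     np.cos(phi) * np.sin(theta),
                     np.sin(phi)])

def residuals(x, w1, w2):
    # Hinge constraint ||w1 x j1|| - ||w2 x j2|| = 0 per sample, Eq. (4).
    j1 = sph_to_vec(x[0], x[1])
    j2 = sph_to_vec(x[2], x[3])
    return (np.linalg.norm(np.cross(w1, j1), axis=1)
            - np.linalg.norm(np.cross(w2, j2), axis=1))

def estimate_axes(w1, w2, x0, iters=30, eps=1e-6):
    # Gauss-Newton on the stacked residuals with a numerical Jacobian.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residuals(x, w1, w2)
        J = np.empty((len(r), 4))
        for k in range(4):
            d = np.zeros(4)
            d[k] = eps
            J[:, k] = (residuals(x + d, w1, w2) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return sph_to_vec(x[0], x[1]), sph_to_vec(x[2], x[3])
```

Because Eq. (4) only involves norms, the axes are recovered up to sign, so a comparison against ground truth should use the absolute value of the dot product.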
2) Geometric Constraints of a 3-DoF Joint:
Estimating the main axis through the method presented above, although straightforward, suffers from perturbations caused by rotation around the other axes, which consequently leads to relatively large errors and even divergence. Besides, the angles of the other DoFs are also important for gait-related research. Herein, a general condition is considered to develop geometric constraints of a 3-DoF joint as an extension of the abovementioned method. Considering two segments connected by a 3-DoF joint, the rotation of one segment relative to the other can be decoupled into three sequential rotations around three axes. As shown in Fig. S1¹, j_{D1} and j_{D2} denote the axes fixed on each segment respectively, while j_{D3} denotes the main axis. In order to obtain a further relaxed constraint on a 3-DoF lower-limb joint, the assumptions are organized as follows:
• The main axis, j_{D3}, is perpendicular to the other two axes, which meets the definition of the ISB [13].
• As adopted by [27], each of the other two axes, j_{D1} and j_{D2}, possesses a fixed relative orientation with its corresponding segment (i.e. j_{D1} - segment 1, j_{D2} - segment 2). If sensor movements are not considered, the coordinates of j_{D1} and j_{D2} are fixed in [s1] and [s2] respectively.

To unify the description of these axes, a transitional coordinate frame, the reference frame [g], is introduced to describe all the data from the IMUs mounted on both segments. The relationship between the angular rates of the two segments is given by:

\[ [\omega_2]_g - [\omega_1]_g = \omega_{j1} [j_{D1}]_g + \omega_{j2} [j_{D2}]_g + \omega_{j3} [j_{D3}]_g \tag{9} \]

¹ All figures and tables named S_i are presented in the Supplementary Information.
where ω_1, ω_2 represent the two angular rate vectors described in the local sensor-fixed frames, ω_{j1}, ω_{j2} and ω_{j3} represent the scalar angular rates around each joint axis, j_{D1}, j_{D2} and j_{D3} represent the three unit joint axes described in the sensor-fixed frames, and [·]_g denotes the description of a vector in the reference frame. Multiplying both sides of equation (9) by [j_{D1}]_g × [j_{D2}]_g, the equation can be transformed into:

\[ f(j_{D1}, j_{D2}) = \left([\omega_2]_g - [\omega_1]_g\right) \cdot \left([j_{D1}]_g \times [j_{D2}]_g\right) - \omega_{j3} [j_{D3}]_g \cdot \left([j_{D1}]_g \times [j_{D2}]_g\right) = 0 \tag{10} \]

The third joint axis, j_{D3}, is known, being the same vector as the main axis estimated above. Given that j_{D3} is perpendicular to the other two axes, the scalar angular rate ω_{j3} is equal to the projection of [ω_2]_g − [ω_1]_g on this axis:

\[ \omega_{j3} = \left([\omega_2]_g - [\omega_1]_g\right) \cdot [j_{D3}]_g \tag{11} \]

It can be seen that estimating j_{D1}, j_{D2} and j_{D3} requires no specific placement, which is very useful when the orientations of the IMUs relative to the segments are unknown. After obtaining the coordinates of j_{D1} and j_{D2} in the sensor-fixed frames, a sensor-to-body alignment can be achieved by determining the orientation of the body-fixed frames:

\[ \theta_{b_1 s_1} = \arccos\!\left([1\;0\;0]' \cdot j_{D1}\right), \quad q_{b_1 s_1} = \cos\!\left(\frac{\theta_{b_1 s_1}}{2}\right) + \sin\!\left(\frac{\theta_{b_1 s_1}}{2}\right) \cdot \left([1\;0\;0]' \times j_{D1}\right) \]
\[ \theta_{b_2 s_2} = \arccos\!\left([1\;0\;0]' \cdot j_{D2}\right), \quad q_{b_2 s_2} = \cos\!\left(\frac{\theta_{b_2 s_2}}{2}\right) + \sin\!\left(\frac{\theta_{b_2 s_2}}{2}\right) \cdot \left([1\;0\;0]' \times j_{D2}\right) \tag{12} \]

C. Algorithm Implementation
Based on the product of the rotation matrices and quaternions calculated above, the orientation relationship between the two body-fixed frames can be estimated. As shown in Fig. S2, some details still need to be implemented in order to complete the whole algorithm. In this section, these details are discussed by explaining each part of the proposed algorithm.
1) The Feedback-Based Iteration Progress of Calculating 3-DoF Joint Axes:
Considering the existence of a main rotation in each lower-limb joint, the accuracy of 3-DoF angle estimation can be improved by applying the geometric constraints of the 1-DoF joint and the 3-DoF joint iteratively. A complete iteration is presented in Fig. 1. The main axis described in each sensor-fixed frame, simplifying the 3-DoF joint as a hinge joint, is estimated using the 1-DoF geometric constraints. Then, plugging the estimated main axis as the known third axis into equations (10) and (11), the geometric constraints of a 3-DoF joint, the other two axes described in each sensor-fixed frame, j_{D1} and j_{D2}, can be calculated to represent the rotation axes of abduction and inner rotation.

Fig. 1. The diagram of the feedback-based iteration progress
As shown in Fig. 1, the angular rates ω_up and ω_lo measured by the IMUs on either side of the joint are reduced by their projections on each estimated axis, to suppress the errors in the main axis estimation caused by the additional rotations:

\[ \tilde{\omega}_{up} = \omega_{up} - (\omega_{up}' \cdot j_{D1}) \cdot j_{D1}, \quad \tilde{\omega}_{lo} = \omega_{lo} - (\omega_{lo}' \cdot j_{D2}) \cdot j_{D2} \tag{13} \]

The updating of the angular rates constructs a feedback loop, which yields an iteration progress. With an increasing number of iterations, the accuracy of the estimation improves, at the price of increasing computing time. Given the tradeoff between computing efficiency and accuracy, the number of iterations is limited to six in order to maintain an acceptable accuracy.
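The projection-removal feedback of Eq. (13) is a one-liner in numpy. The sketch below shows the update for one angular-rate stream; the function name and the loop skeleton are ours, and the axis-estimation call inside the loop is only indicated by a comment.

```python
import numpy as np

def remove_axis_projection(w, axis):
    # Eq. (13): subtract each sample's projection onto an estimated joint
    # axis so the next main-axis fit sees less of the secondary rotations.
    # w is an (N, 3) array of angular-rate samples; axis is a 3-vector.
    axis = axis / np.linalg.norm(axis)
    return w - np.outer(w @ axis, axis)

# Feedback skeleton (the paper caps the loop at six iterations):
#   for _ in range(6):
#       j_main = ...                      # 1-DoF main-axis fit, Eq. (4)
#       j_d1, j_d2 = ...                  # 3-DoF axes, Eq. (10)-(11)
#       w_up = remove_axis_projection(w_up, j_d1)
#       w_lo = remove_axis_projection(w_lo, j_d2)
```

After the update, every sample in the returned array is orthogonal to the given axis.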
2) Numerical Optimization Methods:
As stated in Section III-B, the geometric constraints of the anatomical axes are described as a problem of minimizing cost functions. Utilizing the Gauss-Newton method as presented in [26], the coordinates of the main axis in [s1] and [s2], together with o_1 and o_2, can be obtained within several iterations. Regarding j_{D3} as the known main axis, equation (10), working as the cost function, can be used to calculate the coordinates of the other two axes, j_{D1} and j_{D2}. However, unlike the calculation of the main axis and o_1, o_2, the Jacobian of derivative-based methods is not straightforward to construct, because ω_{j3}[j_{D3}]_g can also be treated as a function of j_{D1} and j_{D2}. To this end, a secant version of the Levenberg-Marquardt method is applied in this scenario [30]. Following this secant version, the coordinates of j_{D1} and j_{D2} can be calculated by minimizing the cost function f(j_{D1}, j_{D2}). In addition to the joint axis calculation, the calibration of the two reference frames, combined with the estimation of q_{g1s1} and q_{g2s2}, is also embedded in the iteration progress shown in Fig. 1. The performance of q_{corr} is determined by its two fusion coefficients, k_{mag} and k_{acc}. For quantification purposes, the deviation between [j_{D3}]_{g1} and [j_{D3}]_{g2} is introduced here to evaluate to what extent coordinates in [g2] are rotated by q_{corr} into those in [g1]:

\[ [j_{D3}]_{g_1} = q_{g_1 s_1} \otimes j_{D3}, \quad [j_{D3}]_{g_2} = q_{g_2 s_2} \otimes q_{corr} \otimes j_{D3} \tag{14} \]

Regarding f(k_{mag}, k_{acc}) = ‖[j_{D3}]_{g_1} − [j_{D3}]_{g_2}‖ as the cost function, the fusion coefficients k_{mag} and k_{acc} can be estimated through the secant version of the L-M method. Hence, the calibration of the reference frames synthesizes information from the local magnetic field, gravity and the main axis of the 3-DoF joint, which results in a comprehensive measurement.

Fig. 2. Data windowing scheme
For the sake of real-time calculation, a sliding window (W_n) is chosen to segment the measurements from each sensor such that the axes' coordinates can be estimated from the last N measurements. As shown in Fig. 2, two sliding windows are separated by an interval (I_n), within which the coordinates of the joint axes estimated in the last sliding window are used to calculate joint angles.
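The windowing scheme of Fig. 2 can be sketched as a generator of (window, interval) index pairs. The window and interval lengths here are illustrative placeholders, since the text leaves N unspecified.

```python
def window_schedule(n_samples, window=400, interval=100):
    # Sliding-window scheme of Fig. 2: axes are (re-)estimated on each
    # window W_n and then applied during the following interval I_n.
    start = 0
    while start + window + interval <= n_samples:
        yield (slice(start, start + window),
               slice(start + window, start + window + interval))
        start += window + interval
```

Each yielded pair tells the estimator which samples to fit the axes on, and which subsequent samples those axes should be applied to when computing joint angles.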
3) Detection and Safeguarded Strategy Against Sensor Movement:
No published work has presented how to detect and re-align IMUs (sensor-fixed frames) to body-fixed frames when the IMUs are moved with respect to the segments they are mounted on. Traditionally, the only solution to sensor movements is to terminate the data collection and perform the calibration procedures again. Seel's and Muller's algorithms, developed for estimating 1-DoF and 2-DoF angles respectively, can gradually update the coordinates of the anatomical axes based on sliding windows. However, the measurements collected before the sensor movement can no longer be used to estimate axes and angles, so a gradual update is not practical enough, especially when the estimated angles are used as input to other applications. In order to solve this problem, a comparison between the outputs of each iteration is performed to ensure that the variation of j_{D1}, j_{D2} and j_{D3} stays under a boundary. A metric of such variation is given by:

\[ V = \sum_{i=1}^{3} v_i \left\| (j_{Di})_{iter} - (j_{Di})_{det} \right\| \tag{15} \]

where (j_{Di})_{iter} denotes the coordinates of each axis output by the iteration progress, (j_{Di})_{det} denotes the axes estimated for detection, and v_i is the weight of the i-th axis. Because the main axis varies slowly under normal conditions, its weight v_3 is set larger than v_1 and v_2. The thresholds for judging sensor movements are 1.5 for the hip and knee, and 2.0 for the ankle. During an interval I_n, the estimation algorithm still works for estimating (j_{Di})_{det} with 50 measurements. When V is detected to be larger than the threshold, the calculation during this interval is terminated, and a new sliding window is then initiated to update the axis estimates, during which the axes (j_{Di})_{det} are used to estimate angles.

Fig. 3. The detection and safeguarded strategy for sensor movement
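The detection metric of Eq. (15) reduces to a weighted sum of axis deviations. The sketch below uses placeholder weights (the paper's exact values for v_1, v_2, v_3 did not survive extraction); the thresholds of 1.5 (hip/knee) and 2.0 (ankle) are the ones reported in the text.

```python
import numpy as np

def movement_metric(axes_iter, axes_det, weights):
    # Eq. (15): weighted deviation between the axes returned by the feedback
    # iteration and the axes re-estimated in the detection window.
    return sum(v * np.linalg.norm(np.asarray(ji) - np.asarray(jd))
               for v, ji, jd in zip(weights, axes_iter, axes_det))

def sensor_moved(axes_iter, axes_det, weights=(1.0, 1.0, 1.0), threshold=1.5):
    # threshold = 1.5 for hip/knee, 2.0 for ankle (per the text);
    # the per-axis weights here are illustrative placeholders.
    return movement_metric(axes_iter, axes_det, weights) > threshold
```

When the metric exceeds the threshold, the current interval is abandoned and a new sliding window is started, as described above.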
4) Angle Estimation:
Based on the three axes estimated by the whole iteration, the orientation relationship R_{b1b2} between the two segments (i.e. the two body-fixed frames) can be constructed. For a 1-DoF joint, the rotation matrix R_{b1b2} is obtained by multiplying the matrices representing the orientations of each frame:

\[ R_{b_1 b_2} = R_{s_1 b_1} \cdot R_{g_1 s_1} \cdot (R_{g_1 g_2})^{-1} \cdot R_{s_2 g_2} \cdot R_{b_2 s_2} \tag{16} \]

where R_{s_1 b_1} and R_{s_2 g_2} are the inverses of R_{b_1 s_1} and R_{g_2 s_2} respectively, and R_{g_1 g_2} is the rotation matrix converted from q_{corr}. Transforming R_{b1b2} into a unit quaternion q_{b1b2}, the angle around the main axis, ∠j_3, can be calculated as:

\[ \angle j_3 = 2 \cdot \cos^{-1}\!\left(q_{b_1 b_2}(1)\right) \tag{17} \]

For a 3-DoF joint, the core step of calculating the inner rotation and inversion angles is to eliminate the rotation around the main axis. The calculation of such angles depends on decoupling q_{b1b2} into Euler angles. During the decoupling progress, the rotation around j_{D3} would be decoupled into additional rotations around j_{D1} and j_{D2} due to the discordance between j_{D3} and the second rotation axis of the Euler-angle decoupling. To eliminate such additional rotations, a quaternion representing the rotation around j_{D3} is constructed in the reference frame [g1]:

\[ q_{j3} = \begin{bmatrix} \cos(\angle j_3 / 2) \\ \sin(\angle j_3 / 2) \cdot [j_{D3}]_{g_1} \end{bmatrix} \tag{18} \]

By multiplying by the inverse of q_{j3}, a pseudo orientation relationship between the body-fixed frames, q̃_{b1b2}, can be described as:

\[ \tilde{q}_{b_1 b_2} = (q_{b_1 s_1})^{-1} \otimes q_{g_1 s_1} \otimes q_{corr} \otimes (q_{j3})^{-1} \otimes (q_{g_2 s_2})^{-1} \otimes q_{b_2 s_2} \tag{19} \]

Thus, according to equation (12), the angles around j_{D1} and j_{D2} can be estimated by decoupling q̃_{b1b2} into a sequential rotation about the X and Z axes.

Algorithm 1
Input: ω_1, ω_2, a_1, a_2, m_1, m_2
Output: ∠j_1, ∠j_2, ∠j_3, j_{D1}, j_{D2}, j_{D3}, o_1, o_2
initialize j_{D1}, j_{D2}, j_{D3}, o_1, o_2
while sliding windows do
    for i = 0 → 5 do
        subtract angular rates using Eq. (13)
        estimate the main axis using Eq. (4)
        calibrate reference frames using Eq. (2), (3), (14)
        obtain ω_{j3} using Eq. (11) and [j_{D3}]_{g1} = [j_{D3}]_{g2}
        estimate the 3-DoF joint axes using Eq. (10)
    end for
    return [j_{D1}]_g, [j_{D2}]_g and [j_{D3}]_g
end while
while intervals do
    result, (j_{Di})_{det} ← SENMOVE(ω_up, ω_lo)
    if result == 1 then
        STOP intervals; repeat the sliding-window steps using (j_{Di})_{det}
    else
        estimate joint position vectors using Eq. (5)
        calculate R_{b1b2} using Eq. (6)–(8) and (16)
        infer q_{j3} using Eq. (17) and (18)
        infer q̃_{b1b2} using Eq. (19)
        decouple the XYZ Euler angles of q̃_{b1b2}: ∠j_1 ← θ_X, ∠j_2 ← θ_Z
    end if
    return ∠j_1, ∠j_2 and ∠j_3
end while
function SENMOVE(ω_up, ω_lo)
    (j_{Di})_{det} ← run the sliding-window steps in detection windows
    calculate V using Eq. (15)
    if V < threshold then result ← 0 else result ← 1 end if
    return result, (j_{Di})_{det}
end function

IV. EXPERIMENT
A. Validation Protocol

1) 3-DoF Gimbal:
To validate the effectiveness of the reference frame calibration separately, a gimbal consisting of two segments and three rotating axes intersecting at the same point was designed to mimic a 3-DoF lower-limb joint. As shown in Fig. 4, angles directly measured by Hall sensors, attached to each axis by couplings, were used as references to quantify the algorithm's performance. Four IMUs were attached to the segments of the gimbal. IMU2 and IMU3 were attached to the segments on either side of a 1-DoF joint whose axis was set as the main axis, while IMU1 and IMU4 were placed on either side of the 3-DoF joint. Due to the flat mounting surfaces, the IMUs could be mounted with known orientations relative to the body-fixed frames, which provided a reference for the estimation of the axes. The gimbal was actuated manually, while the largest motion was guaranteed to be around the main axis.

Fig. 4. Experimental setup: (a) the 3-DoF gimbal; (b) the IMU attachment on human subjects
2) Human Subjects:
For the purpose of validation on human subjects, ten healthy subjects (23 ± …) were recruited.

B. Data Analysis
Prior to data processing, the raw IMU data were filtered by a low-pass filter, and the method proposed by Feliz et al. [31] was adopted to reset the angular rate to zero when its magnitude was under 1 rad/s. The biases of the accelerometer and magnetometer readings were estimated by the algorithm proposed in [20] and subtracted from the data. During the iteration of the whole algorithm, a relatively large error might come from 1) the estimation of axes using geometric constraints, or 2) the calibration of reference frames. To distinguish the sources of errors, experiments were performed on the gimbal with the IMUs placed in proper orientations relative to the gimbal axes. Hence, the errors caused by misestimation of the joint geometric constraints can be completely avoided by proper alignment. Data from human subjects were collected and processed to validate the whole algorithm's performance on lower-limb joints, and its robustness towards different sensor placements and sensor movements.
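The preprocessing chain described above can be sketched as three small steps. Note the assumptions: the paper does not specify its low-pass filter design, so a simple moving average stands in for it here; the 1 rad/s reset threshold and the bias subtraction follow the text.

```python
import numpy as np

def moving_average(x, k=5):
    # Stand-in low-pass filter (the paper does not name its filter design):
    # a k-tap moving average applied column-wise to an (N, 3) signal.
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, x)

def zero_rate_reset(w, thresh=1.0):
    # Reset rule adopted from Feliz et al. [31]: angular-rate samples whose
    # magnitude is below 1 rad/s are set to zero.
    w = np.array(w, dtype=float)
    w[np.linalg.norm(w, axis=1) < thresh] = 0.0
    return w

def subtract_bias(x, bias):
    # Remove a pre-estimated accelerometer/magnetometer bias vector.
    return np.asarray(x, dtype=float) - np.asarray(bias, dtype=float)
```

The three functions compose in the order given in the text: filter, reset, then bias removal for the accelerometer and magnetometer streams.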
1) Effectiveness of the reference frame calibration:
To analyze the effectiveness of the reference frame calibration in isolation, the errors in estimating the joint axes and position vectors should be eliminated. To do this, the Z axis of IMU2 and the X axis of IMU3 were placed along the direction of the joint axis. In addition, the structural parameters of the gimbal were used to calculate the reference coordinates of the joint position vectors. Thus, the error caused by the deviation of the reference frames can be seen as the only contributor to the deviation between the measured and estimated angles. The root mean square error (RMSE) between the measured 1-DoF joint angle (θ_measured) and the angle estimated with (θ⁺_known) and without (θ⁻_known) calibrating the reference frames was used to quantify the effectiveness of the reference frame calibration:

\[ error_{1DoF}^{RFC}(i) = \theta_{measured}(i) - \theta_{known}^{+/-}(i), \quad RMSE_{1DoF}^{RFC} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(error_{1DoF}^{RFC}(i)\right)^2} \tag{20} \]
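The RMSE metric of Eq. (20) is straightforward to compute; a minimal helper (ours, not the paper's code) is:

```python
import numpy as np

def rmse(theta_measured, theta_estimated):
    # Eq. (20): root mean square error between reference and estimated angles.
    err = np.asarray(theta_measured, dtype=float) - np.asarray(theta_estimated, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))
```

The same helper serves the later 1-DoF and 3-DoF accuracy metrics, which are all RMSE-based.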
2) Validity of axis estimation:
To validate the effectiveness of the axis estimation for 3-DoF joints, the accuracy of the 1-DoF axis estimation needs to be compared with that of the feedback-based iteration used to estimate the 3-DoF axes. Using data collected from IMU2 and IMU3, the main axes $j_{D1}$, $j_{D2}$ and the joint positions $o_1$, $o_2$ of the 1-DoF joint can be estimated assuming the alignment of the sensors is unknown. Then, RMSE is used to measure the performance of the algorithm, which combines estimating $j_{D1}$, $j_{D2}$, $o_1$, $o_2$ and calibrating the reference frames. To present how the estimates of these axes drift, the deviation between the estimated coordinates of $j_{D1}$, $j_{D2}$, $o_1$, $o_2$ and their reference coordinates, known from the IMUs' determined placement, is used as another metric:

$$error_{1DoF}^{GAE}(i)^{\pm} = \theta_{1DoF}^{measured}(i) - \theta_{1DoF}^{estimated}(i)^{\pm}$$

$$RMSE_{1DoF}^{GAE}(\theta^{\pm}) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(error_{1DoF}^{GAE}(i)^{\pm}\right)^{2}} \qquad (21)$$

$$error(j_{Di})^{\pm} = \left\|[j_{ref}] - (j_{Di})^{\pm}\right\|, \quad error(o_i)^{\pm} = \left\|[o_i] - (o_i)^{\pm}\right\|, \quad i = 1, 2 \qquad (22)$$

where $[j_{ref}]$ and $[o_i]$ are the reference coordinates of the main axis and the sensor position described in the coordinate frame of the $i$-th IMU, and $\pm$ denotes the estimates with and without reference frame calibration, respectively.

Similarly, as a performance indicator of the 3-DoF angle estimation, RMSE is also used as a metric for the feedback-based iteration algorithm:

$$RMSE_{3D}^{GAE}\left((j_{Di})^{\pm}\right) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\mu^{2}}, \quad \mu = \theta_{measured}^{j_{Di}}(i) - \theta_{estimated}^{j_{Di}}(i)^{\pm}, \quad i = 1, 2, 3 \qquad (23)$$

The reference coordinates of the three axes, known from the placement of IMU1 and IMU4, also provide a measurement of the axis estimates:

$$error(j_{Di})^{\pm} = \left\|[j_{ref,i}] - (j_{Di})^{\pm}\right\|, \quad i = 1, 2, 3 \qquad (24)$$

Because the coordinates of the remaining axes vary relative to IMU1 and IMU2, their errors $error(j_{Di})^{\pm}$ are computed at the end of each sliding window.
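The axis-coordinate deviations of Eqs. (22) and (24) are plain Euclidean norms between a reference direction and its estimate; a minimal sketch:

```python
import numpy as np

def axis_error(j_ref, j_est):
    """Euclidean deviation between a reference axis/position and its estimate,
    as in Eqs. (22) and (24). Inputs are 3-vectors; unit-norm axes are assumed
    for axis comparisons (an illustrative convention, not stated in the text)."""
    return float(np.linalg.norm(np.asarray(j_ref, dtype=float)
                                - np.asarray(j_est, dtype=float)))
```

For identical vectors the error is zero; for two orthogonal unit axes it equals the chord length between them.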
3) Accuracy and agreement of human lower-limb joint angle estimation:
To assess the accuracy of the angles estimated by our algorithm, the RMSE is calculated for all the lower-limb joints between the estimated angles and the reference angles measured by optical motion capture.
$$RMSE_{i}^{j} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\theta_{i}^{j}(k) - \hat{\theta}_{i}^{j}(k)\right)^{2}}, \quad i = FE, AB, IR; \; j = ankle, knee, hip \qquad (25)$$
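Eq. (25) evaluates one RMSE per joint and per rotation direction. A sketch under the assumption (purely illustrative) that the angle traces are stored in dictionaries keyed by (joint, direction):

```python
import numpy as np

JOINTS = ("ankle", "knee", "hip")
DIRECTIONS = ("FE", "AB", "IR")  # flexion/extension, ab/adduction, int./ext. rotation

def joint_rmse(est, ref):
    """Per-joint, per-direction RMSE as in Eq. (25).

    est and ref map (joint, direction) -> length-N angle arrays,
    e.g. est[("hip", "FE")]. The key layout is an assumed convention.
    """
    out = {}
    for j in JOINTS:
        for d in DIRECTIONS:
            diff = np.asarray(est[(j, d)], dtype=float) - np.asarray(ref[(j, d)], dtype=float)
            out[(j, d)] = float(np.sqrt(np.mean(diff ** 2)))
    return out
```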
4) Repeatability of human lower-limb joint angle estimation:
To evaluate the robustness of our algorithm against sensor placement, repeatability is estimated by comparing the RMSEs of the angles estimated from different sensor placements. To do so, multiple IMUs were placed on the thigh and shank, and data from different sets of IMUs were used to construct comparative trials, i.e., IMU1-IMU2, IMU1-IMU3, IMU2-IMU4, IMU3-IMU5, IMU4-IMU6, and IMU5-IMU6. Repeatability is then assessed with the Bland-Altman method, which provides an interval within which the errors fall with 95% probability [32].
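The Bland-Altman limits of agreement [32] can be computed directly from the paired RMSEs of two IMU sets; a minimal sketch:

```python
import numpy as np

def bland_altman_limits(a, b):
    """95% limits of agreement between two paired measurement sets [32].

    Returns (bias, lower, upper): the mean difference and bias +/- 1.96 SD,
    the interval expected to contain ~95% of the differences.
    """
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Here `a` and `b` would be, for instance, per-stride RMSEs from the IMU1-IMU2 pair versus the IMU1-IMU3 pair; a narrow interval indicates placement-independent estimation.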
5) Influence caused by different lengths of sliding windows and intervals:
To test the influence of different sliding-window and interval lengths, the RMSE and iteration duration were calculated with 5 sliding-window lengths and 6 interval lengths to show their effect on accuracy and computational efficiency.
6) Robustness against sensor movements:
An extreme condition was constructed to validate the effectiveness of the sensor movement detection and the safeguarded strategy. Data from the IMUs beside the hip joint during stair ascent were used as an example in this test. While data from IMU1 and IMU2 were being used to estimate the 3-DoF hip angles during stair ascent, a segment of data from IMU3 was injected into the data flow to replace the data measured by IMU2 over the same duration. RMS errors were calculated separately to present the accuracy before, during, and after detection. The recently proposed algorithm in [25] for estimating 3-D lower-limb joint angles was employed for comparison with our algorithm.
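The injection protocol above — abruptly replacing a span of one IMU's stream with data from a differently placed IMU — is simple to reproduce; a sketch, with index bounds as illustrative assumptions since the paper does not specify the injected segment length:

```python
import numpy as np

def inject_sensor_swap(stream_a, stream_b, start, stop):
    """Simulate an abrupt sensor movement by replacing stream_a[start:stop]
    with the same time span from another IMU's stream (here, IMU3 data
    injected into the IMU2 stream, as in the test described above)."""
    corrupted = np.array(stream_a, copy=True)
    corrupted[start:stop] = stream_b[start:stop]
    return corrupted
```

RMS errors would then be computed separately on the samples before `start`, inside `[start, stop)`, and after `stop` to mirror the before/during/after accuracy breakdown.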
7) Accuracy under a 2-min test:
In order to demonstrate our algorithm's performance over long runs, data from the 2-min level-walking test were processed. No sensor movement was involved in this test. The RMS errors, which quantify the accuracy, were averaged among subjects.

V. RESULTS AND DISCUSSION
A. Error Sources of 3-DoF Angle Estimation
As shown in Fig. 5, the 1-DoF joint angle curves are calculated from known versus estimated (denoted by $\theta_{known}$ and $\theta_{estimated}$, respectively) coordinates of $j_{D1}$, $j_{D2}$, $o_1$, $o_2$, and with versus without (denoted by $+$ and $-$, respectively) calibrating the reference frames. The root mean square errors $RMSE_{1DoF}^{RFC}$ are 0.32 deg ($+$) and 3.24 deg ($-$), while $RMSE_{1DoF}^{GAE}$ are 2.06 deg ($+$) and 4.17 deg ($-$). The $RMSE_{1DoF}^{RFC}$ without calibration matches the error reported in [29].
Fig. 5. Estimated 1-DoF angles around the axis $j_{D1}$.

TABLE I: ERRORS OF 1-DOF JOINT AXIS COORDINATES (columns $error(j_{D1})$, $error(j_{D2})$, $error(o_1)$, $error(o_2)$; rows for estimates with ($+$) and without ($-$) reference frame calibration)

Using data from IMU1 and IMU4 in the same experiment, the 3-DoF joint angles estimated by the feedback-based iteration process are presented in Fig. 6. The RMS errors are listed in TABLE II. The norms of the estimated axes' coordinate errors are shown in TABLE III.

Fig. 6. Estimated 3-DoF angles around the three joint axes, $(\theta_{es}^{j_{D1}})^{\pm}$, $(\theta_{es}^{j_{D2}})^{\pm}$, $(\theta_{es}^{j_{D3}})^{\pm}$, where the minus and plus signs denote the estimates without and with reference frame calibration, respectively.
1) Effectiveness of reference frame calibration:
The reference frame calibration affects the estimation accuracy in two ways. First, because the 3-DoF geometric constraint is described in the reference frame, the calibration algorithm is embedded in the process of estimating the 3-DoF joint axes. Second, as shown in equation (19), it is also involved in the construction of the body-to-body alignment $\tilde{q}_{b}^{b}$, and thus affects the estimation of the 3-DoF angles.

First, the reference frame calibration contributes to a better estimation of the joint axes. As shown in TABLE I and TABLE III, whether or not the reference frames are calibrated affects the accuracy of the estimated joint axes: not calibrating them increases the joint-axis estimation error, regardless of the number of DoFs. In addition, comparing the curves of the estimated 1-DoF and 3-DoF angles without calibration, presented in Fig. 5 and Fig. 6, shows that the estimated 3-DoF angles without calibration drift considerably, which can be attributed to drifting axis estimates in each sliding window.

Second, the pointwise reference frame calibration improves the accuracy of the estimated angles. By isolating the estimation of the coordinates of the joint axes and joint position vectors, as presented by $RMSE_{1DoF}^{RFC}$ and
$RMSE_{1DoF}^{GAE}$ (0.32 vs. 3.24 and 2.06 vs. 4.17), the performance of the 1-DoF gimbal experiment indicates that solely calibrating the reference frames yields a large improvement in accuracy (90% and 50%, respectively). TABLE II presents the comparison of 3-DoF angle estimates with ($+$) versus without ($-$) pointwise reference frame calibration, which demonstrates that calibrating the reference frames greatly improves the estimation accuracy for 3-DoF joints. It should be noted that the performance presented in TABLE II deteriorates drastically when the reference frame calibration is cancelled, which shows that cancelling it leads to an accumulation of data errors throughout the iteration process, even resulting in a wrong descent direction in the LM method.

TABLE II: RMSE OF ESTIMATED 3-DOF JOINT ANGLES

Angle        $(\theta_{es}^{j_{D1}})^{-/+}$   $(\theta_{es}^{j_{D2}})^{-/+}$   $(\theta_{es}^{j_{D3}})^{-/+}$
RMSE (deg)   18.88 / 2.49                     29.18 / 1.69                     9.90 / 2.86
RMSE (%)     95.1 / 12.54                     34.95 / 2.03                     76.1 / 22

TABLE III: ERRORS OF ESTIMATED 3-DOF AXIS COORDINATES (columns $error(j_{D1})$, $error(j_{D2})$, $error(j_{D3})$; rows for estimates with ($+$) and without ($-$) reference frame calibration)
2) Validity of axis estimation:
For 1-DoF joint angle estimation, $RMSE_{1DoF}^{GAE}$ is larger than $RMSE_{1DoF}^{RFC}$, regardless of whether the reference frames are calibrated, which indicates to what extent the 1-DoF joint axis estimation contributes to the overall performance. The same data were used to analyze our algorithm's performance on the main axis $j_{D1}$ of the 3-DoF joint, while the data collected simultaneously on $j_{D2}$ and $j_{D3}$ were added to the analysis. It can be seen from TABLE II that the RMSE of $(\theta_{es}^{j_{D1}})^{+}$ is smaller than $RMSE_{1DoF}^{GAE}$ with calibrated reference frames (2.06 deg), which indicates that our algorithm improves the estimation accuracy of the angles around the main axis and demonstrates the validity of decoupling a 3-DoF rotation with a main axis into rotations around three axes.
B. Validation on human subjects

1) Accuracy and agreement:
In the human lower-limb joint angle estimation tests, the 3-DoF angles of the hip, knee, and ankle were estimated; the results are depicted in Fig. S3. The lengths of both the sliding window ($W_n$) and the interval ($I_n$) were 300 sample points for stair ascent and level walking, and 500 sample points for squatting. The resulting RMS errors and correlation coefficients are presented in TABLE IV and TABLE S1.

In addition to testing our algorithm's performance on level walking, we performed validation under much harsher conditions with larger accelerations and more severe skin artifact movements, namely stair ascent and squatting.

TABLE IV: RMSE AND CORRELATION COEFFICIENTS OF LOWER-LIMB JOINT ANGLES DURING LEVEL WALKING (rows: hip, knee, ankle; columns: Fle/Ext, Abd/Add, InR/ExR; hip Fle/Ext RMSE 1.72 deg; entries are given as mean ± standard deviation of RMSE among subjects. For performance during stair ascent and squat, please refer to the Supplementary information.)

During such tasks, we still obtain better accuracy for the angle around the main axis during level walking than another self-alignment method reported in [26], while the angles in the other two directions are estimated simultaneously with relatively good accuracy.
2) Repeatability:
Fig. S4 presents the repeatability across different sensor placements using the statistical method proposed in [32]. As shown in Fig. S4, over 95% of the differences between two IMU sets fall within the mean ± SD interval in all subfigures. From these results, we conclude that our algorithm is insensitive to sensor placement during axis estimation, in other words, during the process of sensor self-alignment.
3) Influence caused by different lengths of sliding windows and intervals:
The overall effect of different sliding-window and interval lengths is assessed by an error metric:

$$f_n(W_n, I_n) = \sum_{j} b_j \frac{\mu_j}{a_1 + a_2 + a_3}, \quad j = hip, knee, ankle$$
$$\mu_j = a_1 \mu_{FE}^{j}\, RMSE_{FE}^{j} + a_2 \mu_{AB}^{j}\, RMSE_{AB}^{j} + a_3 \mu_{IR}^{j}\, RMSE_{IR}^{j} \qquad (26)$$

where $f_n(W_n, I_n)$ denotes a weighted sum of the RMS errors of every joint and every direction, and $\mu_{i}^{j}$ ($i = FE, AB, IR$) is the correlation coefficient of each motion and joint. Because each joint is equally important, $b_j$ was set to 1. We set $a_1 = 2$ and $a_2 = a_3 = 1$, since most gait research pays more attention to flexion/extension. Fig. S5 shows how the performance varies for each $W_n$ and $I_n$. For the sake of real-time application, the interval $I_n$ should be greater than or equal to the sliding window $W_n$.

Fig. S5 indicates that the metric calculated by equation (26) varies gently with the changing sliding-window and interval lengths. Neither smaller nor greater sliding-window lengths were included in this validation, because with a smaller sliding window the algorithm, although convergent, converges to a saddle point far from the optimal solution. This is supported indirectly by the RMSE during $I_n^{det}$ shown in TABLE VI, in which $(j_{Di})^{det}$ were estimated from just 50 measurements; these RMS errors are clearly larger than those in TABLE S1. Moreover, the main axis does not stay still relative to the IMUs: its coordinates in the sensor-fixed frames vary slowly, at a speed that depends on the movement task and the locomotion features of the subjects. A greater sliding window therefore cannot improve the accuracy either. The selection of the interval length $I_n$ is related to the sliding-window length $W_n$. During every interval $I_n$, angles are computed according to the axes estimated in the last sliding window; $I_n$ should be equal to or greater than $W_n$, while a too large $I_n$ would reduce the accuracy.
Therefore, an interval length no smaller than $W_n$ was chosen as a compromise between real-time application and accuracy requirements.

Fig. 7. Boxplot of computing time versus sliding-window length.
Fig. 7 presents the computing time of 6 iterations for different sliding-window lengths. The computing time increases as the sliding window lengthens.
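The weighted error metric of Eq. (26) can be sketched as follows, under the illustrative assumption that per-joint, per-direction RMSEs and correlation coefficients are stored in dictionaries keyed by (joint, direction):

```python
import numpy as np

A = {"FE": 2.0, "AB": 1.0, "IR": 1.0}  # a1 = 2, a2 = a3 = 1, as in the text

def window_metric(rmse, corr, b=None):
    """Eq. (26): weighted sum over joints of correlation-weighted RMSEs.

    rmse and corr map (joint, direction) -> value; b weights each joint
    (all 1 in the paper). The dictionary layout is an assumed convention.
    """
    joints = ("hip", "knee", "ankle")
    b = b or {j: 1.0 for j in joints}
    total_a = sum(A.values())
    f = 0.0
    for j in joints:
        mu = sum(A[d] * corr[(j, d)] * rmse[(j, d)] for d in A)
        f += b[j] * mu / total_a
    return f
```

Evaluating this metric for each candidate ($W_n$, $I_n$) pair reproduces the kind of comparison shown in Fig. S5: smaller values indicate a better accuracy trade-off across joints and directions.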
4) Accuracy under the 2-min test:
As shown in TABLE V and Fig. S6, the RMSEs are presented for the five subjects who performed the 2-minute test. Compared with the performance of the short-run test (shown in TABLE S1), our algorithm presents a similar mean and a slightly larger standard deviation among subjects in the 2-min test. This demonstrates our algorithm's stability over a long-run test.
TABLE V: RMSE OF THE 2-MIN TEST (rows: hip, knee, ankle; columns: Fle/Ext, Abd/Add, InR/ExR; hip Fle/Ext 1.69 deg; entries are given as mean ± standard deviation among subjects)

C. Robustness towards sensor movement
The robustness towards sensor movements is validated in Fig. 8. The RMS errors before sensor movement (BSM), during the abnormal period (DAP), during $I_n^{det}$, and during $I_n$ are presented in TABLE VI. As shown in Fig. 8 and TABLE VI, a large deviation is induced by replacing the IMUs during the abnormal period after the sensor movement.

Fig. 8. Validation of robustness towards sensor movement. Solid lines denote the estimation results of our algorithm, while dashed lines denote the results estimated by the algorithm proposed in [25].

TABLE VI: VALIDATION OF ROBUSTNESS TOWARDS SENSOR MOVEMENT

RMSE (deg)   BSM      DAP      $I_n^{det}$   $I_n$
Fle/Ext      2.54     43.60    13.97         3.50
             (2.64)   (45.19)  (57.54)       (47.90)
Abd/Add      1.23     15.50    1.93          1.19
             (0.64)   (15.19)  (14.23)       (16.80)
InR/ExR      1.89     11.89    4.26          2.12
             (1.21)   (11.39)  (11.73)       (13.02)

Fle/Ext, Abd/Add, and InR/ExR denote Flexion/Extension, Abduction/Adduction, and Internal/External Rotation, respectively. BSM denotes before sensor movement; DAP denotes during the abnormal period. The numbers in brackets are the RMSEs of the algorithm in [25].

After the variation metric $V$ was detected to exceed the threshold we set, the normal interval $I_n$ was interrupted (denoted by the vertical red line). During the temporary interval $I_n^{det}$, the estimates in the detection window (denoted by the red area) were used to calculate the 3-DoF angles. As seen in TABLE VI, the RMSE in this duration is greatly reduced but still larger than the RMSE in the next normal interval $I_{n+1}$. Comparing the results estimated by our algorithm and the algorithm in [25] indicates that our algorithm presents strong robustness towards sensor movement, albeit with slightly lower accuracy during BSM.

VI. CONCLUSION
This study demonstrates an initial attempt to develop and evaluate a sensor-movement-robust 3-DoF joint angle estimation algorithm for lower-limb joints. A pointwise reference frame calibration method and a feedback-based iteration process are presented. In the experiments with the 3-DoF gimbal, the errors of the sensor-to-body alignment and the reference frame calibration are estimated separately. On human subjects, the robustness against sensor placement and sensor movement is validated, respectively, by the repeatability across different sets of IMUs and by the RMSEs during the detection and compensation of sensor movement. The results of this pilot study show that the feedback-based iteration design is viable for estimating 3-DoF joint axes and that the novel reference frame calibration algorithm improves accuracy and promotes the convergence of the whole estimation algorithm. Robustness against sensor placement and movement has been demonstrated, and real-time applicability has been supported by the computing-time validation. However, continued effort is still required to further improve the robustness towards various sliding-window and interval lengths, and a more computationally efficient algorithm needs to be adopted if the algorithm is applied to the estimation of 3-DoF joint angles without a main axis.

ACKNOWLEDGMENT
This research was supported by the Robotics and Rehabilitation Lab, Harbin Institute of Technology. The authors also acknowledge the B & R Intelligence Research Exchange Center.

REFERENCES

[1] Z. Chen, B. Sanjabi, and D. Isa, "A location-based user movement prediction approach for geolife project," Int. J. Comput. Eng. Res., vol. 2, no. 7, pp. 16–19, 2012.
[2] J. B. Dingwell and H. G. Kang, "Differences between local and orbital dynamic stability during human walking," Journal of Biomechanical Engineering, vol. 129, no. 4, pp. 586–593, 2007.
[3] F. Wittmann, O. Lambercy, R. R. Gonzenbach, M. A. van Raai, R. Höver, J. Held, M. L. Starkey, A. Curt, A. Luft, and R. Gassert, "Assessment-driven arm therapy at home using an IMU-based virtual reality system," IEEE, 2015, pp. 707–712.
[4] Y. Ding, I. Galiana, C. Siviy, F. A. Panizzolo, and C. Walsh, "IMU-based iterative control for hip extension assistance with a soft exosuit," IEEE, 2016, pp. 3501–3508.
[5] H. Lamine, S. Bennour, M. Laribi, L. Romdhane, S. Zaghloul et al., "Evaluation of calibrated Kinect gait kinematics using a Vicon motion capture system," Comput. Methods Biomech. Biomed. Eng., vol. 20, pp. 111–112, 2017.
[6] H. Graichen, S. Hinterwimmer, R. von Eisenhart-Rothe, T. Vogl, K.-H. Englmeier, and F. Eckstein, "Effect of abducting and adducting muscle activity on glenohumeral translation, scapular kinematics and subacromial space width in vivo," Journal of Biomechanics, vol. 38, no. 4, pp. 755–760, 2005.
[7] R. J. de Asla, L. Wan, H. E. Rubash, and G. Li, "Six DOF in vivo kinematics of the ankle joint complex: Application of a combined dual-orthogonal fluoroscopic and magnetic resonance imaging technique," Journal of Orthopaedic Research, vol. 24, no. 5, pp. 1019–1027, 2006.
[8] K. Liu, T. Liu, K. Shibata, and Y. Inoue, "Ambulatory measurement and analysis of the lower limb 3D posture using wearable sensor system," IEEE, 2009, pp. 3065–3069.
[9] J. Favre, F. Luthi, B. Jolles, O. Siegrist, B. Najafi, and K. Aminian, "A new ambulatory system for comparative evaluation of the three-dimensional knee kinematics, applied to anterior cruciate ligament injuries," Knee Surgery, Sports Traumatology, Arthroscopy, vol. 14, no. 7, pp. 592–604, 2006.
[10] P. Picerno, A. Cereatti, and A. Cappozzo, "Joint kinematics estimate using wearable inertial and magnetic sensing modules," Gait & Posture, vol. 28, no. 4, pp. 588–595, 2008.
[11] A. Brennan, J. Zhang, K. Deluzio, and Q. Li, "Quantification of inertial sensor-based 3D joint angle measurement accuracy using an instrumented gimbal," Gait & Posture, vol. 34, no. 3, pp. 320–323, 2011.
[12] R. Vitali, S. Cain, R. McGinnis, A. Zaferiou, L. Ojeda, S. Davidson, and N. Perkins, "Method for estimating three-dimensional knee rotations using two inertial measurement units: Validation with a coordinate measurement machine," Sensors, vol. 17, no. 9, p. 1970, 2017.
[13] G. Wu, S. Siegler, P. Allard, C. Kirtley, A. Leardini, D. Rosenbaum, M. Whittle, D. D. D'Lima, L. Cristofolini, H. Witte et al., "ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion, Part I: ankle, hip, and spine," Journal of Biomechanics, vol. 35, no. 4, pp. 543–548, 2002.
[14] A. Cappozzo, F. Catani, A. Leardini, M. Benedetti, and U. Della Croce, "Position and orientation in space of bones during movement: experimental artefacts," Clinical Biomechanics, vol. 11, no. 2, pp. 90–100, 1996.
[15] T. F. Besier, D. L. Sturnieks, J. A. Alderson, and D. G. Lloyd, "Repeatability of gait data using a functional hip joint centre and a mean helical knee axis," Journal of Biomechanics, vol. 36, no. 8, pp. 1159–1168, 2003.
[16] I. Charlton, P. Tate, P. Smyth, and L. Roren, "Repeatability of an optimised lower body model," Gait & Posture, vol. 20, no. 2, pp. 213–221, 2004.
[17] N. Hagemeister, G. Parent, M. Van de Putte, N. St-Onge, N. Duval, and J. de Guise, "A reproducible method for studying three-dimensional knee kinematics," Journal of Biomechanics, vol. 38, no. 9, pp. 1926–1931, 2005.
[18] A. G. Schache, R. Baker, and L. W. Lamoreux, "Defining the knee joint flexion–extension axis for purposes of quantitative gait analysis: an evaluation of methods," Gait & Posture, vol. 24, no. 1, pp. 100–109, 2006.
[19] H. Mannel, F. Marin, L. Claes, and L. Dürselen, "Establishment of a knee-joint coordinate system from helical axes analysis: a kinematic approach without anatomical referencing," IEEE Transactions on Biomedical Engineering, vol. 51, no. 8, pp. 1341–1347, 2004.
[20] K. J. O'Donovan, R. Kamnik, D. T. O'Keeffe, and G. M. Lyons, "An inertial and magnetic sensor based technique for joint angle measurement," Journal of Biomechanics, vol. 40, no. 12, pp. 2604–2611, 2007.
[21] E. Palermo, S. Rossi, F. Marini, F. Patanè, and P. Cappa, "Experimental evaluation of accuracy and repeatability of a novel body-to-sensor calibration procedure for inertial sensor-based gait analysis," Measurement, vol. 52, pp. 145–155, 2014.
[22] D. Roetenberg, H. Luinge, and P. Slycke, "Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors," Xsens Motion Technologies BV, Tech. Rep., vol. 1, 2009.
[23] F. Marin, H. Mannel, L. Claes, and L. Dürselen, "Correction of axis misalignment in the analysis of knee rotations," Human Movement Science, vol. 22, no. 3, pp. 285–296, 2003.
[24] J. Favre, R. Aissaoui, B. M. Jolles, J. A. de Guise, and K. Aminian, "Functional calibration procedure for 3D knee joint angle description using inertial sensors," Journal of Biomechanics, vol. 42, no. 14, pp. 2330–2335, 2009.
[25] L. Vargas-Valencia, A. Elias, E. Rocon, T. Bastos-Filho, and A. Frizera, "An IMU-to-body alignment method applied to human gait analysis," Sensors, vol. 16, no. 12, p. 2090, 2016.
[26] T. Seel, J. Raisch, and T. Schauer, "IMU-based joint angle measurement for gait analysis," Sensors, vol. 14, no. 4, pp. 6891–6909, 2014.
[27] P. Müller, M.-A. Bégin, T. Schauer, and T. Seel, "Alignment-free, self-calibrating elbow angles measurement using inertial sensors," IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 2, pp. 312–319, 2016.
[28] J. Favre, B. Jolles, R. Aissaoui, and K. Aminian, "Ambulatory measurement of 3D knee joint angle," Journal of Biomechanics, vol. 41, no. 5, pp. 1029–1035, 2008.
[29] C. Yi, J. Ma, H. Guo, J. Han, H. Gao, F. Jiang, and C. Yang, "Estimating three-dimensional body orientation based on an improved complementary filter for human motion tracking," Sensors, vol. 18, no. 11, p. 3765, 2018.
[30] K. Madsen, H. B. Nielsen, and O. Tingleff, "Methods for non-linear least squares problems," 1999.
[31] R. Feliz Alonso, E. Zalama Casanova, and J. Gómez García-Bermejo, "Pedestrian tracking using inertial sensors," 2009.
[32] J. M. Bland and D. Altman, "Statistical methods for assessing agreement between two methods of clinical measurement," The Lancet, vol. 327, no. 8476, pp. 307–310, 1986.