Observability of the relative motion from inertial data in kinematic chains
Manon Kok⋆, Karsten Eckhoff⋆⋆, Ive Weygers† and Thomas Seel⋆⋆

⋆ Delft Center for Systems and Control, Delft University of Technology, the Netherlands. E-mail: [email protected]
⋆⋆ Control Systems Group, Technische Universität Berlin, Germany. E-mails: {karsten.eckhoff@campus.tu-berlin.de, [email protected]}
† Department of Rehabilitation Sciences, KU Leuven Campus Bruges, Belgium. E-mail: [email protected]
Abstract
In recent years, it has been shown that the motion of kinematic chains can be estimated using measurements from inertial sensors placed on segments connected by rotational joints. These methods specifically avoid using magnetometer measurements, which are known to cause issues since the magnetic field at the different sensor locations is typically different. They rely on the assumption that the motion of the kinematic chain is sufficiently rich to yield observability of the relative pose. However, a formal investigation of this crucial requirement has not yet been presented and no specific conditions for observability have so far been given. In this work, we present an observability analysis and show that the relative pose of the body segments is indeed observable under a very mild condition on the motion. We support these results by a simulation study, in which we also show the effect of stationary periods in the data and of the amount of excitation on the accuracy of the estimates. We use experimental data from a human gait experiment to show that the excitation level is sufficient for obtaining accurate estimates even when the subject remains stationary for a period of 47 seconds halfway through the experiment.
Keywords:
Observability, inertial sensors, motion estimation, kinematic chains.

Figure 1: Two adjacent segments i and j of a kinematic chain connected by a spherical joint.

1 Introduction

In recent years, miniature inertial measurement units (IMUs) have been used in an increasing number of mechatronic and biomechanical applications, ranging from miniature aerial vehicles and autonomous cars [1–3] to human motion capture and feedback-controlled biomedical devices [4, 5]. These sensors typically contain three-dimensional gyroscopes, accelerometers and magnetometers. Accelerometers and gyroscopes, measuring the specific force and the angular velocity, respectively, are also called inertial sensors. A range of methods have been proposed that enable IMU-based estimation of orientations and joint angles with errors in the range of a few degrees in biomechanical systems [6] and even with sub-degree accuracy in mechatronic applications [3].

In many of the aforementioned applications, IMUs are used to track the motion of kinematic chains consisting of segments that are connected by rotational joints, as illustrated in Fig. 1. These could for example be human body segments, aerial vehicles that are connected in flight, or links of a robotic mechanism. The motion of a kinematic chain can be described as a combination of the absolute orientation and position of the entire chain – or the root segment – plus the relative orientations of the joints. To estimate the absolute pose of this kinematic chain, it is typically necessary to combine the inertial measurements with additional sensors or model assumptions [7]. Magnetometers are, for instance, often used to estimate the absolute heading.
A major limitation of this approach is that it relies on the assumption that the magnetic field is homogeneous, which is often violated in indoor environments due to the presence of ferromagnetic material or electronic devices [8, 9]. In recent years, interest has therefore increased in developing magnetometer-free approaches that only estimate the relative pose of the kinematic chain.

Assuming that each body segment is equipped with an IMU, our interest lies in the estimation of the relative pose of the body segments from only inertial measurements, without assumptions on the joint type. In other words, we focus on three-dimensional joints, without restrictions on the range of motion. Previous work has shown that the relative pose of the body segments can be determined entirely from inertial measurements by taking the connection between the segments and the corresponding kinematic constraints into account [10–12], under the assumption that the motion is “sufficiently rich”. This argument that the performed motions must be sufficiently rich to render the relative orientation observable is found throughout the mentioned literature, and it is often claimed that this is a mild assumption. However, precise statements on necessary or sufficient conditions have to date only been found for the special case of a double-hinge joint system [13]. From a practical point of view, this represents a severe limitation of magnetometer-free inertial motion analysis, since the methods must be used without knowing whether the results are accurate for the specific motion or not. In this work we study sufficient conditions for observability of the relative pose of the kinematic chain. This opens up for crucial performance guarantees in a large range of applications.
It can also be used to instruct users to perform certain movements to guarantee this performance.

The question under which conditions the relative motion states of a kinematic chain can be determined from magnetometer-free IMU readings and a kinematic constraint leads to an observability analysis of a nonlinear dynamic system. It is known from systems and control theory that this is a non-trivial problem and that no general closed-form condition for observability of nonlinear systems exists. However, some theoretical results exist, and we leverage them to derive a solid condition for accurate results in magnetometer-free inertial motion analysis. The contributions of the present work are four-fold:

1. We show the mathematical equivalence of the two most commonly used models to take kinematic constraints due to the connection between body segments into account. Moreover, we propose two reduced model representations which are also mathematically equivalent but have state vectors of a smaller size.
2. We present an observability analysis that is valid for these four models and that states under which (mild) conditions the relative orientation can be uniquely determined.
3. We analyse the estimation accuracy as a function of the type of motion and the level of excitation.
4. An experimental validation shows that good accuracies are achieved for realistic human motion and sensor noise levels.
2 Related work

To estimate the motion of a kinematic chain, the most common approach is, as we also do in this work, to use a full IMU setup, which means that an IMU is placed on each segment of the kinematic chain. Methods with sparse sensor setups have been proposed but require additional assumptions on the kinematic structure and its motion [13–15]. As discussed in Section 1, magnetometer-free approaches have been proposed that determine the relative orientation between segments by taking the connection between the segments and the corresponding kinematic constraints into account. The practical relevance of such methods is high, since they enable accurate motion tracking in arbitrary magnetic environments and thus in a wide field of applications. In the following, we briefly summarise existing magnetometer-free approaches for relative-motion tracking of kinematic chains, and we answer two questions: which magnetometer-free methods have been proposed for inertial motion capture, and under which conditions are these methods known to work or fail?

For joints with only one degree of freedom, i.e. hinge joints or revolute joints, several methods have been proposed that exploit constraints on the relative orientation between the adjacent segments [16, 17]. However, the kinematic constraints become singular when the hinge joint axis is vertical, and the relative heading cannot be tracked if the system remains at the singularity. Moreover, to apply such methods, the direction of the joint axis must be known in the sensor frames of both adjacent segments, which requires suitable sensor-to-segment calibration routines [18, 19]. For joints with two degrees of freedom, e.g.
saddle joints or the human elbow, magnetometer-free methods have been proposed that determine the relative heading by exploiting kinematic constraints of the angular rates [20] and the orientations [21]. As in the hinge joint case, the relative heading cannot be tracked if one of the joint axes remains vertical, and the methods require identification of both joint axes' coordinates in the corresponding sensor frames [22]. Finally, for joints with up to three degrees of freedom, range-of-motion constraints can be exploited on a moving window to track the relative heading [23]. This approach requires knowledge of the range of motion and persistent excitation in the sense of sufficient coverage of that range.

The aforementioned methods have in common that they apply only to joints with limited degrees of freedom or range of motion. Besides such rotational constraints, it has also been shown that one can exploit the mere fact that the segment ends that are connected by the joint cannot move apart. This constraint is independent of the joint's degrees of freedom and range of motion, and it can be formalised in two different ways. In [10, 11, 24], it is formulated in terms of the joint centre position, while the constraint is written in terms of the joint centre acceleration in [12, 25–27]. For both versions, it was demonstrated that exploitation of the constraint enables determining the relative orientation between the segments, at least if the performed motion provides sufficient excitation. In this work, we focus on deriving sufficient conditions for observability of the relative orientations.
3 Models

We consider a kinematic chain with at least two rigid segments that are connected by a rotational joint. Our interest lies in estimating the relative pose of these connected segments, using sensors placed on each segment. In other words, our focus is on determining the relative position, velocity and orientation from the accelerometer measurements y_{a,i,t} and the gyroscope measurements y_{ω,i,t}. Here, the subindex i explicitly indicates that these are the measurements from sensor S_i, where i = 1, ..., N_S. We denote the position and velocity of sensor i as p^n_i and v^n_i, respectively, where the superscript n is used to indicate that these vectors are expressed in the navigation frame n, which is a fixed, static coordinate frame. The origin of this frame and the direction of its axes are irrelevant in our problem formulation since we are only interested in the relative position and orientation of the sensors. The orientation of each sensor is described in terms of a rotation matrix R^{ns_i}, which denotes the rotation from the sensor frame s_i to the static coordinate frame n. The origin of the frame s_i lies at the centre of the accelerometer triad of sensor i and its axes are aligned with the inertial sensor axes. We assume that the location and the orientation of the sensors on the body segments are known. These can for instance be obtained from the pre-calibration algorithms in [18, 28].
Hence, estimating the relative position and orientation of the sensors becomes equivalent to estimating the relative position and orientation of the body segments.

We assume standard measurement models for the accelerometer and gyroscope measurements,

  y_{a,i,t} = f^{s_i}_{i,t} + b_{a,i} + e_{a,i,t} = R^{s_i n}_t ( a^n_{i,t} − g^n ) + b_{a,i} + e_{a,i,t},   (1a)
  y_{ω,i,t} = ω^{s_i}_{i,t} + b_{ω,i} + e_{ω,i,t},   (1b)

where f^{s_i}_{i,t} and ω^{s_i}_{i,t}, respectively, denote the specific force and the angular velocity at time t of sensor i expressed in the sensor frame s_i, a^n_i denotes the acceleration of sensor i expressed in the navigation frame n and g^n denotes the Earth's gravity. Furthermore, b_{a,i} and b_{ω,i} denote the accelerometer and gyroscope sensor biases, respectively. These are often assumed to be constant for the duration of the data set. Finally, e_{a,i,t} and e_{ω,i,t} denote the accelerometer and gyroscope measurement noise, which is assumed to be Gaussian i.i.d.

In the remainder of this section, we will present four different continuous-time state space models that can be used to determine the relative pose of the segments. All four state space models use the inertial sensor measurements as an input to the dynamics. Hence, we express the continuous-time dynamics of the position, velocity and orientation in terms of the continuous-time specific force and angular velocity. We consider the information about the connection of the body segments as a pseudo-measurement model. The two state space models presented in Section 3.1 and Section 3.2 are widely used in literature. We present two novel state space models in Section 3.3. In Section 3.4 we will show that the four models are mathematically equivalent.

3.1 Modelling each sensor's position, velocity and orientation

In the model used in [10, 24], the relative pose is estimated by parametrising the system in terms of the absolute position p^n_i, velocity v^n_i and orientation R^{ns_i}.
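As a concrete illustration, the measurement models (1a)–(1b) can be simulated directly. The sketch below assumes a navigation-frame gravity vector g^n = (0, 0, −9.81) m/s²; the function and variable names are hypothetical and not taken from the paper:

```python
import numpy as np

def simulate_imu(R_ns, a_n, omega_s, b_a, b_w, sigma_a, sigma_w, rng):
    """Generate accelerometer and gyroscope readings according to (1a)-(1b).

    R_ns    : 3x3 rotation from sensor frame s to navigation frame n
    a_n     : sensor acceleration in the navigation frame
    omega_s : angular velocity in the sensor frame
    b_a,b_w : constant biases; sigma_a, sigma_w: i.i.d. Gaussian noise std
    """
    g_n = np.array([0.0, 0.0, -9.81])   # assumed gravity convention
    f_s = R_ns.T @ (a_n - g_n)          # specific force, R^{s n} = (R^{n s})^T
    y_a = f_s + b_a + sigma_a * rng.standard_normal(3)
    y_w = omega_s + b_w + sigma_w * rng.standard_normal(3)
    return y_a, y_w
```

For a stationary sensor (zero acceleration), the accelerometer then reads the reaction to gravity, as expected from (1a).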
Assuming that the orientation is parametrised as a three-dimensional vector [7], the state vector is of size 9 N_S. Note that this is clearly an overparametrisation since the inertial sensors do not provide any information about the absolute position, velocity and heading of the body. It is, however, a very flexible model since it can straightforwardly include additional information such as GPS measurements. The dynamics is modelled as

  ṗ^n_i = v^n_i,   (2a)
  v̇^n_i = a^n_i = R^{ns_i} f^{s_i}_i + g^n,   (2b)
  Ṙ^{ns_i} = R^{ns_i} [ω^{s_i}_i ×],   (2c)

for i = 1, ..., N_S, where [·×] denotes the matrix form of the cross product. Note that the specific force f_i and the angular velocity ω_i are measured by the inertial sensors as in (1). The connection between the body segments is modelled as

  0 = p^n_i + R^{ns_i} r^{s_i}_{ij} − p^n_j − R^{ns_j} r^{s_j}_{ji}.   (3)

Here, r^{s_i}_{ij} is the vector from sensor S_i to the joint centre connecting the segments i and j, expressed in the sensor frame s_i. Note the reversed subindices for r^{s_j}_{ji}, which denotes the vector from sensor S_j to the joint centre connecting the segments i and j. Hence, (3) models the equivalence between the location of the joint centre as expressed in the coordinates of sensors i and j. Assuming that r^{s_i}_{ij} and r^{s_j}_{ji} are known, the model can both be used as a constraint [10] in an optimisation problem and as a measurement model [24] in e.g. an extended Kalman filter implementation.

3.2 Modelling each segment's orientation

In the model used in [12, 25, 26], the relative pose is modelled only in terms of the orientation. Hence, assuming again that the orientation is parametrised as a three-dimensional vector [7], the state vector is of size 3 N_S. If the body is assumed to be rigid and the location of the sensors is known, it is possible to compute the full relative pose from the relative orientations. The dynamics is modelled by (2c) for i = 1, ..., N_S.
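The dynamics (2a)–(2c) can be integrated numerically from the measured specific force and angular velocity. Below is a minimal forward-Euler sketch, with the orientation propagated over one step via Rodrigues' formula; the gravity convention and all names are assumptions for illustration:

```python
import numpy as np

def skew(v):
    """Matrix form of the cross product: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(w, dt):
    """Rotation matrix exp(skew(w) * dt) via Rodrigues' formula."""
    angle = np.linalg.norm(w) * dt
    if angle < 1e-12:
        return np.eye(3)
    K = skew(w / np.linalg.norm(w))
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def integrate_strapdown(p, v, R, f_s, w_s, dt):
    """One integration step of (2a)-(2c), driven by the IMU measurements."""
    g_n = np.array([0.0, 0.0, -9.81])   # assumed gravity convention
    a_n = R @ f_s + g_n                 # (2b)
    p_new = p + dt * v                  # (2a)
    v_new = v + dt * a_n
    R_new = R @ rodrigues(w_s, dt)      # (2c)
    return p_new, v_new, R_new
```

Without the connection model (3), repeated application of such a step drifts due to bias and noise, which is exactly why the constraint is needed.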
The connection between the body segments is modelled as

  0 = R^{ns_i} ( f^{s_i}_i + ( [ω^{s_i}_i ×]² + [ω̇^{s_i}_i ×] ) r^{s_i}_{ij} ) − R^{ns_j} ( f^{s_j}_j + ( [ω^{s_j}_j ×]² + [ω̇^{s_j}_j ×] ) r^{s_j}_{ji} ),   (4)

where ω̇ denotes the angular acceleration. This models the equivalence between the specific force, or equivalently the acceleration, of the joint centre as expressed in the coordinates of body segments i and j. In the remainder of this paper, we will also make use of the shorthand notation f^{s_i}_{c_{ij},i} to denote the specific force at the joint centre c_{ij}, measured by sensor i and expressed in sensor frame s_i, defined as

  f^{s_i}_{c_{ij},i} = f^{s_i}_i + ( [ω^{s_i}_i ×]² + [ω̇^{s_i}_i ×] ) r^{s_i}_{ij}.   (5)

Using that notation, (4) can equivalently be written as

  0 = R^{ns_i} f^{s_i}_{c_{ij},i} − R^{ns_j} f^{s_j}_{c_{ij},j}.   (6)

3.3 Modelling the relative position, velocity and orientation

The models from Sections 3.1 and 3.2 are clearly overparametrisations of the problem. Although this is not widely used in literature, it is also possible to use a minimal representation. The benefit of overparametrisation is the flexibility of adding more information to the system. The benefit of parametrising the system with fewer states is that it is computationally more efficient. We define the relative position, velocity and orientation as

  p^{s_i}_{r,ij} = R^{s_i n} ( p^n_i − p^n_j ),   (7a)
  v^{s_i}_{r,ij} = R^{s_i n} ( v^n_i − v^n_j ),   (7b)
  R^{s_i s_j} = R^{s_i n} R^{n s_j},   (7c)

where p^{s_i}_{r,ij} and v^{s_i}_{r,ij} are the relative position and velocity, respectively, of segment i with respect to segment j, expressed in frame s_i. It is now possible to write the dynamics of these relative states as

  ṗ^{s_i}_{r,ij} = −[ω^{s_i}_i ×] p^{s_i}_{r,ij} + v^{s_i}_{r,ij},   (8a)
  v̇^{s_i}_{r,ij} = −[ω^{s_i}_i ×] v^{s_i}_{r,ij} + f^{s_i}_i − R^{s_i s_j} f^{s_j}_j,   (8b)
  Ṙ^{s_i s_j} = −[ω^{s_i}_i ×] R^{s_i s_j} + R^{s_i s_j} [ω^{s_j}_j ×],   (8c)
where we made use of (1a) and (2). The model (3) can be expressed in these states as

  0 = p^{s_i}_{r,ij} + r^{s_i}_{ij} − R^{s_i s_j} r^{s_j}_{ji}.   (9)

Estimating only the relative states, the size of the state vector reduces from 9 N_S in Section 3.1 to 9(N_S − 1). A second reduced model retains only the relative orientation R^{s_i s_j}. This reduces the size of the state vector to 3(N_S − 1).

3.4 Equivalence of the models

Making use of the dynamic models (2), the first and second derivatives of the model (3) can be written as

  d/dt ( p^n_i + R^{ns_i} r^{s_i}_{ij} − p^n_j − R^{ns_j} r^{s_j}_{ji} ) = v^n_i + R^{ns_i} [ω^{s_i}_i ×] r^{s_i}_{ij} − v^n_j − R^{ns_j} [ω^{s_j}_j ×] r^{s_j}_{ji},   (10a)
  d²/dt² ( p^n_i + R^{ns_i} r^{s_i}_{ij} − p^n_j − R^{ns_j} r^{s_j}_{ji} ) = a^n_i + R^{ns_i} [ω^{s_i}_i ×]² r^{s_i}_{ij} + R^{ns_i} [ω̇^{s_i}_i ×] r^{s_i}_{ij} − a^n_j − R^{ns_j} [ω^{s_j}_j ×]² r^{s_j}_{ji} − R^{ns_j} [ω̇^{s_j}_j ×] r^{s_j}_{ji}.   (10b)

It can therefore be concluded that the double derivative of the model for the joint position (3) is equal to the model for the joint acceleration (4). In other words, these models are mathematically equivalent, and an observability analysis of one of these models will straightforwardly hold for the other. Note that in practice this does not imply that the models are actually equivalent. They can, for instance, behave differently in the presence of noise and, contrary to the reduced model from Section 3.2, the extended model from Section 3.1 straightforwardly opens up for including additional information such as absolute position measurements.

4 Observability analysis

In this section we will study the observability of the relative pose of the body segments using the models presented in Section 3. Without loss of generality we focus on a two-segment system and will use the model from Section 3.2.
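Since this model is built on the joint-centre specific force (5) and the connection constraint (6), it is instructive to see how both are evaluated from IMU signals. A small sketch with hypothetical names; the residual is zero only for noise-free data on a rigid joint:

```python
import numpy as np

def skew(v):
    """Matrix form of the cross product: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def joint_centre_specific_force(f_s, w_s, wdot_s, r_s):
    """Eq. (5): shift the measured specific force to the joint centre,
    f_c = f + ([w x]^2 + [wdot x]) r, all expressed in the sensor frame."""
    W = skew(w_s)
    return f_s + (W @ W + skew(wdot_s)) @ r_s

def constraint_residual(R_nsi, R_nsj, f_ci, f_cj):
    """Eq. (6): difference of the joint-centre specific forces rotated
    into the navigation frame; zero when the constraint is satisfied."""
    return R_nsi @ f_ci - R_nsj @ f_cj
```

The `W @ W` term is the centripetal contribution ω × (ω × r), and `skew(wdot_s)` the tangential contribution ω̇ × r.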
Hence, we ask the question under which motions it is possible to uniquely determine the relative orientation of the two body segments using perfect (bias- and noise-free) inertial measurements and the models (2c) and (4).

Let us write the model from Section 3 as a general nonlinear state space model

  ẋ = f(x, u),   y = h(x, u),   (11)

where x ∈ M = SO(3) × SO(3) with x = (x_i^T  x_j^T)^T, and x_i, x_j denote the time-varying states representing the orientations of the two body segments. The (pseudo-)measurement vector y ∈ ℝ³ is represented by (4) and models the connection between the body segments in terms of the acceleration of the joint centre. Furthermore, u ∈ ℝ¹⁸ denotes the specific force, the angular velocity and the angular acceleration of each of the sensors. Note that under the assumption of bias- and noise-free measurements, the specific force and the angular velocity are directly measured by the inertial sensors. The angular acceleration can be obtained from the gyroscope measurements by numerical differentiation.

The purpose of observability analysis is to answer the question whether the states of the model (11) can uniquely be determined using knowledge about the measurements y and the inputs u [29]. Assuming that the functions f and h in (11) are smooth functions of their arguments and that the input u is smooth, we will study the observability of our system by considering N ≥ 0 time derivatives of the output y, with the N-th derivative denoted by y^(N). For some known input u, we define the mapping Φ_{u,N} : M → ℝ^{3(N+1)} by

  Φ_{u,N}(x(t)) = ( y(t), ẏ(t), ..., y^(N)(t) ).   (12)

A system is observable if, for a given known input u, there exists an N ≥ 0 such that the mapping Φ_{u,N}(x(t)) is injective, i.e. there are no two points in the state space that yield the same vector of output derivatives [30]. In this work, we will build on the stronger notion of local observability [29, 31], defined as:

Definition 1.
A system is called locally observable if for any x ∈ M, it is possible to distinguish x from all x̄ ∈ U \ {x} for every open neighbourhood U of x.

Note that local observability implies that it is possible to instantaneously and uniquely determine the state x from the inputs u and outputs y.

Before analysing the observability of our model (11) in detail, let us start by noticing two important properties of our model. Firstly, (4) depends on both the orientation of the first and that of the second segment. The system, however, does not include any information about the absolute orientation of the sensors. It is therefore clearly neither locally observable nor observable. Our interest, however, lies in determining the relative orientation of the sensors. Hence, we focus only on observability of the relative orientation. In other words, we study the observability of the system under the condition that one of the orientations is known or arbitrarily set to a certain value.

A second important property of the system (11) is that its observability depends on the motion performed, i.e. on the inputs to the system. This can most clearly be seen by writing down the model constraint (4) and its first-order derivative as

  y(t) = 0 = R^{ns_i} f^{s_i}_{c_{ij},i} − R^{ns_j} f^{s_j}_{c_{ij},j},   (13a)
  ẏ(t) = 0 = d/dt ( R^{ns_i} f^{s_i}_{c_{ij},i} ) − d/dt ( R^{ns_j} f^{s_j}_{c_{ij},j} ),   (13b)

where we make use of the shorthand notation from (5) and (6). In the case of no movement, for instance, these equations are clearly linearly dependent and hence the system is not observable. Because the observability depends on the inputs, we will study the observability at time t. Based on these observations, we extend on Definition 1 and define:

Definition 2.
The relative orientation is locally observable at time t if, under the assumption that the orientation of one of the segments is known, the system (11) is locally observable at time t.

Inputs for which it is not possible to uniquely determine a state x from the measurements are called singular inputs [29]. We can now analyse for which inputs the relative orientation of the system (11) is observable, i.e. which inputs are non-singular. This brings us to the main result of our observability analysis:

Theorem 1.
The relative orientation R^{s_j s_i} of the system (11) is locally observable according to Definition 2 for any time instance for which the specific force a^n_{c_{ij}} − g^n of the joint centre and its derivative ȧ^n_{c_{ij}} are linearly independent.

Proof. The system of equations (13) can be rewritten as

  f^n_{c_{ij},i} = R^{ns_j} f^{s_j}_{c_{ij},j},   ȧ^n_{c_{ij},i} = d/dt ( R^{ns_j} f^{s_j}_{c_{ij},j} ),   (14)

where we made use of the definition of the specific force from (1a) and assume that the orientation R^{ns_i} is known, following Definition 2. In (14) we can now recognise Wahba's problem [32], also known as the orthogonal Procrustes problem. Based on this, it is well known that the orientation R^{ns_j} can uniquely be determined from (14) if f^n_{c_{ij},i} and ȧ^n_{c_{ij},i} are linearly independent.

Theorem 1 implies that the system is not observable if e.g. the acceleration of the joint centre is constant or when the joint centre only moves along the vertical axis. Relatively little movement is required for observability. For instance, the system is observable if the joint centre moves along some non-vertical axis with a non-constant acceleration. In Section 5, we will illustrate this based on a simulation study. In the case the system is not observable at all time instances, integration of the inertial measurements in (2) results in a drift of the relative pose of the body segments. However, as soon as the user performs non-singular input motions, the system becomes observable and the drift can be removed using e.g. a smoothing-based approach. This is illustrated experimentally in Section 6.

5 Simulation study

We illustrate the observability results from Section 4 using Monte Carlo simulations in which we simulate different types of motion. Without loss of generality we again focus on two connected segments and use the model from Section 3.2. To estimate the orientation of the two sensors, we use the methods presented in [12].
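The Wahba argument underlying the proof of Theorem 1 can be made concrete: a rotation is uniquely determined by two pairs of linearly independent vectors. The SVD-based sketch below (a hypothetical helper, not the estimator of [12]) recovers a rotation from exactly such vector pairs:

```python
import numpy as np

def solve_wahba(pairs):
    """Find the rotation R minimising sum ||v_n - R v_s||^2 over (v_n, v_s).

    With two pairs of linearly independent, noise-free vectors the solution
    is unique, mirroring the observability condition of Theorem 1.
    """
    B = sum(np.outer(v_n, v_s) for v_n, v_s in pairs)
    U, _, Vt = np.linalg.svd(B)
    # det-correction keeps the result a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ D @ Vt

# recover a 90-degree rotation about z from two vector pairs
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
R_hat = solve_wahba([(v1, R_true.T @ v1), (v2, R_true.T @ v2)])
```

If the two navigation-frame vectors are parallel (the singular inputs of Theorem 1), B drops to rank one and the recovered rotation is no longer unique.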
That work presents both a filtering and a smoothing algorithm for the models (2c) and (4) and was extensively validated on experimental data. The filtering algorithm is an extended Kalman filter, while the smoothing algorithm uses Gauss–Newton optimisation. For more details we refer the interested reader to [12] and [7].

We simulate inertial measurements at 100 Hz with noise and bias properties of a similar order of magnitude as found in standard inertial sensors. The accelerometer noise is simulated to be zero-mean i.i.d. Gaussian with a standard deviation of 0. m/s². The gyroscope noise is simulated to be zero-mean i.i.d. Gaussian with a standard deviation of 1 °/s. Furthermore, for each Monte Carlo simulation we draw constant accelerometer and gyroscope biases from the uniform distributions U(−0. m/s², 0. m/s²) and U(−0. °/s, 0. °/s), respectively. Likewise, the distances from the sensors i and j to the joint are
set to constant values drawn from uniform distributions, i.e. r^{s_i}_{ij} = [U(0. , 0. )]^T and r^{s_j}_{ji} = −[U(0. , 0. )]^T. We quantify the estimation error in terms of the smallest angle θ by which the estimated relative orientation R̂^{s_i s_j} must be rotated to become identical to the true relative orientation R^{s_i s_j} [33]. We initialise each simulation such that the initial relative orientation error is 10°.

Figure 2: Relative angular velocity ω_rel (left column) and acceleration of the joint centre (middle column) for the observable motion (top) and the unobservable motion (bottom) described in Section 5.1, with the x-, y- and z-directions depicted in blue, green and red, respectively. Right column: mean angular error θ and the one-standard-deviation intervals for the filtering algorithm (blue), the smoothing algorithm (green) and the integration of the gyroscope signal (red) for the observable motion (top) and the unobservable motion (bottom) described in Section 5.1.

In Section 5.1 we first exemplify the results from Theorem 1 for both an observable and an unobservable type of motion. In Section 5.2 we subsequently show the effect on the relative orientation estimates if the motion is not locally observable for a significant amount of time due to the absence of motion. Finally, in Section 5.3 we study the effect on the accuracy of the relative orientation for different amounts of excitation. The results are also illustrated in the supplementary video available at https://tinyurl.com/y4b2esjm.
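The error metric used throughout this section, the smallest angle θ rotating the estimate onto the truth, follows directly from the trace of the relative rotation matrix. A small sketch (hypothetical function name):

```python
import numpy as np

def angular_error_deg(R_est, R_true):
    """Smallest rotation angle (in degrees) between two rotation matrices."""
    R_rel = R_est.T @ R_true
    # trace formula: tr(R) = 1 + 2 cos(theta); clip guards rounding errors
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

Averaging this quantity over the Monte Carlo runs gives the mean angular error curves shown in the figures.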
5.1 Observable and unobservable motions

In this section we will illustrate the results from Theorem 1 by simulating specific motions that are (not) observable according to the theorem and studying the estimated relative orientation using the algorithm from [12].

First, we study the estimates of the relative orientation as the joint centre moves with non-constant acceleration for 45 seconds along an axis that is perpendicular to the gravity direction. The two segments do not rotate around the joint centre. This motion is illustrated in terms of the relative angular velocity ω_rel and the acceleration of the joint centre a^n_{c_{ij}} in Fig. 2. Although the excitation of the system is only very limited, for almost all time instances the change of acceleration is non-zero in a direction orthogonal to the acceleration due to gravity, i.e. the vectors a^n_{c_{ij}} − g^n and ȧ^n_{c_{ij}} are non-parallel to each other. Hence, according to Theorem 1, the relative orientation is locally observable for almost all time instances. The mean and standard deviation of the relative orientation estimates for the 100 Monte Carlo simulations are depicted in Fig. 2. As can be seen, the error in the relative orientation converges quickly and remains small for both the filtering and the smoothing algorithms. More specifically, after the first five seconds that are needed for the initial convergence, the mean errors of the filtering and smoothing algorithms are 1. ° and 0. °, respectively. The maximum errors are 4. ° and 4. °, respectively. For completeness, we also include the error from integration of the gyroscope only, which can be seen to grow over time up to a maximum of 26. °.

Next, we study the estimates of the relative orientation as the joint centre moves for 150 seconds along the direction of the gravity vector with non-constant acceleration. The two segments rotate arbitrarily around the joint centre. This motion is again illustrated in Fig. 2.
For all time instances the change of acceleration is only non-zero in the direction of the acceleration due to gravity, i.e. the vectors a^n_{c_{ij}} − g^n and ȧ^n_{c_{ij}} and the vectors of all higher derivatives of the constraint are parallel to each other. Hence, according to Theorem 1, the relative orientation is not locally observable. The mean and standard deviation of the relative orientation estimates from the 100 Monte Carlo simulations are depicted in Fig. 2. As can be seen, the error in the relative orientation increases over time for both the filtering and the smoothing algorithms up to a maximum of 61. ° and 45. °, respectively, compared to a maximum error of 73. ° for the integration of the gyroscope signal only. Note that even though the complete relative orientation is not observable, some parts are still observable, which explains why the error initially goes down but then increases over time.

Figure 3: Relative angular velocity ω_rel (top) and acceleration of the joint centre (middle) for the temporarily unobservable motion described in Section 5.2, with the x-, y- and z-directions depicted in blue, green and red, respectively. Bottom: resulting mean angular error θ and the one-standard-deviation intervals for the filtering algorithm (blue), the smoothing algorithm (green) and the integration of the gyroscope signal (red).

5.2 Stationary periods

We now study the effect of stationary periods in the data on the estimation accuracy. To this end, we simulate 110 seconds of data for which the joint centre arbitrarily changes its position in the reference frame with a non-constant acceleration for the first 30 seconds and the last 20 seconds of the simulation. During 60 seconds in the middle of the data set, the segments are both assumed to be at rest. During these times, the system is temporarily not locally observable. This motion is illustrated in Fig. 3. As can be seen in the bottom plot of Fig. 3, the error in the relative orientation estimated by the filtering algorithm grows during the 60 seconds of stationarity to a mean error of 5. ° and a maximum error of 28. °. In the smoothing algorithm, all measurements are used to compute the estimates. Hence, this algorithm is able to keep the errors low, with a mean error of 1. ° and a maximum error of 10. °. Note that in practice, divergence of the estimator would only be an issue during non-observable motions that are not stationary, since stationary poses can be detected and the estimation can be paused while the segments do not move.
Figure 4: Acceleration of the joint centre [m/s²] (top) for the motion with varying degrees of excitation described in Section 5.3, with the x-, y- and z-directions depicted in blue, green and red, respectively. Middle: resulting mean angular error θ and the one-standard-deviation intervals for the filtering algorithm (blue), the smoothing algorithm (green) and the integration of the gyroscope signal (red). Bottom: the measure o_t as defined in (15), visualising the amount of excitation and its effect on the observability of the relative pose.

We also study the effect of the amount of excitation on the quality of the estimates. To this end, we simulate the same motion as the observable motion in Section 5.1 but vary the level of excitation. Hence, the relative movement of the sensors looks similar to that in the top row of Fig. 2, but with the joint centre acceleration varying in excitation level as shown in Fig. 4. The angular error in the estimates of the filtering algorithm can be seen to increase up to a mean error of 3.° and a maximum error of 10.° during the time of low excitation. The angular errors in the estimates of the smoothing algorithm are barely affected by this period of low excitation. To further visualise the amount of excitation and its effect on the observability of the relative pose, we compute for each time instance t a measure o_t that equals the norm of the cross product of $a^n_{c_{ij},t} - g^n$ and $\dot{a}^n_{c_{ij},t}$, averaged over the last 100 samples, as
\[
o_t = \frac{1}{100} \sum_{\tau=0}^{99} \left\| \left( a^n_{c_{ij},t-\tau} - g^n \right) \times \dot{a}^n_{c_{ij},t-\tau} \right\| . \qquad (15)
\]
Note that, according to Theorem 1, the vectors $a^n_{c_{ij},t} - g^n$ and $\dot{a}^n_{c_{ij},t}$ need to be linearly independent for the relative orientation to be observable at time instance t.

We illustrated observability of the relative orientation for different motion types on simulated data in Section 5.
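The excitation measure o_t can be computed directly from the joint-centre acceleration samples. The sketch below is a minimal numpy implementation of (15), assuming the acceleration is available as an (N, 3) array in the navigation frame; the finite-difference derivative, the gravity-vector convention, the 100 Hz sampling rate and the function name are our own assumptions for illustration.

```python
import numpy as np

def excitation_measure(a_nc, fs=100.0, g=np.array([0.0, 0.0, -9.81]), window=100):
    """Excitation measure o_t as in (15): the norm of the cross product of
    (a^n_c - g^n) and its time derivative, averaged over the last `window`
    samples (a causal moving average).

    a_nc : (N, 3) joint-centre acceleration in the navigation frame [m/s^2].
    fs   : sampling rate [Hz]; the derivative is a finite-difference estimate.
    """
    a_dot = np.gradient(a_nc, 1.0 / fs, axis=0)               # \dot{a}^n_c
    cross_norm = np.linalg.norm(np.cross(a_nc - g, a_dot), axis=1)
    # Causal average over the last `window` samples at each time instance.
    o = np.convolve(cross_norm, np.ones(window) / window, mode="full")
    return o[: len(cross_norm)]
```

For a constant acceleration the derivative vanishes, so o_t is zero and the motion is flagged as unobservable, in line with Theorem 1; for an acceleration whose direction varies over time, o_t is positive.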
To evaluate these results in a biomechanical application with realistic excitation levels, a subject was asked to consecutively walk for 198 s at a comfortable self-selected pace and direction, sit down for 47 s and walk again for 185 s.

Figure 5: Experimental set-up with inertial sensors (and surrounding optical cluster markers) attached laterally and mid-distance on the thigh and shank segments. The subject executed a walking activity of 198 s, followed by a sitting activity of 47 s and another walking activity of 185 s.

During the experiment, accelerometer and gyroscope measurements from thigh- and shank-worn IMUs (MTw Awinda, Xsens) and marker trajectories from optical cluster markers (Vicon Vero, Vicon Motion Systems Ltd) were simultaneously captured at a 100 Hz sampling rate via hardware time synchronisation. A picture of the experimental set-up can be found in Figure 5. The study was approved by the institutional research committee of KU Leuven (Clinical Trial Center UZ Leuven, Nr. S58936). All tests were done in accordance with the 1964 Helsinki declaration and its later amendments.

Relative sensor orientations were again estimated with a filtering and a smoothing implementation following [12]. Estimated and reference relative orientations were aligned using Theorem 4.2 from [34]. The resulting angular distance errors for both the filtering and the smoothing implementation can be found in Figure 6. We also show the measure o_t as defined in (15), computed from the data of one of the sensors. The measure clearly shows that the relative orientation is not observable when the subject is sitting still.

With respect to the knee joint, a walking movement at a self-selected comfortable pace contains a sufficient amount of excitation of the joint centre to make the relative orientation locally observable. The mean angular distances that the filter and the smoother achieve are 5.° and 3.°, respectively, for the part of the data where the person was walking.
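The alignment of estimated and reference orientation trajectories follows Theorem 4.2 from [34], which we do not restate here. A common closed-form construction for this type of alignment is to estimate one fixed correction rotation in the least-squares (Wahba) sense via an SVD and then evaluate the per-sample angular distance. The sketch below illustrates that generic idea under our own assumptions; it is not the exact procedure of [34].

```python
import numpy as np

def align_and_angle_errors(R_est, R_ref):
    """Align an estimated orientation trajectory to a reference by one fixed
    rotation dR minimising sum_t ||R_ref[t] - dR @ R_est[t]||_F (a Wahba-type
    problem with a closed-form SVD solution), then return the per-sample
    angular distance in degrees.

    R_est, R_ref : (N, 3, 3) arrays of rotation matrices.
    """
    # Correlation matrix M = sum_t R_ref[t] @ R_est[t].T and its SVD.
    M = np.einsum('tij,tkj->ik', R_ref, R_est)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det(dR) = +1
    dR = U @ D @ Vt
    # Angular distance between dR @ R_est[t] and R_ref[t].
    R_aligned = np.einsum('ij,tjk->tik', dR, R_est)
    traces = np.einsum('tij,tij->t', R_aligned, R_ref)  # trace(R_a^T R_ref)
    return np.degrees(np.arccos(np.clip((traces - 1.0) / 2.0, -1.0, 1.0)))
```

When the estimated trajectory differs from the reference only by a constant rotation, the recovered dR removes it exactly and the angular errors vanish.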
During the stationary part, the error for the smoothing implementation remained small, with a maximum angular distance of 4.°. The error of the filtering implementation grew to 19.° after 47 seconds, but it quickly recovered at walking onset, after the temporarily unobservable static period. This behaviour is in line with the simulation results from Section 5.2.

In this work, we have derived conditions on the observability of the relative pose of a kinematic chain, estimated using inertial sensors placed on adjacent body segments. We have shown that the relative orientation is observable as long as the specific force of the joint centre and its derivative are linearly independent. Simulations and experimental results revealed that excitation alone does not suffice and that accurate estimates are obtained in motions that fulfil the derived criterion. These insights from systems and control theory overcome the need to blindly hope for sufficient excitation in magnetometer-free inertial motion tracking. They are thus expected to have an impact on many practical applications. The derived observability condition can be used to provide crucial performance guarantees as well as to instruct users to perform certain motions that ensure observability. Future work could include a more extensive study of the accuracy of the pose estimates as a function of the sensor noise levels and of the cross product of the specific force of the joint centre and its derivative, to provide bounds on the error of the estimated pose.
Figure 6: Top: angular error θ of the filtering algorithm (blue) and the smoothing algorithm (green) for the experimental data described in Section 6. Bottom: the measure o_t as defined in (15), visualising the amount of excitation and its effect on the observability of the relative pose.

Acknowledgement
The conducted human experiments were funded by the European Regional Development Fund – We-lab for HTM [grant number 1047].
References

[1] F. Hoffmann, N. Goddemeier, and T. Bertram. Attitude estimation and control of a quadrocopter. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), pages 1072–1077, Taipei, Taiwan, October 2010.
[2] G. Wan, X. Yang, R. Cai, H. Li, Y. Zhou, H. Wang, and S. Song. Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes. In Proceedings of the International Conference on Robotics and Automation (ICRA), pages 4670–4677, Brisbane, Australia, May 2018.
[3] V. Rodrigo Marco, J. Kalkkuhl, J. Raisch, W. J. Scholte, H. Nijmeijer, and T. Seel. Multi-modal sensor fusion for highly accurate vehicle motion state estimation. Control Engineering Practice, 100:104409, 2020. doi: 10.1016/j.conengprac.2020.104409.
[4] S. Zihajehzadeh, P. K. Yoon, B. Kang, and E. J. Park. UWB-aided inertial motion capture for lower body 3-D dynamic activity and trajectory tracking. IEEE Transactions on Instrumentation and Measurement, 64(12):3577–3587, 2015.
[5] T. Seel, D. Graurock, and T. Schauer. Realtime assessment of foot orientation by accelerometers and gyroscopes. Current Directions in Biomedical Engineering, 1(1):466–469, 2015. doi: 10.1515/cdbme-2015-0112.
[6] T. L. Baldi, S. Scheggi, L. Meli, M. Mohammadi, and D. Prattichizzo. Gesto: A glove for enhanced sensing and touching based on inertial and magnetic sensors for hand tracking and cutaneous feedback. IEEE Transactions on Human-Machine Systems, 47(6):1066–1076, 2017.
[7] M. Kok, J. D. Hol, and T. B. Schön. Using inertial sensors for position and orientation estimation. Foundations and Trends in Signal Processing, 11(1–2):1–153, 2017.
[8] W. H. K. de Vries, H. E. J. Veeger, C. T. M. Baten, and F. C. T. van der Helm. Magnetic distortion in motion labs, implications for validating inertial magnetic sensors. Gait & Posture, 29(4):535–541, June 2009. doi: 10.1016/j.gaitpost.2008.12.004.
[9] Y. Shu, C. Bo, G. Shen, C. Zhao, L. Li, and F. Zhao. Magicol: Indoor localization using pervasive magnetic field and opportunistic WiFi sensing. IEEE Journal on Selected Areas in Communications, 33(7):1443–1457, July 2015. doi: 10.1109/JSAC.2015.2430274.
[10] M. Kok, J. D. Hol, and T. B. Schön. An optimization-based approach to human body motion capture using inertial sensors. In Proceedings of the 19th World Congress of the International Federation of Automatic Control (IFAC), pages 79–85, Cape Town, South Africa, August 2014.
[11] J. D. Hol. Sensor Fusion and Calibration of Inertial Sensors, Vision, Ultra-Wideband and GPS. Dissertation no. 1368, Linköping University, Linköping, Sweden, June 2011.
[12] I. Weygers, M. Kok, H. de Vroey, T. Verbeerst, M. Versteyhe, H. Hallez, and K. Claeys. Drift-free inertial sensor-based joint kinematics for long-term arbitrary movements. IEEE Sensors Journal, 20(14):7969–7979, 2020.
[13] K. Eckhoff, M. Kok, S. Lucia, and T. Seel. Sparse magnetometer-free inertial motion tracking – a condition for observability in double hinge joint systems. In Proceedings of the 21st World Congress of the International Federation of Automatic Control (IFAC), pages 1–8, Berlin, Germany, 2020.
[14] Y. Huang, M. Kaufmann, E. Aksan, M. Black, O. Hilliges, and G. Pons-Moll. Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time. ACM Transactions on Graphics, 37:1–15, December 2018.
[15] T. von Marcard, B. Rosenhahn, M. J. Black, and G. Pons-Moll. Sparse inertial poser: Automatic 3D human pose estimation from sparse IMUs. Computer Graphics Forum, 36:349–360, 2017.
[16] G. Cooper, I. Sheret, L. McMillian, K. Siliverdis, N. Sha, D. Hodgins, L. Kenney, and D. Howard. Inertial sensor-based knee flexion/extension angle estimation. Journal of Biomechanics, 42(16):2678–2685, December 2009. doi: 10.1016/j.jbiomech.2009.08.004.
[17] D. Laidig, T. Schauer, and T. Seel. Exploiting kinematic constraints to compensate magnetic disturbances when calculating joint angles of approximate hinge joints from orientation estimates of inertial sensors. In Proceedings of the International Conference on Rehabilitation Robotics (ICORR), pages 971–976, London, UK, July 2017.
[18] T. Seel, T. Schauer, and J. Raisch. Joint axis and position estimation from inertial measurement data by exploiting kinematic constraints. In Proceedings of the IEEE International Conference on Control Applications, pages 45–49, Dubrovnik, Croatia, October 2012.
[19] F. Olsson, M. Kok, T. Seel, and K. Halvorsen. Robust plug-and-play joint axis estimation using inertial sensors. MDPI Sensors, 20(12):3534, 2020.
[20] D. Laidig, D. Lehmann, M. A. Bégin, and T. Seel. Magnetometer-free realtime inertial motion tracking by exploitation of kinematic constraints in 2-DOF joints. In Proceedings of the 41st IEEE International Engineering in Medicine and Biology Conference (EMBC), Berlin, Germany, July 2019.
[21] H. J. Luinge, P. H. Veltink, and C. T. M. Baten. Ambulatory measurement of arm orientation. Journal of Biomechanics, 40(1):78–85, January 2007. doi: 10.1016/j.jbiomech.2005.11.011.
[22] D. Laidig, P. Mueller, and T. Seel. Automatic anatomical calibration for IMU-based elbow angle measurement in disturbed magnetic fields. Current Directions in Biomedical Engineering, 3(2):167–170, 2017. doi: 10.1515/cdbme-2017-0035.
[23] D. Lehmann, D. Laidig, R. Deimel, and T. Seel. Magnetometer-free inertial motion tracking of arbitrary joints with range-of-motion constraints. In Proceedings of the 21st World Congress of the International Federation of Automatic Control (IFAC), pages 1–8, Berlin, Germany, July 2020.
[24] M. Miezal, B. Taetz, and G. Bleser. On inertial body tracking in the presence of model calibration errors. MDPI Sensors, 16(7):1132, 2016.
[25] J. K. Lee and T. H. Jeon. Magnetic condition-independent 3D joint angle estimation using inertial sensors and kinematic constraints. MDPI Sensors, 19(24):5522, 2019.
[26] B. Fasel, J. Spörri, J. Chardonnens, J. Kröll, E. Müller, and K. Aminian. Joint inertial sensor orientation drift reduction for highly dynamic movements. IEEE Journal of Biomedical and Health Informatics, 22(1):77–86, 2018.
[27] E. Dorschky, M. Nitschke, A.-K. Seifer, A. J. van den Bogert, and B. M. Eskofier. Estimation of gait kinematics and kinetics from inertial sensor data using optimal control of musculoskeletal models. Journal of Biomechanics, 95:109278, 2019.
[28] F. Olsson and K. Halvorsen. Experimental evaluation of joint position estimation using inertial sensors. In Proceedings of the 20th International Conference on Information Fusion, pages 1–8, Xi'an, China, July 2017.
[29] G. Besançon. An overview on observer tools for nonlinear systems. In G. Besançon, editor, Nonlinear Observers and Applications, pages 1–33. Springer, 2007.
[30] G. Besançon. A link between output time derivatives and persistent excitation for nonlinear observers. In Proceedings of the 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS), pages 493–498, Monterey, California, USA, August 2016.
[31] R. Hermann and A. Krener. Nonlinear controllability and observability. IEEE Transactions on Automatic Control, 22(5):728–740, 1977.
[32] G. Wahba. Problem 65-1: A least squares estimate of satellite attitude. SIAM Review, 7(3):409, 1965.
[33] R. Hartley, J. Trumpf, Y. Dai, and H. Li. Rotation averaging. International Journal of Computer Vision, 103(3):267–305, 2013.
[34] J. D. Hol.