A Particle Filtering Framework for Integrity Risk of GNSS-Camera Sensor Fusion
Adyasha Mohanty, Shubh Gupta and Grace Xingxin Gao
Stanford University
BIOGRAPHIES
Adyasha Mohanty is a graduate student in the Department of Aeronautics and Astronautics at Stanford University. She graduated with a B.S. in Aerospace Engineering from Georgia Institute of Technology in 2019.

Shubh Gupta is a graduate student in the Department of Electrical Engineering at Stanford University. He received his B.Tech degree in Electrical Engineering with a minor in Computer Science from the Indian Institute of Technology Kanpur in 2018.

Grace Xingxin Gao is an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. Before joining Stanford University, she was faculty at the University of Illinois at Urbana-Champaign. She obtained her Ph.D. degree at Stanford University. Her research is on robust and secure positioning, navigation and timing with applications to manned and unmanned aerial vehicles, robotics, and power systems.
ABSTRACT
Adopting a joint approach towards state estimation and integrity monitoring results in unbiased integrity monitoring, unlike traditional approaches. So far, a joint approach was used in Particle RAIM [1] for GNSS measurements only. In our work, we extend Particle RAIM to a GNSS-camera fused system for joint state estimation and integrity monitoring. To account for vision faults, we derive a probability distribution over position from camera images using map-matching. We formulate a Kullback-Leibler divergence [2] metric to assess the consistency of GNSS and camera measurements and mitigate faults during sensor fusion. The derived integrity risk upper bounds the probability of Hazardously Misleading Information (HMI). Experimental validation on a real-world dataset shows that our algorithm produces less than 11 m position error and the integrity risk overbounds the probability of HMI with a 0.11 failure rate for an 8 m Alert Limit in an urban scenario.
INTRODUCTION

In urban environments, GNSS signals suffer from lack of continuous satellite signal availability, non-line-of-sight (NLOS) errors and multi-path effects. Thus, it is important to quantify the integrity, or measure of trust, in the correctness of the positioning solution provided by the navigation system. Traditional integrity monitoring approaches [3] provide point positioning estimates, i.e., the state estimation algorithm is assumed to be correct and then the integrity of the estimated position is assessed. However, addressing state estimation and integrity monitoring separately does not capture the uncertainty in the state estimation algorithm. As a result, the integrity monitoring becomes biased by the acquired state estimate, leading to subsequent faulty state estimation.

Recently, an approach towards joint state estimation and integrity monitoring for GNSS measurements was proposed in Particle RAIM [1]. Instead of producing point positioning estimates, Particle RAIM uses a particle filter to form a multi-modal probability distribution over position, represented as particles. Traditional RAIM [4] is used to assess the correctness of different ranging measurements, and the particle weights are updated to form the distribution over the position. From the resulting probability distribution, the integrity risk is derived using an approximate upper bound to the probability of HMI, or the reference risk. By incorporating the correctness of different measurement subsets directly into the state estimation, Particle RAIM is able to exclude multiple faults in GNSS ranging measurements. However, due to large errors from GNSS measurements, Particle RAIM requires employing conservative measures, such as large Alert Limits, to adequately bound the reference risk.

For urban applications, improved positioning accuracy from Particle RAIM is necessary to provide adequate integrity for smaller Alert Limits.
Since measurements from GNSS are not sufficient to provide the desired accuracy, it is helpful to augment GNSS with additional sensors that increase redundancy in measurements. Sensors such as cameras are effective complementary sensors to GNSS. In urban regions, cameras have access to rich environmental features [5] [6] [7] and provide superior sensing to GNSS, which suffers from multi-path and NLOS errors [3] [8] [9] [10].

Thus, with added vision, we need a framework that provides integrity for the fused GNSS-camera navigation system and accounts for two categories of faults. The first category includes data association errors across images, where repetitive features are found in multiple images, creating ambiguity during feature and image association. This ambiguity is further amplified by variations in lighting and environmental conditions. The second category comprises errors that arise during sensor fusion of GNSS and camera measurements. Ensuring that faults in either measurement do not dominate the sensor fusion process is paramount for maximizing the complementary characteristics of GNSS and camera.

Many works provide integrity for GNSS-camera fused systems utilizing a Kalman Filter (KF) [11] framework or an information filter [12]. Vision-Aided RAIM [13] introduced landmarks as pseudo-satellites and integrated them into a linear measurement model alongside GPS observations. In [14], the authors implemented a sequential integrity monitoring approach to isolate single satellite faults. The integrity monitor uses the innovation sequence output from a single Kalman filter to derive a recursive expression for the worst-case failure mode slopes and to compute the protection levels (PL) in real time. An Information Filter (IF) is used in [15] for data fusion, wherein faults are detected based on the Kullback-Leibler divergence (KL divergence) [2] between the predicted and the updated distributions.
After all detected faulty measurements are removed, the errors are modeled by a Student's t distribution to compute a PL. A Student's t distribution is also used in [16] alongside informational sensor fusion for fault detection and exclusion. The degree of the distribution is adapted in real time based on the computed residual from the information filter. A distributed information filter is proposed in [17] to detect faults in GPS measurements by checking consistency through the log-likelihood ratio of the information innovation of each satellite. These approaches model measurement fault distributions with a Gaussian distribution, although for camera measurements the true distribution might be non-Gaussian, multi-modal, and arbitrary in nature. Using a simplified linear measurement probability distribution renders these frameworks infeasible and unreliable for safety-critical vision-augmented GNSS applications.

Another line of work builds on Simultaneous Localization and Mapping (SLAM) based factor graph optimization techniques. Bhamidipati et al. [5] derived PL by modeling GPS satellites as global landmarks and introducing image pixels from a fish-eye camera as additional landmarks. The raw image is categorized into sky and non-sky pixels to further distinguish between LOS and NLOS satellites. The overall state is estimated using graph optimization along with an M-estimator. Although this framework is able to exclude multiple faults in GPS measurements, it is not extendable to measurements from forward- or rear-facing cameras that do not capture sky regions. Along similar lines, measurements from a stereo camera along with GNSS pseudoranges are jointly optimized in a graph optimization framework in [18]. GNSS satellites are treated as visual feature points and pose-graph SLAM is applied to achieve a positioning solution.
However, graph optimization approaches also share the same limitation as Kalman Filter based approaches: they produce point positioning estimates and do not account for the uncertainty in state estimation that biases integrity monitoring.

Overall, existing integrity monitoring algorithms for GNSS-camera fusion have the following limitations:

1. They address state estimation and integrity monitoring separately, similar to traditional RAIM approaches.
2. They accommodate camera measurements within a linear or linearizable framework, such as KF, EKF, or IF, and become infeasible when camera measurements are not linearizable without loss of generality.
3. There is no standard way in the literature to quantify the uncertainty in camera measurements directly from raw images.
4. They use outlier rejection techniques to perform fault detection and exclusion after obtaining the positioning solution. There is no framework that accounts for faults both independently in GNSS and camera as well as for faults that arise during sensor fusion.

In our work, we overcome the above limitations with the following contributions. This paper is based on our recent ION GNSS+ 2020 conference paper [19].

1. We jointly address state estimation and integrity monitoring for GNSS-camera fusion with a particle filtering framework. We retain the advantages of Particle RAIM while extending it to include camera measurements.
2. We derive a probability distribution over position directly from images leveraging image registration.
3. We develop a metric based on KL divergence [2] to fuse probability distributions obtained from GNSS and camera measurements. By minimizing the KL divergence of the distribution from each camera measurement with respect to the GNSS measurement distribution, we ensure that erroneous camera measurements do not affect the overall probability distribution.
Stated otherwise, the divergence metric augments the shared belief over the position from both sensor measurements by minimizing cross-contamination during sensor fusion.

4. We experimentally validate our framework on an urban environment dataset [20] with faults in GNSS and camera measurements.

The rest of the paper is organized as follows. In Section 2, we describe the overall particle filter framework for probabilistic sensor fusion. In Sections 3 and 4, we infer a distribution over position from GNSS and camera measurements, respectively. Section 5 elaborates on the probabilistic sensor fusion of GNSS and camera measurements along with the proposed KL divergence metric. In Section 6, we describe the integrity risk bounding. Sections 7 and 8 show the experimental setup and the results from experimental validation on the urban environment dataset, respectively. In Section 9, we conclude our work.

PARTICLE FILTER FRAMEWORK

The distribution over the position inferred from GNSS and camera measurements is multi-modal due to faults in a subset of measurements. Thus, to model such distributions, we choose a particle filtering approach that further allows us to keep track of multiple position hypotheses rather than a single position estimate. Although a particle filtering approach was used in Particle RAIM [1], the authors only considered GNSS ranging measurements. In our work, we extend the framework to include measurements from a camera sensor. Figure 1 represents our overall framework. We add the camera and probabilistic sensor fusion modules to the framework proposed in [1].
Figure 1: Particle filter framework with probabilistic sensor fusion of GNSS and camera measurements and integrity risk bounding. The highlighted modules represent our contributions. The GNSS and Risk Bounding modules are adopted from Particle RAIM [1].
Our framework consists of the following modules:

• Perturbation and propagation: Using noisy inertial odometry from the IMU, we generate a set of motion samples, each of which perturbs the previous particle distribution in the propagation step.

• GNSS module: This module from Particle RAIM [1] takes GNSS ranging measurements from multiple satellites, some of which may be faulty, and outputs a probability distribution over position using a fault-tolerant weighting scheme described in Section 3. The particles from the GNSS module are propagated to the camera module to ensure that the distributions from GNSS and camera share the same domain of candidate positions.

• Camera module and synchronization with motion data: The camera module takes a camera image and matches it to the images in a map database using image registration to generate similarity scores. The underlying state of the best-matched image is extracted and propagated forward to the current GNSS time stamp by interpolating with IMU odometry. This step ensures that the probability distributions from camera and GNSS measurements are generated at the same time stamps. Finally, we use a categorical distribution function to transform the similarity scores into a probability distribution over position hypotheses as described in Section 4.

• Probabilistic sensor fusion: This module outputs a joint likelihood over positions from GNSS and camera measurements after fusing them with the proposed KL divergence metric in Section 5.1. Particles are resampled from the current distribution with Sequential Importance Resampling [21].

• Risk bounding: We adopt the risk bounding formulation proposed in [1] to compute the integrity risk from the derived probability distribution over the position domain. Using generalization bounds from statistical learning theory [22], the derived risk bound is formally shown to overbound the reference risk in Section 6.

We elaborate on the various modules of our framework in the following sections.
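The interplay of these modules can be sketched as a toy particle filter loop. This is a minimal illustration only: the placeholder Gaussian scores stand in for the GNSS, camera, and fusion modules described above, and all numeric parameters are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_log_likelihood(particles, center, sigma):
    # Placeholder for the GNSS and camera modules: one Gaussian score per particle.
    return -0.5 * np.sum((particles - center) ** 2, axis=1) / sigma ** 2

def filter_step(particles, odometry, gnss_fix, camera_fix):
    # Perturbation and propagation with noisy odometry.
    particles = particles + odometry + rng.normal(scale=0.5, size=particles.shape)
    # GNSS and camera likelihoods evaluated over the shared particle set.
    log_w = (toy_log_likelihood(particles, gnss_fix, 2.0)
             + toy_log_likelihood(particles, camera_fix, 2.0))
    # Normalize in log space, then Sequential Importance Resampling.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = rng.normal(scale=5.0, size=(120, 3))      # 120 particles, 3D position
for _ in range(20):                                    # truth held at the origin
    particles = filter_step(particles, np.zeros(3), np.zeros(3), np.zeros(3))
print(np.linalg.norm(particles.mean(axis=0)))          # particles settle near truth
```

Working in log space before exponentiating mirrors the paper's use of log-likelihoods to avoid finite-precision errors.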
DISTRIBUTION FROM GNSS MEASUREMENTS

A likelihood model for the GNSS measurements is derived using the mixture weighting method proposed in Particle RAIM [1]. Instead of assuming correctness of all GNSS measurements, the likelihood is modeled as a mixture of Gaussians to account for faults in some measurements. Individual measurement likelihoods are modeled as Gaussians with the expected pseudoranges as means and variance based on Dilution of Precision (DOP). The Gaussian Mixture Model (GMM) [23] [24] is expressed as:

L^t(m^t) = \sum_{k=1}^{R} \gamma_k \mathcal{N}(m^t_k \mid \mu^{t,k}_X, \sigma^{t,k}_X); \quad \sum_{k=1}^{R} \gamma_k = 1,    (1)

where L^t(m^t) denotes the likelihood of measurement m at time t. \gamma_k denotes the measurement responsibility, or the weight of the individual measurement component, and R refers to the total number of GNSS ranging measurements. \mu and \sigma represent the mean and the standard deviation of each Gaussian component inferred from DOP. X refers to the collection of position hypotheses denoted by particles, and k is the index over the Gaussians in the mixture. The weights are inferred with a single step of the Expectation-Maximization (EM) scheme [25] as shown in Figure 2.

Figure 2: Two steps of the EM scheme used to derive the weight of each Gaussian likelihood in the GMM. In the expectation step, the local vote for each particle is computed based on the squared-normal voting on the normalized residual for a particle obtained with traditional RAIM. The overall confidence is inferred by normalizing the votes and pooling them using Bayesian maximum a posteriori (MAP) estimation.
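A minimal sketch of the mixture likelihood and the voting-based weights follows. The hand-rolled Gaussian pdf and a single squared-normal E-step stand in for the full RAIM-based scheme; the residual values and sigma are illustrative assumptions, with the 50 m residual playing the role of a fault.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def em_weights(residuals, sigma):
    # One E-step: squared-normal voting on the normalized residuals, then normalize.
    votes = np.exp(-0.5 * (np.asarray(residuals, float) / sigma) ** 2)
    return votes / votes.sum()

def gmm_likelihood(ranges, expected, sigma, gamma):
    # Mixture-of-Gaussians likelihood of Equation (1) at one position hypothesis.
    return float(np.sum(gamma * gaussian_pdf(ranges, expected, sigma)))

res = np.array([1.0, -2.0, 50.0, 0.5])     # residuals; the 50 m entry is a fault
gamma = em_weights(res, sigma=10.0)
print(gamma)                               # the faulty component is voted down
print(gmm_likelihood(100.0 + res, np.full(4, 100.0), 10.0, gamma) > 0.0)
```

Because the faulty measurement's vote decays as exp(-residual²), its mixture weight becomes negligible and it no longer dominates the likelihood.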
To avoid numerical errors due to finite precision, the log-likelihood of the likelihood model is implemented by extending the input space to include additional copies of the state-space variable, one for each GNSS measurement [26]. The new likelihood is written as:

P(m^t \mid X^t, \chi = k) = \gamma_k \mathcal{N}(m^t_k \mid \mu^{t,k}_x, \sigma^{t,k}_x); \quad \sum_{k=1}^{R} \gamma_k = 1,    (2)

where \chi is an index that denotes the GNSS measurement associated with the particle replica.

DISTRIBUTION FROM CAMERA MEASUREMENTS

To quantify the uncertainty from camera images, we use a map-matching algorithm that matches a camera image directly to an image present in a map database. Our method is implemented in OpenCV [27] and comprises the three steps shown in Figure 3.
Figure 3: Generating probability distribution over position from camera images.
Each block is elaborated below.

• Database Creation: We assume prior knowledge of the geographical region where we are navigating. Based on GPS coordinates, we select images from the known area using Google Street View imagery. These images, along with their associated coordinates, form the database. Features are extracted from these images and stored in a key point-descriptor format.

• Image Registration: After receiving a camera test image, we extract features and descriptors with the ORB [28] algorithm. Although we experimented with other feature extraction methods such as SIFT [29], SURF [30], and AKAZE [31], ORB was found most effective for extracting descriptors from highly blurred images. The descriptor vectors are clustered with a k-means algorithm [32] to form a vocabulary tree [33]. Each node in the tree corresponds to an inverted file, i.e., a file containing the ID numbers of images in which a particular node is found and the relevance of each feature to that image. The database is then scored hierarchically based on Term Frequency-Inverse Document Frequency (TF-IDF) scoring [33], which quantifies the relevance of the images in the database to the camera image. We refer to these scores as the similarity scores. The image with the highest score is chosen as the best match and the underlying state is extracted.

• Probability generation after synchronization: After extracting the state from the best-matched camera image in the database, we propagate the state to the same time stamp as the GNSS measurement. The raw vehicle odometry is first synchronized with GNSS measurements using the algorithm in [20]. Using the time difference between the previous and current GNSS measurements, we linearly interpolate the extracted state with IMU motion data as shown below.

x^t = x^{t-1} + v^{t-1} dt + 0.5\, a^{t-1} dt^2    (3)

where x^t refers to the 3D position at epoch t, dt refers to the time difference between successive camera measurements, and v and a are the interpolated IMU velocity and acceleration at epoch t.

Next, we compute the Euclidean distance between the interpolated state and the current particle distribution from GNSS measurements to obtain new similarity scores. This step ensures that the probability distributions computed from camera and GNSS measurements share the same domain of candidate positions. A SoftMax function takes the scores and outputs a probability distribution over position. Normalization of the scores enforces a unit integral for the distribution.

Q(n^t \mid X^t) = \frac{\exp(\omega^t_c)}{\sum_c \exp(\omega^t_c)}    (4)

where Q is the probability distribution associated with camera measurement n at time t over the position domain X, \omega^t_c represents the computed distance score, and c is the index over individual particles.

PROBABILISTIC SENSOR FUSION

After obtaining the probability distributions from GNSS and camera, we need to form a joint distribution over the position. However, we need to ensure that faults in camera measurements do not degrade the distribution from GNSS measurements, one that is coarse but correct, since the distribution accounts for faults in the ranging measurements through the RAIM voting scheme. Thus, we need a metric to identify and exclude faulty camera measurements leveraging knowledge of the distribution from GNSS. Additionally, the metric should assess the consistency of the probability distribution from each camera measurement with respect to the GNSS distribution and mitigate inconsistent distributions that result from vision faults. The KL divergence [34] represents one way to assess the consistency of two probability distributions. By minimizing the divergence between the distributions inferred from camera and GNSS, we ensure that both distributions are consistent.
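The camera-side probability generation described above, the interpolation of Equation (3) followed by the SoftMax of Equation (4), can be sketched as follows. Using the negative Euclidean distance as the score \omega is our assumption about the sign convention; the particle coordinates are illustrative.

```python
import numpy as np

def interpolate_state(x_prev, v_prev, a_prev, dt):
    # Equation (3): propagate the best-match state to the GNSS time stamp.
    return x_prev + v_prev * dt + 0.5 * a_prev * dt ** 2

def camera_distribution(particles, state):
    # Equation (4): SoftMax over distance-based scores; subtracting the max
    # score avoids overflow, and the normalization enforces a unit integral.
    scores = -np.linalg.norm(particles - state, axis=1)   # closer = higher score
    scores -= scores.max()
    q = np.exp(scores)
    return q / q.sum()

particles = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
state = interpolate_state(np.zeros(3), np.array([1.0, 0.0, 0.0]), np.zeros(3), 1.0)
q = camera_distribution(particles, state)
print(q)   # most of the mass falls on the particle nearest the interpolated state
```

Because the particles come from the GNSS module, the resulting distribution automatically shares the GNSS domain of candidate positions.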
We provide background on the KL divergence prior to explaining our metric. The KL divergence [34] between two discrete probability distributions, p and q, in the same domain is defined as:

D_{KL}(p \| q) = \sum_{z \in \zeta} p_z \log \frac{p_z}{q_z}    (5)

where \zeta represents the domain of both distributions and z is each element of the domain. In our work, we ensure that the distributions from GNSS and camera share the same position domain by propagating the particles from the GNSS distribution to the camera module prior to generating the distribution from camera measurements. Two important properties of the KL divergence are:

• The KL divergence between two distributions is always non-negative and not symmetrical [34]:

D_{KL}(p \| q) \neq D_{KL}(q \| p)    (6)

where D_{KL}(q \| p) is the reverse KL divergence between the distributions p and q.

• D_{KL}(p \| q) is convex in the pair (p, q) if both distributions represent probability mass functions (pmf) [34].

Leveraging the above properties, we formulate our metric below.

• Mixture of Experts (MoE): We form a mixture distribution to represent probability distributions from successive camera measurements, where a non-Gaussian probability distribution is derived from a single camera image. Each measurement is assigned a weight to represent its contribution to the mixture. Instead of setting arbitrary weights, we leverage the GNSS distribution to infer weights that directly correspond to whether a camera measurement is correct or faulty. Thus, highly faulty camera measurements are automatically assigned low weights in the MoE. The mixture distribution is given as:

Q^*(n^t \mid X^t) = \sum_{j=1}^{K} \alpha^*_j Q_j(n^t_j \mid X^t); \quad \sum_{j=1}^{K} \alpha^*_j = 1    (7)

where Q^*(n^t \mid X^t) represents the mixture distribution formed using K camera images between two successive GNSS time epochs. Q_j(n^t_j \mid X^t) is the likelihood of a single camera image n^t_j recorded at time t, with \alpha^*_j as the normalized weight.
X^t are the particles representing position hypotheses and j is the index over the camera images. The weights are normalized below to ensure that the MoE forms a valid probability distribution:

\alpha^*_j = \frac{\alpha_j}{\sum_{r=1}^{K} \alpha_r}    (8)

where \alpha^*_j is the normalized weight, \alpha_j is the weight prior to normalization, r is the index over the camera images between two successive GNSS time epochs, and K is the total number of camera measurements.

• Set up KL divergence: We set up a divergence minimization metric between the distributions from each camera measurement and all GNSS measurements.

KL_j(\alpha_j Q_j(n^t_j \mid X^t) \| P(m^t_k \mid X^t, \chi = k)) = \sum_{i=1}^{S} (\alpha_j Q_j(n^t_j \mid X^t)) \log \left[ \frac{\alpha_j Q_j(n^t_j \mid X^t)}{P(m^t_k \mid X^t, \chi = k)} \right]    (9)

where \| denotes the divergence between both probability distributions, S represents the total number of particles or position hypotheses across both distributions, and i is the index over the particles. P(m^t_k \mid X^t, \chi = k) is the probability distribution at epoch t from GNSS measurements as defined in Equation (2), \alpha_j is the unnormalized weight, and j is the index of the camera measurement.

• Minimize divergence: Using the convexity of the KL divergence (Property 2), we minimize each divergence metric with respect to the unknown weight assigned to the likelihood of each camera measurement. We abbreviate P(m^t_i \mid X^t, \chi = i) as P(x_i) and Q(n^t_j \mid X^t) as Q(x_i) for brevity and expand Equation (9). Since \alpha_j is independent of the summation index, we keep it outside the summation and simplify our expansion below.
KL_j(Q \| P) = \alpha_j \sum_{i=1}^{S} Q(x_i) \log \alpha_j + \alpha_j \sum_{i=1}^{S} Q(x_i) \log Q(x_i) - \alpha_j \sum_{i=1}^{S} Q(x_i) \log P(x_i)    (10)

Taking the first derivative with respect to \alpha_j, we obtain:

\frac{\partial}{\partial \alpha_j} KL_j(Q \| P) = \log \alpha_j \sum_{i=1}^{S} Q(x_i) + \sum_{i=1}^{S} Q(x_i) + \sum_{i=1}^{S} Q(x_i) \log Q(x_i) - \sum_{i=1}^{S} Q(x_i) \log P(x_i)    (11)

Equating the expression on the right to 0 and solving for \alpha_j gives us:

\alpha_j = e^k; \quad k = \frac{\sum_{i=1}^{S} Q(x_i) \log \frac{P(x_i)}{Q(x_i)}}{\sum_{i=1}^{S} Q(x_i)} - 1    (12)

We also perform a second-derivative test to ensure that the inferred \alpha_j minimizes the divergence measure. Since the exponential function with the natural base is always positive, \alpha_j is always positive as well. Thus, evaluating the second derivative gives a positive value:

\frac{1}{\alpha_j} \sum_{i=1}^{S} Q(x_i) > 0    (13)

• Joint probability distribution over position: After obtaining the weights, we normalize them using Equation (8). We obtain the joint distribution assuming that the mixture distribution from camera measurements and the GMM from GNSS measurements are mutually independent. The joint distribution is given as:

P^*(n^t, m^t \mid X^t) = P(m^t_k \mid X^t, \chi = k)\, Q^*(n^t \mid X^t)    (14)

where P(m^t_k \mid X^t, \chi = k) is the probability distribution from GNSS measurements in Equation (2). We take the log-likelihood of the joint distribution to avoid finite-precision errors.

INTEGRITY RISK BOUNDING

We upper bound the probability of HMI using the risk bounding framework introduced in [1]. For a single epoch, the probability of HMI for a given Alert Limit r is defined as:

R_{x^*}(\pi) = \mathbb{E}_{x \sim \pi}[P(\|x - x^*\| \geq r)]    (15)

where R_{x^*}(\pi) is the probability of HMI with reference position x^* and \pi the mean distribution in position space induced by all posterior distributions.
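The consistency check and the closed-form weight derived above can be sketched together. For a normalized camera distribution, Equation (12) reduces to exp(-D_KL(Q‖P) - 1), so a camera measurement inconsistent with the GNSS distribution is down-weighted exponentially. The toy three-particle distributions and the epsilon smoothing against zero-probability particles are our assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Discrete KL divergence of Equation (5) over the shared particle domain.
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def camera_weight(q_cam, p_gnss):
    # Unnormalized expert weight of Equation (12) for a normalized q_cam:
    # exp(-KL(q_cam || p_gnss) - 1).
    return float(np.exp(-kl_divergence(q_cam, p_gnss) - 1.0))

p_gnss = np.array([0.6, 0.3, 0.1])        # coarse-but-correct GNSS distribution
q_good = np.array([0.5, 0.4, 0.1])        # consistent camera measurement
q_bad  = np.array([0.01, 0.04, 0.95])     # faulty camera measurement
w = np.array([camera_weight(q_good, p_gnss), camera_weight(q_bad, p_gnss)])
w /= w.sum()                               # Equation (8) normalization
print(w)   # the faulty camera image receives a far smaller mixture weight
```

The faulty measurement is never hard-excluded; it simply contributes almost nothing to the Mixture of Experts of Equation (7).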
The distributions are created by generating samples around the measured odometry and then perturbing the initial particle distribution. From the PAC-Bayesian [35] formulation, and as shown in [1], the upper bound on the reference risk R(\pi^t) is:

R(\pi^t) \leq R_M(\pi^t) + D^{-}_{Ber}(R_M(\pi^t), \epsilon)    (16)

The first and second terms refer to the empirical and divergence risk, respectively. We explain the computation of each term below.

The empirical risk R_M(\pi^t) is computed from a finite set of perturbed samples of size M:

R_M(\pi^t) = \frac{1}{M} \sum_{i=1}^{M} \mathbb{E}_{x \sim \pi^t}[l(x, \pi^t_u)]    (17)

where l(x, \pi^t_u) is the classification loss with respect to a motion sample resulting in the posterior distribution being classified as hazardous, and \pi^t refers to the mean posterior distribution at time t.

The divergence risk term D^{-}_{Ber}(R_M(\pi^t), \epsilon) accounts for uncertainty due to perturbations that are not sampled. First, we compute the gap term \epsilon using the KL divergence [2] of the current distribution from the prior and a confidence requirement on the bound \delta:

\epsilon = \frac{1}{M} \left( KL(\pi^t \| \pi^{t-1}) + \log \frac{M+1}{\delta} \right)    (18)

where \delta is the confidence requirement on the bound. The means of the prior and current distributions are taken as \pi^{t-1} and \pi^t. The prior and current distributions are approximated as multivariate Gaussian distributions.

The Inverse Bernoulli Divergence [1] D^{-}_{Ber} is defined as:

D^{-}_{Ber}(q, \epsilon) = t \quad \text{s.t.} \quad D_{Ber}(q \| q + t) = \epsilon    (19)

where D_{Ber}(q \| q + t) is the KL divergence [2] between q and q + t, and q is given by the empirical risk term. Finally, the Inverse Bernoulli Divergence [1] is obtained approximately as:

D^{-}_{Ber}(q, \epsilon) \approx \sqrt{2 \epsilon q}    (20)

EXPERIMENTAL SETUP

We test our framework on a 2.3 km long urban driving dataset from Frankfurt [20]. We use GNSS pseudorange measurements, images from a forward-facing camera, ground truth from a NovAtel receiver, and odometry from the IMU.
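Returning to the bound above, the inverse Bernoulli divergence of Equation (19) can also be computed numerically. A sketch using bisection, which exploits the fact that D_Ber(q ‖ q + t) is increasing in t; the sample values of q and the gap term are illustrative assumptions.

```python
import numpy as np

def bernoulli_kl(q, p, eps=1e-12):
    # Binary KL divergence D_Ber(q || p) between Bernoulli(q) and Bernoulli(p).
    q = min(max(q, eps), 1.0 - eps)
    p = min(max(p, eps), 1.0 - eps)
    return q * np.log(q / p) + (1.0 - q) * np.log((1.0 - q) / (1.0 - p))

def inverse_bernoulli(q, gap, tol=1e-9):
    # Solve D_Ber(q || q + t) = gap for t (Equation (19)) by bisection.
    lo, hi = 0.0, 1.0 - q - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bernoulli_kl(q, q + mid) < gap:
            lo = mid
        else:
            hi = mid
    return lo

q = 0.05        # empirical risk R_M from Equation (17)
gap = 0.01      # gap term epsilon from Equation (18)
bound = q + inverse_bernoulli(q, gap)   # right-hand side of Equation (16)
print(bound)    # PAC-Bayesian upper bound on the reference risk
```

The closed-form expression of Equation (20) gives a cheaper approximation of the same quantity when a numeric solve is too expensive.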
The dataset contains NLOS errors in GNSS measurements and vision faults due to variations in illumination. In addition to the real-world dataset, we create emulated datasets by inducing faults in GNSS and vision measurements with various controlled parameters.

• Real-world dataset: We use GNSS ranging measurements with NLOS errors. For simplicity, we estimate the shared clock bias by subtracting the average residuals with respect to ground truth from all GNSS pseudoranges at one time epoch.

• Emulated dataset: First, we vary the number of satellites with NLOS errors by adding back the residuals to randomly selected satellites. This induces clock errors in some measurements, which are perceived as faults. Secondly, we remove the NLOS errors from all measurements but add Gaussian bias noise to pseudorange measurements from random satellites at random time instances. The number of faults is varied between 2-9 out of 12 available measurements at any given time step. We induce faults in camera measurements by adding blurring with a 21x21 Gaussian kernel and occlusions of 25-50% height and width to random images.

During the experimental simulation, a particle filter tracks the 3D position (x, y, z) of the car and uses faulty GNSS and camera measurements along with noisy odometry. Probability distributions are generated independently from GNSS and camera and fused with the KL divergence metric to form the joint distribution over positions. At each time epoch, the particle distribution with the highest total log-likelihood is chosen as the estimated distribution for that epoch. The integrity risk is computed from 10 posterior distributions of the initial particle distribution, and the reference risk is computed with ground truth. Our experimental parameters are listed in Table 1.
Table 1: Experimental Parameters for Validation with Real-world and Emulated Datasets
Parameter                                    Value
No. of GNSS measurements                     12
No. of faults in GNSS measurements           2-9
Measurement noise variance                   10 m
Alert Limit                                  8, 16 m
Added Gaussian bias to GNSS measurements     20-200 m
No. of particles                             120
Filter propagation variance                  3 m
No. of odometry perturbations                10
RESULTS

We use Particle RAIM as the baseline to evaluate our algorithm's performance for state estimation. The metric for state estimation is the root mean square error (RMSE) of the estimated position with respect to ground truth over the entire trajectory. The risk bounding performance is evaluated with metrics derived from a failure event, i.e., when the derived risk bound fails to upper bound the reference risk. The metrics are the following: the failure ratio (the fraction of cases where the derived risk bound fails to upper bound the reference risk), the failure error (the mean error during all failure events), and the bound gap (the average gap between the derived integrity risk and the reference risk).

For evaluating the integrity risk, we specify a performance requirement that the position should lie within the Alert Limit with at least 90% probability. A fault occurs if the positioning error exceeds the Alert Limit. The metrics for integrity risk are reported based on when the system has insufficient or sufficient integrity [36], which respectively refer to the states when a fault is declared or not. The false alarm rate equals the fraction of the number of times the system declares insufficient integrity in the absence of a fault. The missed identification rate is defined as the fraction of the number of times the system declares sufficient integrity even though a fault is present.
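The risk-bounding metrics above can be sketched as follows. Treating the per-epoch risks as arrays and averaging the bound gap over the non-failure epochs are our assumptions about the exact conventions; the toy values are illustrative.

```python
import numpy as np

def risk_bounding_metrics(risk_bound, reference_risk):
    # Failure ratio, mean failure error, and bound gap across epochs.
    rb = np.asarray(risk_bound, float)
    rr = np.asarray(reference_risk, float)
    failures = rb < rr                              # bound fails to overbound
    failure_ratio = failures.mean()
    failure_error = (rr - rb)[failures].mean() if failures.any() else 0.0
    bound_gap = (rb - rr)[~failures].mean()         # gap when the bound holds
    return failure_ratio, failure_error, bound_gap

# Toy per-epoch risks: the bound fails to overbound only at the second epoch.
fr, fe, gap = risk_bounding_metrics([0.3, 0.2, 0.5, 0.1], [0.1, 0.25, 0.2, 0.05])
print(fr, fe, gap)
```

A small failure ratio with a small bound gap is the desirable regime: the bound rarely fails, yet remains non-conservative.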
State Estimation

First, we test our algorithm with NLOS errors in GNSS ranging measurements and added camera faults. Quantitative results in Table 2 demonstrate that our algorithm produces 3D positioning estimates with an overall RMSE of less than 11 m. Additionally, our algorithm reports lower errors compared to Particle RAIM for all test cases. Our algorithm is able to compensate for the residual errors from Particle RAIM by including camera measurements in the framework. This leads to improved accuracy in the positioning solution.
Table 2: RMSE in 3D Position with NLOS errors and added vision faults
No. of faults out of 12 available GNSS measurements    Particle RAIM-Baseline (meter)    Our Algorithm (meter)
2                                                      18.1                              6.3
4                                                      19.1                              6.1
6                                                      16.9                              5.9
9                                                      26.6                              10.6

For qualitative comparison, we overlay the trajectories from our algorithm on ground truth and highlight regions with positioning error greater than 10 m in Figures 4 and 5. Trajectories from Particle RAIM show large deviations from ground truth in certain regions, either due to poor satellite signal availability or high NLOS errors in the faulty pseudorange measurements. However, similar deviations are absent from the trajectories from our algorithm, which uses both GNSS and camera measurements. Our KL divergence metric is able to mitigate the errors from vision and the errors from cross-contamination during sensor fusion, allowing us to produce lower positioning error.

(a) Particle RAIM (Baseline)  (b) Our Algorithm
Figure 4: State estimation under NLOS errors for 6 faulty GNSS pseudorange measurements and added vision faults. Regions with positioning error greater than 10 m are highlighted in red.

(a) Particle RAIM (Baseline) (b) Our Algorithm

Figure 5: State estimation under NLOS errors for 9 faulty GNSS pseudorange measurements and added vision faults. Regions with positioning error greater than 10 m are highlighted in red.
Second, we test our algorithm with the emulated datasets. Quantitatively, we plot the RMSE as a function of the added Gaussian bias value in Figure 6 and as a function of the number of faulty GNSS ranging measurements in Figure 7. For all validation cases, our algorithm produces an overall RMSE of less than 10 m. As with the real-world dataset, our algorithm reports lower RMSE values than Particle RAIM. With a fixed number of faults, the errors generally increase with increasing bias. At a fixed bias value, the errors decrease with an increasing number of faults up to 6 faulty GNSS measurements, since a large number of faults is easily excluded by Particle RAIM, producing an improved distribution over the position. The improved distribution from GNSS further enables the KL divergence metric to exclude faulty camera measurements and produce a tighter distribution over the position domain. However, with a higher number of faults, Particle RAIM does not have enough redundant correct GNSS measurements to exclude the faulty ones, resulting in higher positioning error. Nevertheless, with added vision, our algorithm produces better positioning estimates than Particle RAIM for all test cases.
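A fault-emulation step of this kind can be sketched as follows; the function name, the noise spread, and the seeded generator are our assumptions, used only to illustrate offsetting a chosen number of the 12 pseudoranges by a Gaussian bias:

```python
# Illustrative sketch of emulating faulty GNSS measurements: a fixed
# number of pseudoranges receive an additive Gaussian error whose mean
# is the injected bias value.
import random

def inject_faults(pseudoranges, n_faults, bias_m, sigma_m=1.0, rng=None):
    """Return a copy with n_faults ranges offset by N(bias_m, sigma_m^2) meters."""
    rng = rng or random.Random(0)  # seeded for repeatable experiments
    faulty_idx = set(rng.sample(range(len(pseudoranges)), n_faults))
    return [r + rng.gauss(bias_m, sigma_m) if i in faulty_idx else r
            for i, r in enumerate(pseudoranges)]
```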
Figure 6: RMSE from our algorithm and Particle RAIM (baseline) for varying numbers of faults in GNSS ranging measurements at a fixed added Gaussian bias value.

Figure 7: RMSE from our algorithm and Particle RAIM (baseline) for various added Gaussian bias values with a fixed number of faulty GNSS measurements.

Integrity Monitoring

We evaluate the integrity risk bounding performance for two Alert Limits, 8 m and 16 m. For an Alert Limit of 8 m, Table 3 shows that the derived integrity risk satisfies the performance requirement with very low false alarm and missed identification rates: the false alarm rates are 0 for all test cases except two, and the missed identification rates are always less than 0.11. Additionally, the integrity risk bound upper bounds the reference risk with a failure ratio of less than 0.11 and a bound gap of less than 0.4 for all cases. Figures 8 and 9 further support the observation that the derived risk bound over bounds the reference risk with a low failure rate for the same Alert Limit. The few instances when the derived risk bound fails to upper bound the reference risk occur due to large sudden jumps in the reference risk that go undetected given the fixed size of our motion samples. In general, however, the integrity risk produced by our algorithm satisfies the desired performance requirement and successfully overbounds the reference risk for an Alert Limit as small as 8 m. This choice of Alert Limit is enabled by the low positioning errors, which further allow non-conservative integrity risk bounds.
Table 3: Integrity Risk for Alert Limit of 8 m
Added Bias Value (meter)   No. of Faults   P_FA   P_MI   Failure Ratio   Failure Error (meter)   Bound Gap
100                        2               0      0.03   0.07            7.5                     0.26
100                        4               0      0.04   0.04            2.3                     0.25
100                        6               0      0.07   0.11            2.9                     0.25
100                        9               0.07   0.03   0.07            4.7                     0.36
200                        2               0      0.07   0.07            3.5                     0.20
200                        4               0.11   0      0.04            4.8                     0.40
200                        6               0      0      0               -                       0.38
200                        9               0      0.07   0.04            5.4                     0.36
Figure 8: Reference risk and integrity risk bound with 8 m Alert Limit for varying numbers of faults and added bias of 100 m in GNSS measurements. The derived risk bound over bounds the reference risk with less than 0.11 failure ratio for all test cases.

Figure 9: Reference risk and integrity risk bound with 8 m Alert Limit for varying numbers of faults and added bias of 200 m in GNSS measurements. The derived risk bound over bounds the reference risk with less than 0.07 failure ratio for all test cases.
For an Alert Limit of 16 m, Table 4 shows that the integrity risk satisfies the integrity performance requirement with 0 false alarm rates. Furthermore, the missed identification rates are always 0 except for the test case with 9 faults and 100 m added bias. Specifying a larger Alert Limit lowers the risk associated with the distribution over position, since almost all particles from the perturbed distributions lie within the Alert Limit. Thus, the integrity risk with a 16 m Alert Limit is much smaller than the risk obtained with an 8 m Alert Limit, as shown in Figures 8 and 9. Additionally, the derived risk bound produces an even lower failure ratio of less than 0.07 and a tighter bound gap of less than 0.1. Overall, the derived risk bound over bounds the reference risk for the various bias and fault scenarios in Figures 10 and 11.
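The effect of the Alert Limit on the reported risk can be illustrated with a minimal particle sketch: the risk is the weighted fraction of particles whose position error exceeds the Alert Limit, so enlarging the limit monotonically lowers it. This is a conceptual illustration, not the paper's exact bound derivation:

```python
# Minimal sketch: integrity risk from a particle approximation as the
# weighted probability mass lying outside the Alert Limit.
import math

def particle_risk(particles, weights, estimate, alert_limit):
    """particles: list of (x, y) positions; weights sum to 1.
    Returns the approximated P(position error > alert_limit)."""
    risk = 0.0
    for (x, y), w in zip(particles, weights):
        err = math.hypot(x - estimate[0], y - estimate[1])
        if err > alert_limit:
            risk += w
    return risk
```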
Table 4: Integrity Risk for Alert Limit of 16 m
Added Bias Value (meter)   No. of Faults   P_FA   P_MI   Failure Ratio   Failure Error (meter)   Bound Gap
100                        2               0      0      0               -                       0.10
100                        4               0      0      0               -                       0.08
100                        6               0      0      0.04            5.9                     0.05
100                        9               0      0.04   0.07            9.7                     0.08
200                        2               0      0      0.07            5.0                     0.09
200                        4               0      0      0.07            4.2                     0.07
200                        6               0      0      0               3.6                     0.06
200                        9               0      0      0.04            3.8                     0.01
Figure 10: Reference risk and integrity risk bound with 16 m Alert Limit for varying numbers of faults and added bias of 100 m in GNSS measurements. The derived risk bound over bounds the reference risk with less than 0.07 failure ratio for all test cases.

Figure 11: Reference risk and integrity risk bound with 16 m Alert Limit for varying numbers of faults and added bias of 200 m in GNSS measurements. The derived risk bound over bounds the reference risk with less than 0.07 failure ratio for all test cases.
CONCLUSION

In this paper, we presented a framework for joint state estimation and integrity monitoring for a GNSS-camera fused system using a particle filtering approach. To quantify the uncertainty in camera measurements, we derived a probability distribution directly from camera images, leveraging a data-driven approach along with image registration. Furthermore, we designed a metric based on KL divergence to probabilistically fuse measurements from GNSS and camera in a fault-tolerant manner. The metric accounts for vision faults and mitigates the errors that arise due to cross-contamination of measurements during sensor fusion. We experimentally validated our framework on real-world data under NLOS errors, added Gaussian bias noise in GNSS measurements, and added vision faults. Our algorithm reported lower positioning error than Particle RAIM, which uses only GNSS measurements. The integrity risk from our algorithm satisfied the integrity performance requirement for Alert Limits of 8 m and 16 m with low false alarm and missed identification rates. Additionally, the derived integrity risk successfully provided an upper bound on the reference risk with a low failure rate for both Alert Limits, making our algorithm suitable for practical applications in urban environments.
10 ACKNOWLEDGMENT
We express our gratitude to Akshay Shetty, Tara Mina, and other members of the Navigation of Autonomous Vehicles Lab for their feedback on early drafts of the paper.
REFERENCES

[1] S. Gupta and G. X. Gao, “Particle RAIM for integrity monitoring,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2019, 2019.
[2] H. Zhu, “On information and sufficiency,” 1997.
[3] N. Zhu, J. Marais, D. Bétaille, and M. Berbineau, “GNSS position integrity in urban environments: A review of literature,” IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 9, pp. 2762–2778, 2018.
[4] Y. C. Lee, “Analysis of range and position comparison methods as a means to provide GPS integrity in the user receiver,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, pp. 1–4, 1986.
[5] S. Bhamidipati and G. Gao, “SLAM-based integrity monitoring using GPS and fish-eye camera,” Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2019, pp. 4116–4129, 2019.
[6] Z. Wang, Y. Wu, and Q. Niu, “Multi-sensor fusion in automated driving: A survey,” IEEE Access, vol. 8, pp. 2847–2868, 2020.
[7] J. Rife, “Collaborative vision-integrated pseudorange error removal: Team-estimated differential GNSS corrections with no stationary reference receiver,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 1, pp. 15–24, 2012.
[8] C. He, J. Guo, X. Lu, and J. Lu, “Multipath performance analysis of GNSS navigation signals,” pp. 379–382, 2014.
[9] S. Miller, X. Zhang, and A. Spanias, Multipath Effects in GPS Receivers: A Primer, 2015.
[10] K. Ali, X. Chen, F. Dovis, D. De Castro, and A. J. Fernández, “GNSS signal multipath error characterization in urban environments using LiDAR data aiding,” pp. 1–5, 2012.
[11] R. E. Kalman, “A new approach to linear filtering and prediction problems,” Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.
[12] X. Wang, N. Cui, and J. Guo, “Information filtering and its application to relative navigation,” Aircraft Engineering and Aerospace Technology, vol. 81, pp. 439–444, 2009.
[13] L. Fu, J. Zhang, R. Li, X. Cao, and J. Wang, “Vision-aided RAIM: A new method for GPS integrity monitoring in approach and landing phase,” Sensors, vol. 15, pp. 22854–22873, 2015.
[14] C. Tanil, S. Khanafseh, M. Joerger, and B. Pervan, “Sequential integrity monitoring for Kalman filter innovations-based detectors,” Proceedings of the 31st International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2018, 2018.
[15] J. Al Hage and M. E. El Najjar, “Improved outdoor localization based on weighted Kullback-Leibler divergence for measurements diagnosis,” IEEE Intelligent Transportation Systems Magazine, 2018.
[16] J. Al Hage, P. Xu, and P. Bonnifait, “Bounding localization errors with Student's distributions for road vehicles,” International Technical Symposium on Navigation and Timing, 2018.
[17] N. A. Tmazirte, M. E. E. Najjar, C. Smaili, and D. Pomorski, “Multi-sensor data fusion based on information theory: Application to GNSS positioning and integrity monitoring,” pp. 743–749, 2012.
[18] Z. Gong, P. Liu, Q. Liu, R. Miao, and R. Ying, “Tightly coupled GNSS with stereo camera navigation using graph optimization,” Proceedings of the 31st International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2018, pp. 3070–3077, 2018.
[19] A. Mohanty, S. Gupta, and G. X. Gao, “A particle filtering framework for integrity risk of GNSS-camera sensor fusion,” Proceedings of the 33rd International Technical Meeting of the Satellite Division of The Institute of Navigation, ION GNSS+ 2020, 2020.
[20] P. Reisdorf, T. Pfeifer, J. Breßler, S. Bauer, P. Weissig, S. Lange, G. Wanielik, and P. Protzel, “The problem of comparable GNSS results – an approach for a uniform dataset with low-cost and reference data,” in The Fifth International Conference on Advances in Vehicular Systems, Technologies and Applications (M. Ullmann and K. El-Khatib, eds.), vol. 5, p. 8, Nov. 2016. ISSN: 2327-2058.
[21] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, “Particle filters for positioning, navigation, and tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 425–437, 2002.
[22] O. Bousquet, S. Boucheron, and G. Lugosi, “Introduction to statistical learning theory,” Advanced Lectures on Machine Learning, vol. 3176, pp. 169–207, 2004.
[23] M. Simandl and J. Dunik, “Design of derivative-free smoothers and predictors,” 2006.
[24] H. W. Sorenson and D. L. Alspach, “Recursive Bayesian estimation using Gaussian sums,” Automatica, 1971.
[25] J. P. Vila and P. Schniter, “Expectation-maximization Gaussian-mixture approximate message passing,” IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4658–4672, 2013.
[26] C. M. Bishop, Pattern Recognition and Machine Learning. Information Science and Statistics, New York, NY: Springer, 2006.
[27] G. Bradski, “The OpenCV library,” Dr. Dobb's Journal of Software Tools, 2000.
[28] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” International Conference on Computer Vision, pp. 2564–2571, 2011.
[29] D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, pp. 91–110, 2004.
[30] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” European Conference on Computer Vision, vol. 3951, pp. 404–417, 2006.
[31] P. F. Alcantarilla, J. Nuevo, and A. Bartoli, “Fast explicit diffusion for accelerated features in nonlinear scale spaces,” British Machine Vision Conference, 2013.
[32] S. P. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
[33] D. Nister and H. Stewenius, “Scalable recognition with a vocabulary tree,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 2161–2168, 2006.
[34] T. van Erven and P. Harremoës, “Rényi divergence and Kullback-Leibler divergence,” IEEE Transactions on Information Theory, vol. 60, pp. 3797–3820, 2014.
[35] L. G. Valiant, “A theory of the learnable,” Communications of the ACM, vol. 27, pp. 1134–1142, Nov. 1984.
[36] H. Pesonen, “A framework for Bayesian receiver autonomous integrity monitoring in urban navigation.”