Automotive-Radar-Based 50-cm Urban Positioning
Lakshay Narula, Peter A. Iannucci, Todd E. Humphreys
Radionavigation Laboratory, The University of Texas at Austin
Austin, TX, USA
Abstract—Deployment of automated ground vehicles (AGVs) beyond the confines of sunny and dry climes will require sub-lane-level positioning techniques based on radio waves rather than near-visible-light radiation. Like human sight, lidar and cameras perform poorly in low-visibility conditions. This paper develops and demonstrates a novel technique for robust 50-cm-accurate urban ground positioning based on commercially-available low-cost automotive radars. The technique is computationally efficient yet obtains a globally-optimal translation and heading solution, avoiding local minima caused by repeating patterns in the urban radar environment. Performance is evaluated on an extensive and realistic urban data set. Comparison against ground truth shows that, when coupled with stable short-term odometry, the technique maintains 95th-percentile errors below 50 cm in horizontal position, with sub-degree errors in heading.

Index Terms—Automotive radar, localization, all-weather positioning, automated vehicles
I. INTRODUCTION
Development of automated ground vehicles (AGVs) has spurred research in lane-keeping assist systems, automated intersection management [1], tight-formation platooning, and cooperative sensing [2], [3], all of which demand accurate (e.g., 50-cm at 95%) ground vehicle positioning in an urban environment. But the majority of positioning techniques developed thus far depend on lidar or cameras, which perform poorly in low-visibility conditions such as snowy whiteout, dense fog, or heavy rain. Adoption of AGVs in many parts of the world will require all-weather localization techniques.

Radio-wave-based sensing techniques such as radar and GNSS (global navigation satellite system) remain operable even in extreme weather conditions [4] because their longer-wavelength electromagnetic radiation penetrates snow, fog, and rain. Carrier-phase-differential GNSS (CDGNSS) has been successfully applied for the past two decades as an all-weather decimeter-accurate localization technique in open-sky conditions. Coupling a CDGNSS receiver with a tactical-grade inertial sensor, as in [5]–[8], delivers robust high-accuracy positioning even during the extended signal outages common in the urban environment, but such systems are far too expensive for widespread deployment on AGVs. Recent work has shown that 20-cm-accurate (95%) CDGNSS positioning is possible at low cost even in dense urban areas, but solution availability remains below 90%, with occasional long gaps between high-accuracy solutions [9]. Moreover, the global trend of increasing radio interference in the GNSS bands, whether accidental or deliberate [10], underscores the need for GNSS-independent localization: GNSS jamming cannot be allowed to paralyze an area's automated vehicle networks.

Clearly, there is a need for AGV localization that is low cost, accurate at the ≈50-cm level, robust to low-visibility conditions, and continuously available.
This paper is the first to establish that automotive-radar-based localization can meet these criteria.

Mass-market commercialization has brought the cost of automotive radar down enough that virtually all current production vehicles are equipped with at least one radar unit, which serves as the primary sensor for adaptive cruise control and automatic emergency braking. But use of automotive radar for localization faces the significant challenges of data sparsity and noise: an automotive radar scan has vastly lower resolution than a camera image or a dense lidar scan, and is subject to high rates of false detection (clutter) and missed detection. As such, it is nearly impossible to deduce semantic information or to extract distinctive environmental features from an individual radar scan. This is clear from Fig. 1c, which shows a sparse smattering of reflections from a single composite scan using three radar units. The key to localization is to aggregate sequential scans into a batch, as in Fig. 1d, where environmental structure is clearly evident. Even so, the data remain so sparse that localization based on traditional machine vision feature extraction and matching is not promising.

This paper proposes a two-step process for radar-based localization. The first is the mapping step: creation of a geo-referenced two-dimensional aggregated map of all radar targets across an area of interest. Fig. 1b shows such a map, hereafter referred to as a radar map. The full radar map used throughout this paper, of which Fig. 1b is a part, was constructed with the benefit of a highly stable inertial platform so that a trustworthy ground truth map would be available against which maps generated by other techniques could be compared. But an expensive inertial system or dedicated mobile mapping vehicle is not required to create a radar map. Instead, it can be crowd-sourced from the very user vehicles that will ultimately exploit the map for localization.
During periods of favorable lighting conditions and good visibility, user vehicles can exploit a combination of low-cost CDGNSS, as in [9], and GNSS-aided visual simultaneous localization and mapping, as in [11], to achieve the continuous decimeter-and-sub-degree-accurate geo-referenced position and orientation (pose) required to lay down an accurate radar map. In other words, the radar map can be created when visibility is good and then exploited at any later time, such as during times of poor visibility.

Fig. 1. Panel (a) shows a satellite view of the environment being mapped with automotive radar. Panel (b) shows the generated radar map point cloud with vehicle pose obtained from a reference localization system. Note the repeating structure along the road side due to parked vehicles. An individual radar scan obtained during localization is shown in panel (c), along with the red triangle denoting vehicle location and heading. The scan is sparse and contains significant clutter, making it challenging to register to the prior map. Panel (d) shows a batch of radar scans during localization, with the red dots denoting the vehicle trajectory over the past five seconds. The batch captures the underlying structure which can be registered to the prior map.

Despite aggregation over multiple vehicle passes, the sparse and cluttered nature of automotive radar data is evident from the radar map shown in Fig. 1b: the generated point cloud is much less dense and has a substantially higher fraction of spurious returns than a typical lidar-derived point cloud, making automotive-radar-based localization a significantly more challenging problem.

The second step of this paper's technique is the localization step.
Using some type of all-weather odometric technique such as inertial sensing, radar odometry, or wheel rotation and steering encoders—or a combination of these—the estimated changes in vehicle pose over a short interval (e.g., 5 s) are used to spatially organize the multiple radar scans over the interval and generate what is hereafter referred to as a batch of scans, or batch for short. Fig. 1d shows a five-second batch terminating at the same location as the individual scan in Fig. 1c. In contrast to the individual scan, some environmental structure emerges in the batch of scans, making robust registration to the map feasible. Even so, the localization problem remains challenging due to the dynamic radar environment: note the absence of parked cars on the left side of the street during localization.
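This spatial organization can be sketched concretely: given each scan's odometric pose in a common batch frame, the scan points are rotated and translated before aggregation. The two-scan example below is a hypothetical illustration (the poses and points are invented, not drawn from the paper's dataset).

```python
import numpy as np

# Sketch of batch assembly: odometry supplies each scan's pose relative to
# a common batch frame; scan points (2D, in the sensor/body frame) are
# rotated and translated into that frame before aggregation.
def to_batch_frame(points_body, pose):
    """points_body: (N, 2) array; pose: (x, y, heading) from odometry."""
    x, y, phi = pose
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s], [s, c]])
    return points_body @ R.T + np.array([x, y])

# Two scans of the same wall, taken while the vehicle advances 1 m;
# all values are illustrative.
scan0 = np.array([[5.0, 2.0], [5.0, 3.0]])   # wall seen from pose 0
scan1 = np.array([[4.0, 2.0], [4.0, 3.0]])   # same wall, 1 m closer
pose0 = (0.0, 0.0, 0.0)
pose1 = (1.0, 0.0, 0.0)

batch = np.vstack([to_batch_frame(scan0, pose0),
                   to_batch_frame(scan1, pose1)])
```

After the transformation, the two views of the wall coincide in the batch frame, which is what lets structure emerge from aggregated scans.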
Contributions.
This paper develops a robust, computationally efficient, correlation-maximization-based globally-optimal radar scan registration algorithm that estimates a two-dimensional translational and a one-dimensional rotational offset between a prior radar map and a batch of current scans. Significantly, the technique can be applied to the highly sparse and cluttered data produced by commercially-available low-cost automotive radars. Maximization of correlation is shown to be equivalent to minimization of the $L_2$ distance between the prior map and the batch probability hypothesis densities. The proposed method relies on the generation of a batch of radar scans using odometric sensors over short time intervals. An important contribution of this work is to analyze the robustness of the proposed method to short batch lengths and odometric imperfections.

This paper also presents the first evaluation of low-cost automotive-grade radar-based urban positioning on the large-scale dataset described in [12]. Data from automotive sensors are collected over two driving sessions through the urban center of Austin, TX on two separate days specifically chosen to provide variety in traffic and parking patterns. Comparison with a post-processed ground truth trajectory shows that the proposed radar-based localization algorithm has 95th-percentile errors of 44 cm in horizontal position, with sub-degree errors in heading, when using drift-free batches.

Organization of the rest of this paper.
Sec. II describes the theoretical underpinnings of the proposed localization technique and explains its modeling assumptions. Sec. III compares and contrasts the proposed approach with the prior work in related fields. Implementation details concerning computational efficiency and low-cost automotive radar sensor modeling are presented in Sec. IV. Experimental results on field data are detailed and evaluated in Sec. V, and Sec. VI gives concluding remarks.

II. LOCALIZATION TECHNIQUE
This section describes the theoretical formulation of the localization technique adopted in this paper. It first details the statistical motivation behind the method, and then develops an efficient approximation to the exact solution.
A. Localization using Probability Hypothesis Density
For the purpose of radar-based localization, an AGV's environment can be described as a collection of arbitrarily shaped radar reflectors in a specific spatial arrangement. Assuming sufficient temporal permanence of this environment, radar-equipped AGVs make sample measurements of the underlying structure over time.
1) The Probability Hypothesis Density Function:
A probabilistic description of the radar environment is required to set up radar-based localization as an optimization problem. This paper chooses the probability hypothesis density (PHD) function [13] representation of the radar environment. The PHD at a given location gives the density of the expected number of radar reflectors at that location. For a static radar environment, the PHD $D(\mathbf{x})$ at a location $\mathbf{x} \in \mathcal{X}$ can be written as

$$D(\mathbf{x}) = I \cdot p(\mathbf{x})$$

where $\mathcal{X}$ is the set of all locations in the environment, $p(\mathbf{x})$ is a probability density function such that $\int p(\mathbf{x})\,\mathrm{d}\mathbf{x} = 1$, and $I$, the intensity, is the total number of radar reflectors in the environment. For a time-varying radar environment, both $I$ and $p(\mathbf{x})$ are functions of time. For $\mathcal{A} \subset \mathcal{X}$, the expected number of radar reflectors in $\mathcal{A}$ is given as

$$I_{\mathcal{A}} = \int_{\mathcal{A}} D(\mathbf{x})\,\mathrm{d}\mathbf{x}$$
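As a numerical illustration of these definitions, the sketch below builds a toy one-dimensional PHD from an assumed intensity and a Gaussian-mixture spatial density (all values invented for illustration), then integrates it over a region to obtain the expected reflector count there.

```python
import numpy as np

# Toy 1D PHD: D(x) = I * p(x), with I = 3 expected reflectors and p(x) a
# two-component Gaussian mixture. All parameters are illustrative.
xs = np.linspace(-10.0, 10.0, 2001)
dx = xs[1] - xs[0]

def gauss(x, mu, sig):
    # Normalized Gaussian density.
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

p = 0.5 * gauss(xs, -4.0, 0.5) + 0.5 * gauss(xs, 3.0, 0.8)

I = 3.0          # intensity: total expected number of reflectors
D = I * p        # the PHD integrates to I over the whole environment

# Expected number of reflectors in A = [2, 4]: integral of D over A.
mask = (xs >= 2.0) & (xs <= 4.0)
I_A = np.sum(D[mask]) * dx
```

Here `I_A` is roughly 1.18: the second mixture component (centered at 3) contributes about 79% of its half share of the intensity to the interval [2, 4].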
2) Estimating Vehicle State from PHDs:
Let $D_m(\mathbf{x})$ denote the "map" PHD function representing the distribution of radar reflectors in an environment, estimated as a result of mapping with known vehicle poses. During localization, the vehicle makes a radar scan, or a series of consecutive radar scans. A natural solution to the vehicle localization problem may be stated as the vehicle pose which maximizes the likelihood of the observed batch of scans, given that the scan was drawn from $D_m(\mathbf{x})$ [14]. This maximum likelihood estimate (MLE) has many desirable properties such as asymptotic efficiency. However, the MLE solution is known to be sensitive to outliers that may occur if the batch of scans was sampled from a slightly different PHD, e.g., due to variations in the radar environment between mapping and localization [15].

A more robust solution to the PHD-based localization problem may be stated as follows. Let $\mathbf{\Theta}$ denote the vector of parameters of the rigid or non-rigid transformation $\mathcal{T}$ between the vehicle's prior belief of its pose and its true pose. For example, in the case of a two-dimensional rigid transformation, $\mathbf{\Theta} = [\Delta x, \Delta y, \Delta\phi]^\top$, where $\Delta x$ and $\Delta y$ denote a two-dimensional position offset and $\Delta\phi$ denotes a heading offset. Also, let $D_b(\mathbf{x}')$ denote a local "batch" PHD function estimated from a batch of scans during localization, defined over $\mathbf{x}' \in \mathcal{A} \subset \mathcal{X}$. This PHD is represented in the coordinate system consistent with the vehicle's prior belief, such that $\mathbf{x}' = \mathcal{T}_{\mathbf{\Theta}}(\mathbf{x})$. Estimating the vehicle pose during localization is defined as estimating $\mathbf{\Theta}$ such that some distance metric between the PHDs $D_m(\mathbf{x})$ and $D_b(\mathbf{x}')$ is minimized.

This paper chooses the $L_2$ distance between $D_m(\mathbf{x})$ and $D_b(\mathbf{x}')$ as the distance metric to be minimized. As compared to the MLE, which minimizes the Kullback-Leibler divergence, $L_2$ minimization trades off asymptotic efficiency for robustness to measurement model inaccuracy [15].
The $L_2$ distance $d_{L_2}(\mathbf{\Theta})$ to be minimized is given as

$$d_{L_2}(\mathbf{\Theta}) = \int_{\mathcal{A}} \left( D_m(\mathbf{x}) - D_b(\mathcal{T}_{\mathbf{\Theta}}(\mathbf{x})) \right)^2 \mathrm{d}\mathbf{x}$$

For rigid two-dimensional transformations, it can be shown as follows that minimizing the $L_2$ distance between the PHDs is equivalent to maximization of the cross-correlation between the PHDs:

$$\hat{\mathbf{\Theta}} = \underset{\mathbf{\Theta}'}{\operatorname{argmin}} \int_{\mathcal{A}} \left( D_m(\mathbf{x}) - D_b(\mathcal{T}_{\mathbf{\Theta}'}(\mathbf{x})) \right)^2 \mathrm{d}\mathbf{x}$$

$$= \underset{\mathbf{\Theta}'}{\operatorname{argmin}} \left[ \int_{\mathcal{A}} D_m^2(\mathbf{x})\,\mathrm{d}\mathbf{x} + \int_{\mathcal{A}} D_b^2(\mathcal{T}_{\mathbf{\Theta}'}(\mathbf{x}))\,\mathrm{d}\mathbf{x} - 2\int_{\mathcal{A}} D_m(\mathbf{x})\, D_b(\mathcal{T}_{\mathbf{\Theta}'}(\mathbf{x}))\,\mathrm{d}\mathbf{x} \right]$$

Note that the first term above is fixed during optimization, while the second term is invariant under rigid transformation. As a result, the above optimization is equivalent to maximizing the cross-correlation:

$$\hat{\mathbf{\Theta}} = \underset{\mathbf{\Theta}'}{\operatorname{argmax}} \int_{\mathcal{A}} D_m(\mathbf{x})\, D_b(\mathcal{T}_{\mathbf{\Theta}'}(\mathbf{x}))\,\mathrm{d}\mathbf{x} \quad (1)$$

For differentiable $D_m$ and $D_b$, the above optimization can be solved with gradient-based methods. However, as detailed in Sec. III, the cross-correlation maximization problem in the urban AGV environment may have locally optimal solutions in the vicinity of the global optimum due to the repetitive structure of radar reflectors. In applications with high integrity requirements, a search for the globally optimal solution is necessary. This paper notes that if the PHDs in (1) are discretized in $\mathbf{x}$, then the cross-correlation values can be evaluated exhaustively with computationally efficient techniques. Let $\mathbf{x}_{pq}$ denote the location at the $(p, q)$ translational offset in discretized $\mathcal{A}$. Then

$$\hat{\mathbf{\Theta}} = \underset{\mathbf{\Theta}'}{\operatorname{argmax}} \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} D_m(\mathbf{x}_{pq})\, D_b\!\left( \lceil \mathcal{T}_{\mathbf{\Theta}'}(\mathbf{x}_{pq}) \rfloor \right) \quad (2)$$

where $\lceil \cdot \rfloor$ denotes the nearest grid point in the discretized space.

The technique developed above relies on the PHDs $D_m$ and $D_b$. The next subsections detail the recipe for estimating these PHDs from the radar observations.

B. Estimating the map PHD from measurements
This section addresses the procedure to estimate the map PHD $D_m(\mathbf{x})$ from radar measurements. This paper works with an occupancy grid map (OGM) approximation to the continuous PHD function. In [16], it has been shown that the PHD representation is a limiting case of the OGM as the grid cell size becomes vanishingly small. Intuitively, let $c_{pq}$ denote the grid cell region with center $\mathbf{x}_{pq}$, and let $\delta c_{pq}$ denote the area of this grid cell, which is small enough such that no more than one reflector may be found in any cell. Let $p_{pq}(O)$ denote the occupancy probability of $c_{pq}$, and let $\mathcal{A}$ be defined as the region formed by the union of all $c_{pq}$ whose centers $\mathbf{x}_{pq}$ fall within $\mathcal{A}$. Then, the expected number of radar reflectors $\mathrm{E}[|\mathcal{A}|]$ in $\mathcal{A}$ is given by

$$\mathrm{E}[|\mathcal{A}|] = \sum_{c_{pq} \in \mathcal{A}} p_{pq}(O) = \sum_{c_{pq} \in \mathcal{A}} \frac{p_{pq}(O)}{\delta c_{pq}}\, \delta c_{pq} \triangleq \sum_{c_{pq} \in \mathcal{A}} \bar{D}(\mathbf{x}_{pq})\, \delta c_{pq} \to \int_{\mathcal{A}} \bar{D}(\mathbf{x})\,\mathrm{d}\mathbf{x} \quad \text{as } \delta c_{pq} \to 0$$

where $\bar{D}(\mathbf{x}_{pq}) \triangleq p_{pq}(O)/\delta c_{pq}$ can be considered to be an approximation of the PHD $D(\mathbf{x})$ for $\mathbf{x} \in c_{pq}$, since its integral over $\mathcal{A}$ equals the expected number of reflectors in $\mathcal{A}$.

The advantage of working with an OGM approximation of the PHD is two-fold: first, since the OGM does not attempt to model individual objects, it is straightforward to represent arbitrarily-shaped objects, and second, in contrast to the "point target" measurement model assumption in standard PHD filtering, the OGM can straightforwardly model occlusions due to extended objects.

At this point, the task of estimating $D_m(\mathbf{x})$ has been reduced to estimating the occupancy probability of each grid cell in discretized $\mathcal{A}$. Each grid cell $c_{pq}$ takes on one of two states: occupied ($O$) or free ($F$). Based on the radar measurement $\mathbf{z}_k$ at each time $k$, the Bernoulli probability distribution of such binary state cells may be recursively updated with the binary Bayes filter.
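A minimal single-cell sketch of this recursive log-odds update is given below; the inverse-sensor-model values fed to the filter are illustrative placeholders, not the tuned automotive-radar model discussed later in Sec. IV-B.

```python
import numpy as np

# Binary Bayes filter in log-odds form for a single grid cell.
def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(l):
    return 1.0 / (1.0 + np.exp(-l))

p_prior = 0.5            # prior occupancy belief p(O)
l0 = logit(p_prior)      # prior log odds

l = l0                   # log odds after zero measurements
for p_inv in [0.7, 0.7, 0.3, 0.7]:   # inverse sensor model p(O | z_k) per scan
    # Recursive update: add the measurement's log odds, subtract the prior's.
    l = logit(p_inv) - l0 + l

p_occ = inv_logit(l)     # posterior occupancy probability after four scans
```

With a 0.5 prior, three "occupied-leaning" scans and one "free-leaning" scan leave a posterior of 49/58 ≈ 0.84, i.e., the cell is believed occupied but not with certainty.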
In particular, let $\mathbf{z}^k$ denote all radar measurements made up to time $k$, and let

$$l_{pq}^k(O) \triangleq \log \frac{p_{pq}(O \mid \mathbf{z}^k)}{1 - p_{pq}(O \mid \mathbf{z}^k)} \quad (3)$$

denote the log odds ratio of $c_{pq}$ being in state $O$. Also define $l_{pq}^0(O)$ as

$$l_{pq}^0(O) \triangleq \log \frac{p_{pq}(O)}{1 - p_{pq}(O)}$$

with $p_{pq}(O)$ being the prior belief on the occupancy state of $c_{pq}$ before any measurements are made. With these definitions, the binary Bayes filter update is given by [17]

$$l_{pq}^k(O) = \log \frac{p_{pq}(O \mid \mathbf{z}_k)}{1 - p_{pq}(O \mid \mathbf{z}_k)} - l_{pq}^0(O) + l_{pq}^{k-1}(O) \quad (4)$$

where $p_{pq}(O \mid \mathbf{z}_k)$ is known as the inverse sensor model: it describes the probability of $c_{pq}$ being in state $O$, given only the latest radar scan $\mathbf{z}_k$.

The required occupancy probability $p_{pq}(O \mid \mathbf{z}^k)$ is easy to compute from the log odds ratio in (3). Observe that the inverse sensor model $p_{pq}(O \mid \mathbf{z}_k)$, in addition to the prior occupancy belief $p_{pq}(O)$, completely describes the procedure for estimating the OGM from radar measurements, and hence approximating the PHD. Adapting $p_{pq}(O \mid \mathbf{z}_k)$ to the characteristics of the automotive radar sensors, however, is not straightforward, and will be the topic of Sec. IV-B.

C. Estimating the batch PHD from measurements
The procedure for generating an approximation to $D_b(\mathbf{x}')$ from a batch of radar measurements is identical to the procedure for generating $D_m(\mathbf{x})$ from mapping vehicle data, except that precise, absolute location and orientation data are not available during localization. Instead, inertial odometry is used to estimate the relative locations and orientations of each radar scan in the batch, and the scans are transformed into a common coordinate frame before updating the occupancy state of grid cells.

Once the map and batch PHDs have been approximated from radar measurements, the correlation-maximization technique developed in Sec. II-A can be applied to find a solution to the localization problem.

III. RELATED WORK
This section reviews various techniques which may be applicable to the radar-based mapping and localization problem. These include work on point cloud alignment and image registration techniques, occupancy grid-based mapping and localization, and random-finite-set-based mapping and localization.
Related work in point cloud alignment.
A radar-based map can have many different representations. One obvious representation is to store all the radar measurements as a point cloud. Fig. 1b is an example of this representation. Localization within this map can be performed with point cloud registration techniques like the iterative closest point (ICP) algorithm. ICP is known to converge to local minima, which may occur due to outlying points that do not have correspondences in the two point clouds being aligned. A number of variations and generalizations of ICP robust to such outliers have been proposed in the literature [14], [15], [18]–[21].

However, automotive radar data in urban areas exhibit another source of incorrect-but-plausible registration solutions which is not addressed in the above literature: repetitive structure, e.g., due to a series of parked cars, may result in multiple locally-optimal solutions in the vicinity of the globally-optimal solution. Gradient-based techniques which iteratively estimate correspondences based on the distance between pairs of points are susceptible to converging to such locally-optimal solutions. To demonstrate this phenomenon, the coherent point drift (CPD) algorithm [14], a robust generalization of ICP, is applied to align the map and batch point cloud data collected as part of this work. Fig. 2 shows an example of incorrect convergence for CPD. Observe that the map point cloud in Fig. 2, shown in blue, has four dominant parallel features: a row of parked cars flanked by a building front on each side of the street. In this scenario, when aligning a batch of radar scans shown in red, the CPD algorithm converges to a locally-optimal solution where two of the four parallel features line up correctly. This solution is offset to the south of the globally-optimal solution.

Fig. 2. The radar map point cloud is shown in blue and a batch of radar scans is shown in red. Note that the map point cloud has four dominant parallel features: a row of parked cars flanked by a building front on each side of the street. The two point clouds are aligned with the CPD algorithm until convergence. Note the offset of the two point clouds due to convergence to a local cost minimum where two of the four parallel features line up correctly. Black arrows show a few of the matching point features between the map and the batch.

Since a highly-accurate initial estimate of the vehicle position may not always be available, accurate and robust radar-based urban positioning demands an exhaustive search over the initial search region. One way to achieve globally-optimal point cloud registration is to perform global point correspondence based on distinctive feature descriptors instead of choosing the closest point [22]. However, extraction and matching of such features from cluttered automotive radar data is likely to be unreliable. In view of the above limitations, a correlation-maximization-based globally-optimal solution has been developed in this paper.

A few of the above point cloud alignment algorithms have been applied to automotive radar data in the literature [19], [23]. The technique in [19] is only evaluated on a small dataset, while [23] performs poorly on larger datasets. The globally-optimal technique in [22] performs admirably on large and challenging datasets, but uses a more sophisticated radar unit than is expected to be available on an AGV. Other point cloud alignment techniques listed above have not been tested with radar data.

Related work in image registration and occupancy grid techniques.
Occupancy grid mapping and localization techniques have traditionally been applied in lidar-based systems, and recent work in [24], [25] has explored similar techniques with automotive radar data. In contrast to the batch-based localization described in this paper, both [24] and [25] perform particle-filter-based localization with individual scans, as is typical for lidar-based systems. These methods were only evaluated on small-scale datasets collected in a parking lot, and even so, the reported localization accuracy was not sufficient for lane-level positioning.

Occupancy grid maps are similar to camera-based top-down images, and thus may be aligned with image registration techniques, which may be visual-descriptor-based [26] or correlation-based [27]. Reliable extraction and matching of visual features, e.g., SIFT, is significantly more challenging with automotive radar data. Correlation-based registration is more robust, and forms the basis of the method developed in this paper. A similar idea has previously been proposed in [27]. In comparison to this paper, [27] provides no probabilistic justification for the approach, assumes perfect odometry information during generation of batches, does not propose a computationally-efficient solution, and only estimates a two-dimensional translational offset with heading assumed to be perfectly known. Each of these extensions in the current paper is nontrivial.
Related work in random finite set mapping and localization.
As mentioned in Sec. II, the occupancy grid is an approximation to the PHD function, a concept first introduced in the random finite set (RFS) based target tracking literature. Unsurprisingly, PHD- and RFS-based mapping and localization have been previously studied in [28]–[30]. In contrast to the grid-based approximate method developed in this paper, the techniques in [28]–[30] make the point target assumption, under which no target may generate more than one measurement in a single scan and no target may occlude another target. However, in reality, planar and extended targets such as walls and building fronts are commonplace in the urban AGV environment. Mapping of ellipsoidal extended targets has recently been proposed in [31], but occlusions are not modeled and only simulation results are presented.

IV. IMPLEMENTATION
The theoretical formulation of the localization technique applied in this paper was described in Sec. II. This section presents some of the implementation details necessary for a computationally efficient globally-optimal solution, and describes the challenge of choosing an inverse sensor model for localization based on automotive radar sensors.
A. Efficient Global Optimization
As outlined in Sec. III, repetitive patterns in the urban AGV radar environment give rise to locally-optimal solutions to the radar-based localization problem in the vicinity of the globally-optimal solution. Accordingly, localization problems with tight integrity requirements demand application of techniques to find the global optimum of (1). This section notes that efficient globally-optimal procedures exist for maximizing the discretized PHD correlation as defined in (2), and outlines two optimizations which further reduce the computational complexity of the problem.
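As a baseline for these optimizations, the discretized correlation objective (2) can be evaluated by brute force over translational offsets. The sketch below does exactly that on a toy grid; the map layout, reflector density, and search range are illustrative assumptions, and rotation is omitted for brevity.

```python
import numpy as np

# Brute-force evaluation of the discretized correlation objective (2):
# slide the batch grid over the map grid and score each translational
# offset by the sum of element-wise products.
rng = np.random.default_rng(0)
map_phd = (rng.random((40, 40)) < 0.05).astype(float)  # sparse clutter
map_phd[10:30, 18] = 1.0                               # a "wall" feature
map_phd[12, 8:25] = 1.0                                # another wall

# Simulate a batch observed at a known (circular) offset from the map.
true_dp, true_dq = 3, 5
batch_phd = np.roll(np.roll(map_phd, -true_dp, axis=0), -true_dq, axis=1)

best, best_off = -np.inf, None
for dp in range(-8, 9):               # exhaustive translational search
    for dq in range(-8, 9):
        shifted = np.roll(np.roll(batch_phd, dp, axis=0), dq, axis=1)
        score = np.sum(map_phd * shifted)
        if score > best:
            best, best_off = score, (dp, dq)
```

The exhaustive search recovers the true offset; the FFT machinery in the next subsection computes the same scores far more efficiently.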
1) FFT-based Cross-Correlation:
For two-dimensional vehicle state estimation with perfect batch odometry, the cross-correlation in (2) is to be maximized over the three parameters of the two-dimensional rigid transformation $[\Delta x, \Delta y, \Delta\phi]^\top$.

For a given value of $\Delta\phi$, the cross-correlation can be maximized efficiently over $\Delta\mathbf{t} = [\Delta x, \Delta y]^\top$ with FFT-based cross-correlation. The size of the discretized map and batch PHDs to be correlated, denoted $P \times Q$ in (2), is limited by the area scanned by the radar over a batch. Without loss of generality, let $P = Q = n$. Due to the convolution property of the FFT, the circular cross-correlation between the $n \times n$ matrices $D_m(\mathbf{x})$ and $D_b(\mathcal{T}_{\mathbf{\Theta}}(\mathbf{x}))$ can be computed as

$$D_m \ast D_b = \mathcal{F}^{-1}\{\mathcal{F}\{D_m(\mathbf{x})\} \circ \mathcal{F}\{D_b(-\mathcal{T}_{\mathbf{\Theta}}(\mathbf{x}))\}\}$$

where $\mathcal{F}$ denotes the FFT operator and $\circ$ denotes the element-wise multiplication operator. To compute the required linear cross-correlation, however, both $D_m$ and $D_b$ must be padded with $n/2$ zeros on each side along each dimension, leading to matrices $\check{D}_m$ and $\check{D}_b$ of size $2n \times 2n$. Then, the linear cross-correlation is

$$D_m \star D_b = \mathcal{F}^{-1}\{\mathcal{F}\{\check{D}_m(\mathbf{x})\} \circ \mathcal{F}\{\check{D}_b(-\mathcal{T}_{\mathbf{\Theta}}(\mathbf{x}))\}\} \quad (5)$$

The two FFTs and one IFFT in (5) each have a computational complexity of

$$k (2n)^2 \log{(2n)^2} \approx 8 k n^2 \log n$$

where $k$ is a constant factor dependent on the FFT implementation. If the number of rotations to be examined is $m$, this leads to a total complexity of

$$m \times 24 k n^2 \log n = 24 k m n^2 \log n \quad (6)$$

One observation here is that $\mathcal{F}\{\check{D}_m(\mathbf{x})\}$ for the map is independent of $\Delta\phi$, and so must only be computed once. This reduces the overall complexity to $8k(2m + 1) n^2 \log n$.
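The padding requirement in (5) can be checked numerically: zero-padding the n-by-n grids to 2n-by-2n makes the FFT-based circular correlation reproduce a direct, loop-based linear cross-correlation at every lag. The grids below are random toy data; sizes are illustrative.

```python
import numpy as np

# Verify that zero-padding to 2n makes FFT correlation linear, as in (5).
n = 8
rng = np.random.default_rng(1)
Dm = rng.random((n, n))
Db = rng.random((n, n))

N = 2 * n  # pad each n-by-n grid into a 2n-by-2n array
Dm_p = np.zeros((N, N)); Dm_p[:n, :n] = Dm
Db_p = np.zeros((N, N)); Db_p[:n, :n] = Db
# Circular correlation of the padded grids via the FFT convolution property.
C = np.fft.ifft2(np.fft.fft2(Dm_p) * np.conj(np.fft.fft2(Db_p))).real

def direct_xcorr(u, v):
    """Loop-based linear cross-correlation of Dm and Db at lag (u, v)."""
    s = 0.0
    for i in range(n):
        for j in range(n):
            if 0 <= i - u < n and 0 <= j - v < n:
                s += Dm[i, j] * Db[i - u, j - v]
    return s

lags_ok = all(
    np.isclose(C[u % N, v % N], direct_xcorr(u, v))
    for u in range(-4, 5) for v in range(-4, 5)
)
```

Negative lags appear at wrapped indices (e.g., lag −4 at index N−4), which is why the comparison indexes `C` modulo N.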
2) Minimal Padding for Desired Linear Cross-Correlation:
Typically, the size of the map and batch PHDs to be correlated, given by $P \times Q$, is much larger than the translational offset search space implied by the initial uncertainty in the vehicle position. In other words, the admissible values of $\Delta\mathbf{t}$ lie within a small fraction of the space scanned in the radar batch. Accordingly, the optimization method only requires the linear cross-correlation values within this admissible region. If $n_l$ denotes the size of the translational search space in discretized PHD coordinates, then $D_m$ and $D_b$ need only be padded with $n_l/2$ zeros on each side along each dimension, leading to matrices $\check{D}_m$ and $\check{D}_b$ of size $(n + n_l) \times (n + n_l)$. With minimal padding, the overall complexity of FFT-based correlation maximization reduces to

$$2k(2m + 1)(n + n_l)^2 \log{(n + n_l)} \approx 2k(2m + 1) n^2 \log n$$

where the approximation holds if $n_l \ll n$.
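Putting the pieces of Sec. IV-A together, the sketch below runs the rotate-and-correlate search end-to-end on synthetic data. It is a simplified stand-in for the full algorithm, not the paper's implementation: rotation is applied to the batch points in the spatial domain, minimal padding and the FFT rotation theorem are omitted, and all sizes and the reflector layout are invented.

```python
import numpy as np

# End-to-end sketch: rasterize map and batch point clouds, exhaustively
# test candidate headings, and use an FFT circular correlation for the
# translational search at each heading.
def rasterize(points, n):
    """Occupancy grid with 1 m cells, centered on the origin."""
    grid = np.zeros((n, n))
    idx = np.floor(points).astype(int) + n // 2
    ok = (idx >= 0).all(axis=1) & (idx < n).all(axis=1)
    grid[idx[ok, 0], idx[ok, 1]] = 1.0
    return grid

rng = np.random.default_rng(3)
map_pts = rng.uniform(-20, 20, size=(60, 2)).round()   # synthetic reflectors

true_phi = np.deg2rad(6.0)                             # true heading offset
true_t = np.array([3.0, -2.0])                         # true translation
c, s = np.cos(true_phi), np.sin(true_phi)
batch_pts = map_pts @ np.array([[c, -s], [s, c]]).T + true_t

n = 64
Dm = rasterize(map_pts, n)
best_score, phi_hat = -np.inf, None
for phi_deg in range(-8, 9, 2):                        # candidate headings
    phi = np.deg2rad(phi_deg)
    c, s = np.cos(phi), np.sin(phi)
    undone = batch_pts @ np.array([[c, -s], [s, c]])   # rotate batch by -phi
    Db = rasterize(undone, n)
    # Correlation peak over all translational offsets for this heading.
    C = np.fft.ifft2(np.fft.fft2(Db) * np.conj(np.fft.fft2(Dm))).real
    score = C.max()
    if score > best_score:
        best_score, phi_hat = score, phi_deg
```

At the correct candidate heading the de-rotated batch is an exact cell-wise shift of the map, so its correlation peak equals the number of occupied map cells and dominates all other headings.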
3) The FFT Rotation Theorem:
Observe from (5) that the method re-computes the FFT after every rotation of the PHD $D_b$. The FFT rotation theorem [32] states that a coordinate rotation in the spatial domain leads to the same coordinate rotation in the frequency domain; that is, if $\mathbf{R}_{\Delta\phi}$ represents the rotation operator which acts on the PHD, then

$$\mathcal{F}\{\mathbf{R}_{\Delta\phi} \cdot D_b(\mathbf{x})\} = \mathbf{R}_{\Delta\phi} \cdot \mathcal{F}\{D_b(\mathbf{x})\}$$

This implies that instead of performing $m$ FFTs on rotated replicas of $D_b$, a single FFT may be performed, followed by $m$ coordinate rotations. It must be noted that rotation of $\mathcal{F}\{D_b(\mathbf{x})\}$ could involve interpolation of values to non-integer indices, which may offset the computational advantage of this method. Experiments conducted as part of this paper suggest that nearest-neighbor interpolation, i.e., assigning the value from the nearest integer index, has no discernible adverse effect on the performance of the algorithm. With application of the FFT rotation theorem, the overall complexity is reduced to

$$2k(m + 2)(n + n_l)^2 \log{(n + n_l)} + m(n + n_l)^2 \approx 2kmn^2 \log n$$

which is roughly an order of magnitude faster than the basic implementation in (6).

Algorithm 1 provides the pseudocode for the optimized FFT-based correlation maximization algorithm. For each epoch, the algorithm is provided the prior map point cloud in the true world frame, denoted $\mathbf{p}^W_m$, and a batch of $k$ radar scans in the body frame, denoted $\mathbf{p}^B_{b,k}$. An initial guess for the vehicle position and heading trajectories $\mathbf{t}^V_k$ and $\phi^V_k$ is provided in a frame $V$ which is offset from the $W$ frame by a rigid two-dimensional transform parameterized by $\mathbf{\Theta} = [\Delta x, \Delta y, \Delta\phi]^\top$. The initial guess uncertainties $\sigma_t$ and $\sigma_\phi$, and the desired discretization steps $\delta t$ and $\delta\phi$, are also provided. The algorithm must estimate the offset transformation $\hat{\mathbf{\Theta}}$ between $W$ and $V$. The toOGM routine converts the provided point cloud to an occupancy grid with the desired grid cell size, according to the procedure described in Sec. II-B.
The pad routine pads the provided array with the desired number of zeros along each dimension on both ends. The three-dimensional matrix $\mathbf{R}$ holds the linear cross-correlation outputs.

Algorithm 1: fastGlobalAlign
Input: $\mathbf{p}^W_m$, $\mathbf{p}^B_{b,k}$, $\mathbf{t}^V_k$, $\phi^V_k$, $\sigma_t$, $\sigma_\phi$, $\delta t$, $\delta\phi$
Output: $\hat{\mathbf{\Theta}}$
  $D_m$ = toOGM($\mathbf{p}^W_m - \mathbf{t}^V_k$, $\delta t$)
  $\tilde{D}_m$ = FFT2(pad($D_m$, $\sigma_t$))
  $\mathbf{p}^V_{b,k}$ = $\mathbf{R}(\phi^V_k)\,\mathbf{p}^B_{b,k} + \mathbf{t}^V_k$
  $D_b$ = toOGM($\mathbf{p}^V_{b,k} - \mathbf{t}^V_k$, $\delta t$)
  $\tilde{D}_b$ = FFT2(pad($D_b$, $\sigma_t$))
  $n = 3\sigma_t/\delta t$;  $m = 3\sigma_\phi/\delta\phi$
  for $i = -m : m$ do
    $\Delta\phi = i\,\delta\phi$
    $\tilde{D}^{\Delta\phi}_b$ = rotate2($\tilde{D}_b$, $\Delta\phi$)
    $\mathbf{R}[i, :, :]$ = IFFT2($\tilde{D}_m \circ$ conj($\tilde{D}^{\Delta\phi}_b$))$[-n : n, -n : n]$
  end
  $\hat{\mathbf{\Theta}}$ = argmax($\mathbf{R}$)

B. Automotive Radar Inverse Sensor Model
This section returns to the challenge of adapting the inverse sensor model $p_{pq}(O \mid \mathbf{z}_k)$ to the measurement characteristics of automotive radar sensors, introduced earlier in Sec. II-B. Fig. 3 shows a simplified radar scan $\mathbf{z}_k$ of an underlying occupancy grid. For clarity of exposition, four distinct categories of grid cells in Fig. 3 are defined below:
• Type A: Grid cells in the vicinity of a radar range-azimuth return.
• Type B: Grid cells along the path between the radar sensor and Type A grid cells.
• Type C: Grid cells in the "viewshed" of the radar sensor, i.e., within the radar field-of-view and not shadowed by another object, but not of Type A or Type B.
• Type D: Grid cells outside the field-of-view of the radar (Type D1) or shadowed by other objects closer to the radar (Type D2).
The inverse sensor model must choose a $p_{pq}(O \mid \mathbf{z}_k)$ value for each of these types of grid cells. In the following, the subscript $pq$ is dropped for cleaner notation.
Fig. 3. Schematic diagram showing four types of grid cells.
1) Conventional Choices for the Inverse Sensor Model:
Since z_k provides no additional information on Type D grid cells, the occupancy in these cells is conditionally independent of z_k, that is,

p_D(O | z_k) = p(O)

where p(O) is the prior probability of occupancy defined earlier in Sec. II-A.

Grid cells of Type B and Type C may be hypothesized to have low occupancy probability, since these grid cells were scanned by the sensor but no return was obtained. As a result, conventionally

p_B(O | z_k) ≤ p(O) and p_C(O | z_k) ≤ p(O)

Finally, grid cells of Type A may be hypothesized to have higher occupancy probability, since a return has been observed in the vicinity of these cells. Conventionally,

p_A(O | z_k) ≥ p(O)

In the limit, if the OGM grid cell size is comparable to the sensor range and angle uncertainty, or if the number of scans is large enough that the uncertainty is captured empirically, only the grid cells that contain the sensor measurement may be considered to be of Type A.
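These conventional choices plug directly into the standard log-odds occupancy-grid recursion. A minimal sketch follows; the probability constants are illustrative placeholders, not the paper's values:

```python
import numpy as np

# Illustrative constants (assumptions, not the paper's values)
P_PRIOR = 0.3   # p(O): prior occupancy probability
P_A = 0.7       # p_A(O|z_k): cells near a return
P_BC = 0.1      # p_B, p_C: cells scanned with no return (conventional model)

def logit(p):
    return np.log(p / (1.0 - p))

def update_ogm(log_odds, cell_types):
    """One Bayesian log-odds update of an occupancy grid.

    cell_types: array of characters 'A', 'B', 'C', 'D' classifying each
    cell for the current scan z_k. Type D cells carry no information,
    so p_D(O|z_k) = p(O) and their posterior is unchanged.
    """
    p_inv = np.select(
        [cell_types == 'A', cell_types == 'B', cell_types == 'C'],
        [P_A, P_BC, P_BC],
        default=P_PRIOR,            # Type D: inverse model equals prior
    )
    # standard occupancy-grid recursion: add evidence, subtract prior
    return log_odds + logit(p_inv) - logit(P_PRIOR)
```

Cells of Type A accumulate positive log-odds (evidence for occupancy), Types B and C accumulate negative log-odds, and Type D cells are left untouched.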
2) Automotive Radar Sensor Characteristics:
Intense clutter and the sparsity of automotive radar data complicate the choice of the inverse sensor model.

Sparsity. First, the sparsity of the radar scan implies that many occupied Type A grid cells in the radar environment might be incorrectly categorized as free Type C cells. This can be observed in Fig. 1. As evidenced by the batch of scans in Fig. 1d, the radar environment is "dense" in that many grid cells contain radar reflectors. However, any individual radar scan, such as the one shown in Fig. 1c, suggests a much more sparse radar environment. As a result, a grid cell which is occupied in truth will be incorrectly categorized as Type C in many scans, and correctly as Type A in only a few scans.

The sparsity of radar returns also makes it challenging to distinguish Type C cells from cells of Type D2. Since many occluding obstacles are not detected in each scan, the occluded cells of Type D2 are conflated with free cells of Type C. In the context of the inverse sensor model, as the radar scan becomes more sparse,

p_C(O | z_k) → p_D(O | z_k)^−

where the superscript − denotes a limit approaching from below. Intuitively, approaching p_D(O | z_k) implies that the measurement z_k is very sparse in comparison to the true occupancy, and thus does not provide much information on lack of occupancy.

Clutter. Second, there is the matter of clutter. The grid cells in the vicinity of a clutter measurement may be incorrectly categorized as Type A, and the grid cells along the path between the radar and the clutter measurement may be incorrectly categorized as Type B. In the context of the inverse sensor model, as the radar scan becomes more cluttered,

p_B(O | z_k) → p_D(O | z_k)^−
p_A(O | z_k) → p_D(O | z_k)^+

where the superscript + denotes a limit approaching from above.
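The sparsity effect above can be caricatured with a toy Monte-Carlo experiment: if each truly occupied cell produces a return in a given scan with only a small detection probability (the 10% figure below is an arbitrary assumption, not a measured value), then in any single scan the vast majority of occupied cells are mislabeled as apparently-free Type C cells:

```python
import numpy as np

rng = np.random.default_rng(2)

def label_occupied_cells(n_cells, p_detect):
    """Label truly occupied cells as seen by one sparse scan: cells that
    produce a return become Type A; the rest lie in the viewshed and are
    (incorrectly) labeled Type C, i.e., apparently free."""
    detected = rng.random(n_cells) < p_detect
    return np.where(detected, 'A', 'C')

labels = label_occupied_cells(10_000, 0.1)
frac_free = np.mean(labels == 'C')  # near 0.9: most occupied cells look free
```

This is why a conventional model that strongly penalizes Type C cells would steadily erase truly occupied cells from the map.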
3) A Pessimistic Inverse Sensor Model:
The results presented in Sec. V are based on a pessimistic sensor model, such that p_B(O | z_k) = p_C(O | z_k) = p_D(O | z_k). This model assumes that the radar measurements provide no information about free space in the radar environment. In particular, the inverse sensor model sets p_B(O | z_k) = p_C(O | z_k) = p_D(O | z_k) = p(O), and assigns p_A(O | z_k) a fixed value greater than p(O).

V. EXPERIMENTAL RESULTS
The radar-based urban positioning system was evaluated experimentally using the dataset described in [12], collected on Thursday, May 9, 2019 and Sunday, May 12, 2019 during driving sessions each day in and around the urban center of Austin, TX. This section presents the evaluation results.

A. Data Collection

Fig. 4 shows the route followed by the radar-instrumented vehicle. Of particular interest is the southern part of the route, which combs through every street in the Austin, TX downtown area. Urban canyons are the most challenging for precise GNSS-based positioning [9] and would benefit the most from radar-based positioning. The route was driven once on a weekday and again on the weekend to evaluate robustness of the radar map to changes in traffic and parking patterns.
Fig. 4. Test route through The University of Texas west campus and Austin downtown. These areas are the most challenging for precise GNSS-based positioning and thus would benefit the most from radar-based positioning. The route was driven once on a weekday and again on the weekend to evaluate robustness of the radar map to changes in traffic and parking patterns.
The Delphi electronically-scanning radar (ESR) and short-range radars (SRR2s) used in this study are mounted on an integrated perception platform called the Sensorium, shown schematically in Fig. 5. The coverage patterns of the radar units are shown in Fig. 6. The Sensorium's onboard computer timestamps and logs the radar returns from the three radar units.

The reference position and orientation trajectory for the data are generated with the iXblue ATLANS-C: a high-performance RTK-GNSS coupled fiber-optic gyroscope (FOG) inertial navigation system (INS). The post-processed position solution obtained from the ATLANS-C is decimeter-accurate throughout the dataset.
B. Mapping
The first step to radar-based localization is generation of a radar map point cloud. Radar scans collected from the May 9, 2019 session are aggregated to create a map with the benefit of the ATLANS-C reference trajectory. The map point cloud is stored in a k-d tree for efficient querying during localization.

Two important notes are in order here. First, automotive radar clutter is especially intense when the vehicle is stationary. Accordingly, this paper disregards radar returns
Fig. 5. The University of Texas Sensorium is an integrated platform for automated and connected vehicle perception research. It includes three automotive radar units, one electronically-scanning radar (ESR) and two short-range radars (SRR2s); stereo visible light cameras; automotive- and industrial-grade inertial measurement units (IMUs); a dual-antenna, multi-frequency software-defined GNSS receiver; 4G cellular connectivity; and a powerful internal computer. An iXblue ATLANS-C CDGNSS-disciplined INS (not shown) is mounted at the rear of the platform to provide the ground truth trajectory.
Fig. 6. Coverage patterns for the three Sensorium radar units. The ESR provides simultaneous sensing in a narrow long-range (175 m) coverage area and a wider medium-range (60 m) area. The SRR2 units each have a coverage area extending to 80 m. The line l marks the left-most extent of the right SRR2's field of view. Similarly, l marks the right-most extent of the left SRR2's field of view. Each SRR2 is installed facing outward from the centerline.

obtained when the vehicle is moving slower than a fixed speed threshold, for both mapping and localization. This implies that radar-based localization is only available during periods when the vehicle is moving faster than this threshold. Second, it was observed that radar returns far from the vehicle are mostly clutter and have negligible resemblance to the surrounding structure. As a result, this paper only considers radar returns with range less than 50 m. It must be noted
that these two parameters have not been optimized to produce the smallest estimation errors; they have been fixed based on visual inspection.

Fig. 7. This figure shows an interesting example of radar-based urban positioning with the proposed method. Panel (a) shows the occupancy grid estimated from the prior map point cloud. Panel (b) shows the same for a batch of scans collected in the same region. For ease of visualization, the batch occupancy grid has already been aligned with the map occupancy grid. Panel (c) shows the cross-correlation between the batch and map occupancy grids at Δφ = 0°. Given that no rotational or translational offset error has been applied to the batch, the correlation peak should appear at (0, 0). The offset of the peak in panel (c) from (0, 0) is the translational estimate error of the proposed method. Also note the increased positioning uncertainty in the along-track direction, and the two local correlation peaks (marked with red squares in panel (c)) due to the repeating periodic pattern of radar reflectors in the map and the batch (marked with red rectangles in panels (a) and (b)).

C. Localization Results with Perfect Odometry
This section evaluates the localization performance of the proposed method on the May 12, 2019 radar data for the case in which odometric drift over the radar batch-of-scans interval is negligible. With decreasing quality of the odometry sensor(s), this assumption holds only over ever shorter batch intervals. Therefore, the performance of the algorithm is evaluated for a range of batch lengths.
1) Test procedure:
A drift-free vehicle trajectory over a batch-of-scans interval is generated with the reference solution from the iXblue ATLANS-C. This trajectory is then artificially offset by a two-dimensional rigid transformation error. The translational error is distributed as Δt ∼ N(0, σ_t² I) with σ_t = 2 m, and the rotational error is distributed as Δφ ∼ N(0, σ_φ²) with σ_φ = 3°. The proposed localization technique takes the erroneously-offset position and heading trajectory as the initial guess of the vehicle state.

The prior radar map point cloud in the vicinity of the initial guess of the vehicle position is retrieved with a query to the k-d tree. Additionally, the batch of body-frame radar returns is transformed to a common reference frame based on the erroneous trajectory. The goal is to align the two point clouds and thereby estimate the artificially-induced translational and rotational offset.

As a first step, the map and batch occupancy grids are generated based on the aggregated point clouds, following the procedure described in Secs. II and IV. The extent of the occupancy grids is determined by the bounds of the area scanned by the radars during localization. Given the maximum range of radar returns considered here, the correlation region is typically restricted to ±
50 m around the provided position at the end of the batch. With a grid cell size of 10 cm, the occupancy grid size in discrete coordinates is typically on the order of n = 1000. The translation search space is limited to ±3σ_t = ±6 m, resulting in n_l = 120. Similarly, the rotation search space is limited to ±3σ_φ = ±9° in 0.5° steps, resulting in m = 18.
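The retrieval-and-perturb test harness just described can be sketched as follows. The 50 m query radius and the offset standard deviations follow the text; the map point cloud, function names, and variable names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

SIGMA_T = 2.0                # m, translational offset std (per the text)
SIGMA_PHI = np.deg2rad(3.0)  # rad, heading offset std (per the text)
rng = np.random.default_rng(0)

def local_map(tree, map_xy, center, radius=50.0):
    """Query the prior-map k-d tree for points within `radius` m of the
    position guess (the 50 m correlation region used in the text)."""
    return map_xy[tree.query_ball_point(center, r=radius)]

def perturb(positions):
    """Offset a reference batch trajectory by one random rigid 2-D
    transform; the localizer must then recover (dx, dy, dphi)."""
    dx, dy = rng.normal(0.0, SIGMA_T, size=2)
    dphi = rng.normal(0.0, SIGMA_PHI)
    c, s = np.cos(dphi), np.sin(dphi)
    R = np.array([[c, -s], [s, c]])
    return positions @ R.T + [dx, dy], (dx, dy, dphi)

map_xy = rng.uniform(0.0, 1000.0, size=(50_000, 2))  # stand-in map cloud
tree = cKDTree(map_xy)
```

Applying the inverse of the returned rigid transform to the perturbed trajectory recovers the original, which is exactly the consistency check the localization experiments rely on.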
2) Five-second Batches:
This section evaluates and analyzes the proposed method for a fixed batch length of 5 s.

Fig. 7 shows an interesting example of radar-based urban positioning. For ease of visualization, no translation or rotation offset error has been applied to the batch point cloud. The occupancy grid estimated from the batch point cloud is shown in Fig. 7b. Similarly, Fig. 7a shows the occupancy grid estimated from the map point cloud retrieved from the map database. Fig. 7c shows the cross-correlation between the batch and map occupancy grids at Δφ = 0°. Given that no offset error has been applied to the batch, one should expect the correlation peak to appear at (0, 0) in Fig. 7c. The offset of the peak from (0, 0) in this case is the translational estimate error.

Two other interesting features of the cross-correlation in Fig. 7c are worth noting. First, the correlation peak decays more slowly in the along-track direction, in this case approximately aligned with the south-southwest direction. This is a general feature observed throughout the dataset, since most of the radar reflectors are aligned along the sides of the streets. Second, there emerge two local correlation peaks offset along the direction of travel. These local peaks are due to the repeating periodic structure of radar reflectors in both the map and the batch occupancy grids. In other words, shifting the batch occupancy grid forward or backward along the vehicle trajectory by this characteristic distance aligns the periodically-repeating reflectors in an off-by-one manner, leading to another plausible solution. Importantly, the uncertainty envelope of the initial position can span several meters, encompassing both the global optimum and one or more local optima. This explains why gradient-based methods, which seek the nearest optimum, are poorly suited for use with automotive radar.

Fig. 8. The complementary cumulative distribution (also known as a survival function) indicates how often (that is, in what fraction of epochs) the localization procedure in the text was found to exceed a given level of error. The logarithmic vertical scale makes the tails of the distribution, corresponding to outliers that may cause tracking errors, more visible. For 5 s batches, the 95-percentile horizontal positioning error is observed to be 44 cm and the 95-percentile heading error is observed to be under 1°.

The complementary cumulative distribution functions (CCDFs), i.e., the fraction of epochs exceeding any given level of error, of the horizontal position and heading error magnitudes are plotted on a log scale in Fig. 8 for 5 s batches. In 95% of epochs, the horizontal position error magnitude was no greater than 44 cm, and the heading error magnitude was under 1°.
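The CCDF curves and percentile figures reported here are straightforward to reproduce from a vector of per-epoch error magnitudes; a minimal sketch with made-up data (the error values below are illustrative, not the experiment's):

```python
import numpy as np

def ccdf(errors):
    """Empirical complementary CDF: for each sorted error level, the
    fraction of epochs whose error exceeds that level."""
    levels = np.sort(np.asarray(errors, dtype=float))
    exceed = 1.0 - np.arange(1, levels.size + 1) / levels.size
    return levels, exceed

# illustrative per-epoch horizontal position errors in meters
errs = np.array([0.05, 0.10, 0.15, 0.20, 0.22, 0.25, 0.30, 0.44, 0.50, 0.90])
p95 = np.quantile(errs, 0.95)  # 95-percentile error
```

Plotting `exceed` against `levels` on a logarithmic vertical axis reproduces the survival-function view of Fig. 8.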
3) Sensitivity to Batch Length:
The assumption of negligible odometric drift over the batch-of-scans interval does not hold over long batches for low-cost odometry sensors, but may hold over longer batches for high-performance sensors. Thus, it is important to evaluate the proposed localization technique's sensitivity to batch length.

Fig. 9 shows the CCDFs for a range of batch lengths. As expected, the errors are smaller for longer batch lengths. It is interesting to note that the lower-percentile errors are similar for different batch lengths, but the difference between the CCDFs becomes more pronounced at higher percentiles. This indicates that shorter batch lengths are adequate in most cases, but longer batches help contain the estimation error in a few cases. Such behavior must be taken into account in the integrity analysis of the system. For example, while all batch lengths exhibit similar lower-percentile error behavior, the shortest batches cannot be used in applications with a 50 cm alert limit and a small per-batch integrity risk. On the other hand, in applications where a lower-percentile error is the performance metric, it may be desirable to choose shorter batches to relax the requirements on short-term odometric performance.

Fig. 9. CCDFs for different batch lengths. The lower-percentile errors are similar for shorter and longer batch lengths, but the difference becomes more noticeable at higher percentiles.

D. Sensitivity to Odometric Drift
Even over small batch durations, the odometric vehicle trajectory may exhibit non-negligible drift. While the proposed technique does not provide a means to estimate and eliminate such drift, it is important to study whether such effects lead to catastrophic degradation in performance.

This section considers simple drift models for position and heading drift during batch generation. In particular, odometry-based orientation is typically computed as integrated angular rate (e.g., from a gyroscope), and odometry-based distance is typically derived either as integrated speed (e.g., with rotary encoders or radar-based odometry) or as double-integrated acceleration (e.g., with an accelerometer). To a coarse approximation, this section assumes that the largest source of error in odometry is the integration of residual biases. This has the effect of a linear drift in orientation estimates and either a linear (in the case of integrated velocity measurements) or quadratic (in the case of double-integrated acceleration) drift in position estimates.

As before, a drift-free vehicle batch trajectory is obtained from the reference solution from the iXblue ATLANS-C. In addition to the rigid transformational offset error applied in the previous subsection, a time-dependent position and heading drift is introduced into the batch trajectory. The trajectory is then used to transform the body-frame radar returns into a common reference frame as before. Assuming that the odometric drift is small compared to the rigid offset, the proposed approach is evaluated in its ability to estimate the rigid offset.

This section restricts the analysis to batches with three different levels of pose drift. The level of drift is characterized by the standard deviation of the position and heading error at the end of the interval. The three pairs of drift levels considered here are chosen to be relatively small compared to σ_t and σ_φ: (10 cm, 0.25°), (20 cm, 0.5°), and (40 cm, 1.0°). Both the linear and quadratic error growth models for position drift are explored.

Fig. 10. CCDFs for batches with three different levels of trajectory drift during batch generation. The solid lines represent quadratic drift in position and linear drift in heading. The dashed lines represent linear drift in both position and heading. The three colors represent the standard deviation of the drift at the end of the interval: (10 cm, 0.25°), (20 cm, 0.5°), and (40 cm, 1.0°).

Fig. 10 shows the CCDF plots for different levels of odometric drift and different position drift models. As expected, the performance degrades with larger drift during batch generation. Nevertheless, the effect of the impairment is not catastrophic: with quadratic position drift and linear heading drift standard deviations of 40 cm and 1.0° at the end of the batch interval, respectively, the 95-percentile error in estimation of the rigid offset is 67 cm in position and under 1° in heading.

VI. CONCLUSION
A novel technique for robust 50-cm-accurate urban ground positioning based on commercially-available low-cost automotive radars has been proposed. This is a significant development in the field of AGV localization, which has traditionally been based on sensors such as lidar and cameras that perform poorly in bad weather conditions. The proposed technique is computationally efficient yet obtains a globally-optimal translation and heading solution, avoiding local minima caused by repeating patterns in the urban radar environment. Performance evaluation on an extensive and realistic urban data set shows that, when coupled with stable short-term odometry, the technique maintains 95-percentile errors below 50 cm in horizontal position and 1° in heading.

ACKNOWLEDGMENTS
This work has been supported by Honda R&D Americas through The University of Texas Situation-Aware Vehicular Engineering Systems (SAVES) Center (http://utsaves.org/), an initiative of the UT Wireless Networking and Communications Group; by the National Science Foundation under Grant No. 1454474 (CAREER); and by the Data-supported Transportation Operations and Planning Center (DSTOP), a Tier 1 USDOT University Transportation Center.
REFERENCES

[1] D. Fajardo, T.-C. Au, S. Waller, P. Stone, and D. Yang, "Automated intersection control: Performance of future innovation versus current traffic signal control," Transportation Research Record: Journal of the Transportation Research Board, no. 2259, pp. 223–232, 2011.
[2] J. Choi, V. Va, N. Gonzalez-Prelcic, R. Daniels, C. R. Bhat, and R. W. Heath, "Millimeter-wave vehicular communication to support massive automotive sensing," IEEE Communications Magazine, vol. 54, no. 12, pp. 160–167, December 2016.
[3] D. LaChapelle, T. E. Humphreys, L. Narula, P. A. Iannucci, and E. Moradi-Pari, "Automotive collision risk estimation under cooperative sensing," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Barcelona, Spain, 2020.
[4] K. S. Yen, C. Shankwitz, B. Newstrom, T. A. Lasky, and B. Ravani, "Evaluation of the University of Minnesota GPS Snowplow Driver Assistance Program," Tech. Rep., 2015.
[5] M. Petovello, M. Cannon, and G. Lachapelle, "Benefits of using a tactical-grade IMU for high-accuracy positioning," Navigation, Journal of the Institute of Navigation, vol. 51, no. 1, pp. 1–12, 2004.
[6] B. M. Scherzinger, "Precise robust positioning with inertially aided RTK," Navigation, vol. 53, no. 2, pp. 73–83, 2006.
[7] H. T. Zhang, "Performance comparison on kinematic GPS integrated with different tactical-grade IMUs," Master's thesis, The University of Calgary, Jan. 2006.
[8] S. Kennedy, J. Hamilton, and H. Martell, "Architecture and system performance of SPAN—NovAtel's GPS/INS solution," in Position, Location, And Navigation Symposium, 2006 IEEE/ION. IEEE, 2006, p. 266.
[9] T. E. Humphreys, L. Narula, and M. J. Murrian, "Deep urban unaided precise GNSS vehicle positioning," IEEE Intelligent Transportation Systems Magazine, 2020, to be published.
[10] T. E. Humphreys, Springer Handbook of Global Navigation Satellite Systems. Springer, 2017, ch. Interference, pp. 469–504.
[11] L. Narula, J. M. Wooten, M. J. Murrian, D. M. LaChapelle, and T. E. Humphreys, "Accurate collaborative globally-referenced digital mapping with standard GNSS," Sensors.
[12] in Proceedings of the IEEE/ION PLANS Meeting, Portland, OR, 2020, to be published.
[13] R. P. Mahler, "Multitarget Bayes filtering via first-order multitarget moments," IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 4, pp. 1152–1178, 2003.
[14] A. Myronenko and X. Song, "Point set registration: Coherent point drift," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2262–2275, 2010.
[15] B. Jian and B. C. Vemuri, "Robust point set registration using Gaussian mixture models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1633–1645, 2010.
[16] O. Erdinc, P. Willett, and Y. Bar-Shalom, "The bin-occupancy filter and its connection to the PHD filters," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4232–4246, 2009.
[17] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.
[18] D. Chetverikov, D. Svirko, D. Stepanov, and P. Krsek, "The trimmed iterative closest point algorithm," in Object Recognition Supported by User Interaction for Service Robots, vol. 3. IEEE, 2002, pp. 545–548.
[19] E. Ward and J. Folkesson, "Vehicle localization with low cost radar sensors," IEEE, 2016, pp. 864–870.
[20] Y. Tsin and T. Kanade, "A correlation-based approach to robust point set registration," in European Conference on Computer Vision. Springer, 2004, pp. 558–569.
[21] W. Gao and R. Tedrake, "FilterReg: Robust and efficient probabilistic point-set registration using Gaussian filter and twist parameterization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 11095–11104.
[22] S. H. Cen and P. Newman, "Radar-only ego-motion estimation in difficult settings via graph matching," arXiv preprint arXiv:1904.11476, 2019.
[23] M. Holder, S. Hellwig, and H. Winner, "Real-time pose graph SLAM based on radar," IEEE, 2019, pp. 1145–1151.
[24] F. Schuster, C. G. Keller, M. Rapp, M. Haueis, and C. Curio, "Landmark based radar SLAM using graph optimization," in Intelligent Transportation Systems (ITSC), 2016 IEEE 19th International Conference on. IEEE, 2016, pp. 2559–2564.
[25] M. Schoen, M. Horn, M. Hahn, and J. Dickmann, "Real-time radar SLAM."
[26] J. Callmer, D. Törnqvist, F. Gustafsson, H. Svensson, and P. Carlbom, "Radar SLAM using visual features," EURASIP Journal on Advances in Signal Processing, vol. 2011, no. 1, p. 71, 2011.
[27] K. Yoneda, N. Hashimoto, R. Yanase, M. Aldibaja, and N. Suganuma, "Vehicle localization using 76GHz omnidirectional millimeter-wave radar for winter automated driving," IEEE, 2018, pp. 971–977.
[28] J. Mullane, B.-N. Vo, M. D. Adams, and B.-T. Vo, "A random-finite-set approach to Bayesian SLAM," IEEE Transactions on Robotics, vol. 27, no. 2, pp. 268–282, 2011.
[29] H. Deusch, S. Reuter, and K. Dietmayer, "The labeled multi-Bernoulli SLAM filter," IEEE Signal Processing Letters, vol. 22, no. 10, pp. 1561–1565, 2015.
[30] M. Stübler, S. Reuter, and K. Dietmayer, "A continuously learning feature-based map using a Bernoulli filtering approach," IEEE, 2017, pp. 1–6.
[31] M. Fatemi, K. Granström, L. Svensson, F. J. Ruiz, and L. Hammarstrand, "Poisson multi-Bernoulli mapping using Gibbs sampling," IEEE Transactions on Signal Processing, vol. 65, no. 11, pp. 2814–2827, 2017.
[32] B. S. Reddy and B. N. Chatterji, "An FFT-based technique for translation, rotation, and scale-invariant image registration," IEEE Transactions on Image Processing, vol. 5, no. 8, pp. 1266–1271, 1996.