The Adaptive Labeled Multi-Bernoulli Filter
Andreas Danzer, Stephan Reuter, Klaus Dietmayer
Institute of Measurement, Control, and Microtechnology, Ulm University, Ulm, Germany
Email: {andreas.danzer, stephan.reuter, klaus.dietmayer}@uni-ulm.de

Abstract—This paper proposes a new multi-Bernoulli filter called the Adaptive Labeled Multi-Bernoulli filter. It combines the relative strengths of the known δ-Generalized Labeled Multi-Bernoulli and the Labeled Multi-Bernoulli filter. The proposed filter provides more precise target tracking in critical situations, where the Labeled Multi-Bernoulli filter loses information through the approximation error in the update step. In non-critical situations it inherits the advantage of the Labeled Multi-Bernoulli filter of reducing the computational complexity by using the LMB approximation.

I. INTRODUCTION
The aim of multi-object tracking is the estimation of the number of objects as well as their individual states based on noisy measurements, where missed detections and false alarms lead to ambiguities in the track-to-measurement association. Further, adequate models for the appearance and disappearance of objects are required. Approaches to tackle multi-object tracking are Joint Probabilistic Data Association (JPDA) [1], Multiple Hypotheses Tracking (MHT) [2], and the Random Finite Set (RFS) based multi-object Bayes filter [3].

Based on the mathematical tools of finite set statistics (FISST) [3], several approximations of the multi-object Bayes filter have been proposed during the last decade. The Probability Hypothesis Density (PHD) filter [4], [5], [6], [7] and the Cardinalized PHD (CPHD) filter [8], [9] approximate the multi-object posterior density by the first statistical moment and, in case of the CPHD filter, the cardinality distribution. The Cardinality Balanced Multi-Target Multi-Bernoulli (CB-MeMBer) filter [10] approximates the multi-object posterior using parameters of a multi-Bernoulli distribution and only propagates these parameters. In [11], the class of labeled RFSs as well as the first analytic implementation of the multi-object Bayes filter in form of the Generalized Labeled Multi-Bernoulli (GLMB) and the δ-GLMB filter are proposed. The δ-GLMB filter is shown to outperform the approximations of the multi-object Bayes filter, and the incorporation of the track labels in the filtering step significantly improves the track extraction in sequential Monte Carlo (SMC) implementations [11]. The labeled Multi-Bernoulli (LMB) filter [12] efficiently approximates the δ-GLMB filter by approximating the posterior after each update step using an LMB distribution. In [13], the LMB filter is shown to outperform PHD, CPHD, and multi-Bernoulli filters and to achieve almost the same accuracy as the δ-GLMB filter.
Further, the LMB filter is successfully used for the real-time environment perception system of the autonomous car of Ulm University [14], [15], [16] based on radar, lidar, and video sensors.

In [12], the approximation error of the LMB filter with respect to the cardinality distribution is illustrated in detail. Due to the assumption of statistically independent objects, the LMB representation does not facilitate multi-modal cardinality distributions. In contrast to the δ-GLMB filter, which uses multiple hypotheses to represent the data association uncertainty, the LMB filter represents the uncertainty within the spatial distribution of each track. Hence, depending on the merging and pruning thresholds applied to the spatial distributions of the LMB filter, this representation may also lead to a loss of information. Consequently, an adaptive multi-object tracking algorithm, which represents the tracks in δ-GLMB form in challenging scenarios (e.g. objects are close by or the track-to-measurement association is ambiguous) and uses the computationally efficient LMB representation in all other scenarios, is expected to outperform the LMB filter and to adjust the computational complexity to the complexity of the scenario.

In this contribution, the Adaptive Labeled Multi-Bernoulli (ALMB) filter is proposed, which automatically switches between an LMB and a δ-GLMB representation based on the Kullback-Leibler divergence [17] and the entropy [18]. The ALMB filter represents the multi-object posterior at each time step using a set of LMB and δ-GLMB distributions. Thus, only a subset of tracks is required to be represented in δ-GLMB form. The ALMB filter is compared to the δ-GLMB and the LMB filter using two simulations.

This paper is organized as follows: First, the basics of (labeled) random finite sets are outlined and the LMB filter as well as the δ-GLMB filter are introduced. Section V introduces the switching criteria and the scheme of the ALMB filter is detailed in Section VI. Finally, simulation results are shown in Section VII.

II. BACKGROUND
This section summarizes multi-object tracking using random finite sets (RFSs) and introduces the labeled multi-Bernoulli and the generalized labeled multi-Bernoulli RFS.
A. Random Finite Sets
An RFS is a finite-set-valued random variable with a random number of points, which are themselves random and unordered. The RFS X = {x^{(1)}, ..., x^{(N)}} ⊂ 𝕏 represents the multi-object state with finite single-target state vectors x^{(i)} ∈ 𝕏, where 𝕏 is the state space. Further, the RFS Z = {z^{(1)}, ..., z^{(M)}} ⊂ 𝕑 represents the multi-object observations with random measurements z^{(i)} out of the measurement space 𝕑. The finite set statistics introduced in [3] are a powerful mathematical tool for dealing with RFSs.

B. Multi-Bernoulli RFS
A Bernoulli RFS is empty with probability 1 − r and is a singleton with probability r, with a distribution p defined on 𝕏. Its probability density is given by (see [3])

\pi(X) = \begin{cases} 1 - r, & X = \emptyset, \\ r \cdot p(x), & X = \{x\}. \end{cases}  (1)

The cardinality distribution is a Bernoulli distribution with parameter r.

A multi-Bernoulli RFS is the union of M independent Bernoulli RFSs X^{(i)}, thus X = \bigcup_{i=1}^{M} X^{(i)}, and is completely described by the parameter set \{(r^{(i)}, p^{(i)})\}_{i=1}^{M}, where r^{(i)} is the existence probability and p^{(i)} the spatial distribution.

C. Labeled Multi-Bernoulli RFS
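As a concrete illustration of the multi-Bernoulli parameter set above, the cardinality distribution of a multi-Bernoulli RFS is the convolution of the M individual Bernoulli cardinality pmfs. A minimal Python sketch (function name and example values are illustrative, not from the paper):

```python
import numpy as np

def multi_bernoulli_cardinality(r):
    """Cardinality pmf of a multi-Bernoulli RFS with existence
    probabilities r = [r^(1), ..., r^(M)]: the convolution of the
    individual Bernoulli pmfs [1 - r^(i), r^(i)]."""
    rho = np.array([1.0])
    for ri in r:
        rho = np.convolve(rho, [1.0 - ri, ri])
    return rho  # rho[n] = P(|X| = n)

# two Bernoulli components with r = 0.9 and r = 0.5
rho = multi_bernoulli_cardinality([0.9, 0.5])
# rho = [0.05, 0.5, 0.45]
```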
In a multi-object scenario, it is often required to estimate the identity of an object in addition to its current state. For that reason, the class of labeled RFSs [11] appends a label ℓ ∈ 𝕃 to each state vector x ∈ 𝕏. Thus, a labeled RFS is an RFS on 𝕏 × 𝕃 with state space 𝕏 and finite label space 𝕃. In the following, labeled state vectors x = (x, ℓ) as well as labeled RFSs X are represented by bold letters.

Using the projection \mathcal{L} : 𝕏 × 𝕃 → 𝕃 defined by \mathcal{L}((x, ℓ)) = ℓ, the distinct label indicator

\Delta(\mathbf{X}) = \delta_{|\mathbf{X}|}(|\mathcal{L}(\mathbf{X})|),  (2)

where \mathcal{L}(\mathbf{X}) = \{\mathcal{L}(\mathbf{x}) : \mathbf{x} \in \mathbf{X}\} is the set of labels, ensures that the labels ℓ of a realization are distinct.

Similar to the multi-Bernoulli RFS, a labeled multi-Bernoulli (LMB) RFS is completely defined by the parameter set \{(r^{(\ell)}, p^{(\ell)})\}_{\ell \in \mathbb{L}} and its density is given by (see [12])

\pi(\mathbf{X}) = \Delta(\mathbf{X})\, w(\mathcal{L}(\mathbf{X}))\, p^{\mathbf{X}},  (3)

where

w(L) = \prod_{i \in \mathbb{L}} \left(1 - r^{(i)}\right) \prod_{\ell \in L} \frac{1_{\mathbb{L}}(\ell)\, r^{(\ell)}}{1 - r^{(\ell)}},  (4)

p(x, \ell) = p^{(\ell)}(x).  (5)

The cardinality distribution of an LMB RFS is identical to the one of its unlabeled version and is given by

\rho_{\mathrm{LMB}}(n) = \prod_{i \in \mathbb{L}} \left(1 - r^{(i)}\right) \sum_{I \in \mathcal{F}_n(\mathbb{L})} \prod_{\ell \in I} \frac{1_{\mathbb{L}}(\ell)\, r^{(\ell)}}{1 - r^{(\ell)}},  (6)

where \mathcal{F}_n(\mathbb{L}) denotes all subsets of 𝕃 with exactly n elements.

D. δ-Generalized Labeled Multi-Bernoulli RFS

A generalized labeled multi-Bernoulli (GLMB) RFS [11] is a labeled RFS with state space 𝕏 and (discrete) label space 𝕃 distributed according to

\pi(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{c \in \mathbb{C}} w^{(c)}(\mathcal{L}(\mathbf{X})) \left[p^{(c)}\right]^{\mathbf{X}},  (7)

where ℂ is a discrete index set and

\sum_{L \subseteq \mathbb{L}} \sum_{c \in \mathbb{C}} w^{(c)}(L) = 1,  (8)

\int p^{(c)}(x, \ell)\, \mathrm{d}x = 1.  (9)

A δ-generalized labeled multi-Bernoulli (δ-GLMB) RFS [11] with state space 𝕏 and (discrete) label space 𝕃 is a special case of a generalized labeled multi-Bernoulli RFS with

\mathbb{C} = \mathcal{F}(\mathbb{L}) \times \Xi,  (10)
w^{(c)}(L) = w^{(I, \xi)}\, \delta_I(L),  (11)
p^{(c)} = p^{(I, \xi)} = p^{(\xi)},  (12)

where \mathcal{F}(\mathbb{L}) denotes all finite subsets of 𝕃, the discrete space Ξ represents the history of track-to-measurement associations with realizations ξ ∈ Ξ, and I is a set of track labels. The density of a δ-GLMB RFS is given by

\pi(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{(I, \xi) \in \mathcal{F}(\mathbb{L}) \times \Xi} w^{(I, \xi)}\, \delta_I(\mathcal{L}(\mathbf{X})) \left[p^{(\xi)}\right]^{\mathbf{X}}  (13)

and its cardinality distribution follows

\rho_{\delta\text{-GLMB}}(n) = \sum_{(I, \xi) \in \mathcal{F}_n(\mathbb{L}) \times \Xi} w^{(I, \xi)}.  (14)

Obviously, an LMB RFS is a special case of a δ-GLMB RFS with only one single component, i.e. p^{(\xi)}(x, \ell) = p^{(\ell)}(x):

\pi(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{I \in \mathcal{F}(\mathbb{L})} w(I)\, \delta_I(\mathcal{L}(\mathbf{X}))\, p^{\mathbf{X}},  (15)

where the weights of the components follow (4).

III. δ-GENERALIZED LABELED MULTI-BERNOULLI FILTER
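The δ-GLMB cardinality distribution (14) above only requires summing hypothesis weights by the size of their label sets. A small hedged Python sketch (the data layout is an assumption for illustration):

```python
from collections import defaultdict

def delta_glmb_cardinality(hypotheses):
    """Cardinality pmf of a delta-GLMB density, cf. (14): the
    probability of n objects is the total weight of all hypotheses
    (I, xi) whose label set I contains exactly n labels.
    `hypotheses` is a list of (label_set, weight) pairs with
    weights summing to one."""
    rho = defaultdict(float)
    for labels, weight in hypotheses:
        rho[len(labels)] += weight
    return [rho[n] for n in range(max(rho) + 1)]

# a bimodal example: either one of two tracks exists, or both do
rho = delta_glmb_cardinality([({1}, 0.3), ({2}, 0.2), ({1, 2}, 0.5)])
# rho = [0.0, 0.5, 0.5]
```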
The δ-GLMB filter was introduced in [11], where it is shown that GLMBs and δ-GLMBs are conjugate priors with respect to the prediction and update equations of the multi-object Bayes filter [3]. Note that the number of components increases due to the association uncertainty.

A. Prediction
The prediction of a δ-generalized labeled multi-Bernoulli of the form (13) to the time of the next measurement is given by

\pi_+(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{(I_+, \xi) \in \mathcal{F}(\mathbb{L}_+) \times \Xi} w_+^{(I_+, \xi)}\, \delta_{I_+}(\mathcal{L}(\mathbf{X})) \left[p_+^{(\xi)}\right]^{\mathbf{X}},  (16)

where

w_+^{(I_+, \xi)} = w_B(I_+ \cap \mathbb{B})\, w_S^{(\xi)}(I_+ \cap \mathbb{L}),  (17)
p_+^{(\xi)}(x, \ell) = 1_{\mathbb{L}}(\ell)\, p_S^{(\xi)}(x, \ell) + 1_{\mathbb{B}}(\ell)\, p_B(x, \ell),  (18)
p_S^{(\xi)}(x, \ell) = \frac{\langle p_S(\cdot, \ell) f(x|\cdot, \ell),\, p^{(\xi)}(\cdot, \ell) \rangle}{\eta_S^{(\xi)}(\ell)},  (19)
\eta_S^{(\xi)}(\ell) = \int \langle p_S(\cdot, \ell) f(x|\cdot, \ell),\, p^{(\xi)}(\cdot, \ell) \rangle\, \mathrm{d}x,  (20)
w_S^{(\xi)}(L) = \left[\eta_S^{(\xi)}\right]^L \sum_{I \subseteq \mathbb{L}} 1_I(L) \left[q_S^{(\xi)}\right]^{I - L} w^{(I, \xi)},  (21)
q_S^{(\xi)}(\ell) = \langle q_S(\cdot, \ell),\, p^{(\xi)}(\cdot, \ell) \rangle.  (22)

In (16)-(22), w_B(·) is the weight of the birth labels I_+ ∩ 𝔹 and w_S^{(ξ)}(·) of the surviving labels I_+ ∩ 𝕃. Further, p_B(·,·) is the density of new-born objects and p_S^{(ξ)}(·,·) of surviving objects, depending on the transition density f(x|·, ℓ) weighted by the probability of survival p_S(·, ℓ) and the prior density p^{(ξ)}(·, ℓ). Besides, ⟨f, g⟩ = ∫ f(x) g(x) dx denotes the inner product, η_S^{(ξ)}(ℓ) is a normalization constant, and q_S(·, ℓ) = 1 − p_S(·, ℓ) the probability that a track disappears.

B. Update
The posterior density after the measurement update of (16) is again a δ-GLMB RFS given by

\pi(\mathbf{X}|Z) = \Delta(\mathbf{X}) \sum_{(I_+, \xi) \in \mathcal{F}(\mathbb{L}_+) \times \Xi} \sum_{\theta \in \Theta} w^{(I_+, \xi, \theta)}(Z)\, \delta_{I_+}(\mathcal{L}(\mathbf{X})) \left[p^{(\xi, \theta)}(\cdot|Z)\right]^{\mathbf{X}},  (23)

where

w^{(I_+, \xi, \theta)}(Z) \propto \delta_{\theta^{-1}(\{0, \dots, |Z|\})}(I_+)\, w_+^{(I_+, \xi)} \left[\eta_Z^{(\xi, \theta)}\right]^{I_+},  (24)
p^{(\xi, \theta)}(x, \ell|Z) = \frac{p_+^{(\xi)}(x, \ell)\, \psi_Z(x, \ell; \theta)}{\eta_Z^{(\xi, \theta)}(\ell)},  (25)
\eta_Z^{(\xi, \theta)}(\ell) = \langle p_+^{(\xi)}(\cdot, \ell),\, \psi_Z(\cdot, \ell; \theta) \rangle,  (26)
\psi_Z(x, \ell; \theta) = \delta_0(\theta(\ell))\, q_D(x, \ell) + (1 - \delta_0(\theta(\ell)))\, \frac{p_D(x, \ell)\, g(z_{\theta(\ell)}|x, \ell)}{\kappa(z_{\theta(\ell)})}.  (27)

In (23)-(27), θ ∈ Θ : I_+ → {0, 1, ..., |Z|} associates track labels to measurements, where θ(i) = 0 represents a missed detection and θ(i) = θ(j) > 0 implies i = j. Note that the posterior sets of track labels correspond to the predicted sets of track labels, i.e. I = I_+. Here, w^{(I_+, ξ, θ)} is the updated weight of a hypothesis (I_+, ξ, θ). Further, η_Z^{(ξ, θ)}(ℓ) is a normalization constant and ψ_Z(x, ℓ; θ) is the measurement likelihood. The likelihood depends on the probability of a missed detection q_D(x, ℓ) = 1 − p_D(x, ℓ) at (x, ℓ) and the spatial likelihood g(z_{θ(ℓ)}|x, ℓ) weighted by the detection probability p_D(x, ℓ) at (x, ℓ). κ(z_{θ(ℓ)}) = λ_c c(z) models the intensity of Poisson clutter.

IV. LABELED MULTI-BERNOULLI FILTER
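The growth of components in the update (23)-(24) above can be made concrete by enumerating, for one predicted hypothesis, all valid association maps θ and normalizing the resulting weights. A hedged Python sketch; the single-track likelihoods η are assumed precomputed, and all names are illustrative:

```python
import itertools
import numpy as np

def enumerate_associations(eta, w_plus=1.0):
    """Enumerate valid association maps theta for one predicted
    hypothesis and return the normalized posterior weights, cf. (24).
    eta[i][j] is the single-track likelihood of track i under
    association j, where j = 0 encodes a missed detection and
    j = 1..m the measurements. A map is valid if no measurement is
    claimed by two tracks (theta(i) = theta(j) > 0 implies i = j)."""
    n_tracks, m_plus_1 = eta.shape
    thetas, weights = [], []
    for theta in itertools.product(range(m_plus_1), repeat=n_tracks):
        claimed = [j for j in theta if j > 0]
        if len(claimed) != len(set(claimed)):
            continue  # a measurement was claimed twice -> invalid map
        thetas.append(theta)
        weights.append(w_plus * np.prod([eta[i, j] for i, j in enumerate(theta)]))
    weights = np.asarray(weights)
    return thetas, weights / weights.sum()

# two tracks, one measurement: three valid maps remain
eta = np.array([[0.2, 0.8],   # track 0: miss / measurement 1
                [0.3, 0.7]])  # track 1: miss / measurement 1
thetas, w = enumerate_associations(eta)
# thetas = [(0, 0), (0, 1), (1, 0)]
```

This also illustrates the note above: even one predicted hypothesis spawns several posterior components due to the association uncertainty.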
The Labeled Multi-Bernoulli (LMB) filter was proposed in [12] and is intended as a fast and accurate approximation of the δ-GLMB filter. While an LMB RFS is a conjugate prior with respect to the prediction equations, the filter update requires a transformation to δ-GLMB form and a subsequent approximation.

A. Prediction
For a multi-object posterior LMB RFS with parameter set π = {(r^{(ℓ)}, p^{(ℓ)})}_{ℓ∈𝕃} on 𝕏 × 𝕃 and a multi-object LMB birth density π_B = {(r_B^{(ℓ)}, p_B^{(ℓ)})}_{ℓ∈𝔹} on 𝕏 × 𝔹, the multi-object prediction is also an LMB RFS with state space 𝕏 and finite label space 𝕃_+ = 𝔹 ∪ 𝕃 and is given by

\pi_+(\mathbf{X}) = \left\{\left(r_{+,S}^{(\ell)}, p_{+,S}^{(\ell)}\right)\right\}_{\ell \in \mathbb{L}} \cup \left\{\left(r_B^{(\ell)}, p_B^{(\ell)}\right)\right\}_{\ell \in \mathbb{B}},  (28)

where

r_{+,S}^{(\ell)} = \eta_S(\ell)\, r^{(\ell)},  (29)
p_{+,S}^{(\ell)} = \frac{\langle p_S(\cdot, \ell) f(x|\cdot, \ell),\, p(\cdot, \ell) \rangle}{\eta_S(\ell)},  (30)
\eta_S(\ell) = \int \langle p_S(\cdot, \ell) f(x|\cdot, \ell),\, p(\cdot, \ell) \rangle\, \mathrm{d}x.  (31)

In (28)-(31), p_S(·, ℓ) denotes the state dependent survival probability and f(x|·, ℓ) the single-target transition density for track ℓ. Further, η_S(ℓ) is a normalization constant.

B. Update
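For a linear-Gaussian transition model and a state-independent survival probability, the LMB prediction (29)-(31) above reduces to a Kalman prediction per track with η_S(ℓ) = p_S. A hedged sketch (all names and values are illustrative):

```python
import numpy as np

def lmb_predict_track(r, mean, cov, F, Q, p_s):
    """LMB prediction of one surviving track, cf. (29)-(31), assuming
    f(x|x') = N(x; F x', Q) and a constant survival probability p_s,
    so the normalization constant eta_S equals p_s and the spatial
    density stays Gaussian with Kalman-predicted moments."""
    r_plus = p_s * r                 # (29)
    mean_plus = F @ mean             # (30): Chapman-Kolmogorov integral
    cov_plus = F @ cov @ F.T + Q
    return r_plus, mean_plus, cov_plus

# 1-D constant-velocity example with cycle time T = 1
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
Q = 0.1 * np.eye(2)
r_plus, m_plus, P_plus = lmb_predict_track(
    r=0.9, mean=np.array([0.0, 2.0]), cov=np.eye(2), F=F, Q=Q, p_s=0.95)
# r_plus = 0.855, predicted position = 2.0
```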
Since an LMB RFS is not a conjugate prior with respect to the measurement update of the multi-object Bayes filter, the LMB filter update transforms the predicted LMB RFS to a corresponding δ-GLMB RFS using (15) and subsequently applies the δ-GLMB update. In order to reduce the computational complexity, the LMB components and the measurements are partitioned into approximately statistically independent groups (see [12] for a detailed explanation).

In order to close the LMB filter recursion, an approximation of the updated δ-GLMB distribution using an LMB RFS is required, i.e.

\pi(\mathbf{X}) \approx \tilde{\pi}(\mathbf{X}) = \left\{\left(r^{(\ell)}, p^{(\ell)}\right)\right\}_{\ell \in \mathbb{L}}.  (32)

The parameters of the LMB RFS are obtained from the updated δ-GLMB components using

r^{(\ell)} = \sum_{(I_+, \theta) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_{I_+}} w^{(I_+, \theta)}(Z)\, 1_{I_+}(\ell),  (33)

p^{(\ell)}(x) = \frac{1}{r^{(\ell)}} \sum_{(I_+, \theta) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_{I_+}} w^{(I_+, \theta)}(Z)\, 1_{I_+}(\ell)\, p^{(\theta)}(x, \ell|Z),  (34)

where Θ_{I_+} denotes the space of mappings. The weights w^{(I_+, θ)} and the densities p^{(θ)} are computed using (23)-(27) by setting ξ = ∅.

The LMB approximation does not affect the spatial distributions of the individual tracks, but due to the assumption of statistically independent tracks within an LMB RFS, the cardinality distribution of the approximation may differ, whereas the mean cardinality is identical (see [12] for additional details).

V. SWITCHING CRITERIA
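The LMB approximation (33)-(34) above can be sketched as follows; for brevity, the per-hypothesis spatial densities are represented only by their mixture weights (an illustrative simplification, not the full Gaussian mixture bookkeeping of [12]):

```python
def lmb_from_delta_glmb(hypotheses):
    """LMB approximation of a delta-GLMB posterior, cf. (33)-(34).
    `hypotheses` is a list of (label_set, weight) pairs. For each
    label, the existence probability is the total weight of the
    hypotheses containing it, and the spatial density becomes the
    correspondingly weighted mixture (returned here as normalized
    mixture weights per contributing hypothesis index)."""
    labels = set().union(*(I for I, _ in hypotheses))
    lmb = {}
    for ell in labels:
        r = sum(w for I, w in hypotheses if ell in I)       # (33)
        mixture = [(idx, w / r) for idx, (I, w) in enumerate(hypotheses)
                   if ell in I]                             # (34)
        lmb[ell] = (r, mixture)
    return lmb

lmb = lmb_from_delta_glmb([({1}, 0.3), ({1, 2}, 0.5), ({2}, 0.2)])
# label 1: r = 0.8, label 2: r = 0.7
```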
The aim of the ALMB filter is to switch automatically between the LMB approximation, facilitating a fast propagation of the density, and the more accurate δ-GLMB density. In the following, two switching criteria are introduced, where the Kullback-Leibler criterion evaluates the approximation error of the LMB approximation and the entropy criterion considers the data association uncertainty.

A. Kullback-Leibler Criterion
In [12], it was shown that the LMB approximation loses information about the cardinality distribution. Thus, an intuitive way to detect the information loss is to examine the difference between the posterior cardinality distribution of the δ-GLMB RFS and its LMB approximation. The Kullback-Leibler (KL) divergence [17] is a measure to compare two (discrete) probability distributions P and Q:

D_{\mathrm{KL}}(P \| Q) = \sum_i P(i) \cdot \log \frac{P(i)}{Q(i)},  (35)

where Q is the approximation of P and the i-th term is zero if P(i) = 0, since

\lim_{x \to 0} x \log(x) = 0.  (36)

The cardinality distributions of the δ-GLMB RFS ρ_{δ-GLMB}(n) and its LMB approximation ρ_{LMB}(n) are given by (14) and (6), hence the Kullback-Leibler criterion calculates

D_{\mathrm{KL}}(\pi(\mathbf{X})) = D_{\mathrm{KL}}(\rho_{\delta\text{-GLMB}} \| \rho_{\mathrm{LMB}}),  (37)

where π(X) denotes the updated δ-GLMB RFS, which facilitates the calculation of its LMB approximation. For D_KL = 0, the cardinality distributions are identical, i.e. the LMB approximation causes no information loss in the cardinality distribution. D_KL > 0 implies that ρ_LMB differs from ρ_{δ-GLMB}, where a large value of D_KL can be interpreted as a big difference between the cardinality distributions, i.e., a large approximation error.

B. Entropy Criterion
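The Kullback-Leibler criterion (35)-(37) above can be implemented directly, taking care of the x log x → 0 limit (36). A hedged sketch with illustrative example values:

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D_KL(P || Q), cf. (35);
    terms with P(i) = 0 contribute zero by the limit (36)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# bimodal delta-GLMB cardinality (either zero or two objects) versus
# the unimodal cardinality of its LMB approximation with two tracks
# of existence probability r = 0.5 each; the means coincide, but the
# bimodality is lost and D_KL = log 2 flags the approximation error
rho_glmb = [0.5, 0.0, 0.5]
rho_lmb = [0.25, 0.5, 0.25]
d = kl_divergence(rho_glmb, rho_lmb)
```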
The δ-GLMB RFS comprises several hypotheses to capture the data association uncertainties, whereas the LMB RFS captures the association uncertainty within the spatial distribution of an individual track. Depending on the parameters and the post-processing of the spatial distributions, the δ-GLMB representation is in general more accurate in challenging situations. For example, two Gaussian components obtained by different track-to-measurement associations may be merged in a Gaussian Mixture (GM) LMB filter, which results in a loss of information compared to the δ-GLMB representation holding the two associations in different hypotheses.

The Kullback-Leibler criterion does not detect challenging situations with ambiguous data association if the cardinality distributions are identical. Hence, a measure for the data association uncertainty is required which enables a switching to the more accurate δ-GLMB representation in these situations. The entropy [18] is a measure of the unpredictability of information content and is widely used in information theory. In this contribution, the entropy is used to evaluate the track-to-measurement association. Following [18], the entropy is

H(P) = -\sum_i P(x_i) \log P(x_i),  (38)

where P(x_i) is the probability that the event x_i occurs. Obviously, for small or large values P(x_i), the entropy is small. This fact can be used to detect ambiguous data associations.

The association matrix of tracks to measurements is given by

A = \begin{bmatrix} r^{(\ell_1, z_1)} & r^{(\ell_1, z_2)} & \cdots & r^{(\ell_1, z_m)} \\ r^{(\ell_2, z_1)} & r^{(\ell_2, z_2)} & \cdots & r^{(\ell_2, z_m)} \\ \vdots & \vdots & \ddots & \vdots \\ r^{(\ell_n, z_1)} & r^{(\ell_n, z_2)} & \cdots & r^{(\ell_n, z_m)} \end{bmatrix},  (39)

where

r^{(\ell_i, z_j)} = \sum_{(I_+, \theta) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_{I_+}} w^{(I_+, \theta)}(Z)\, 1_{I_+}(\ell_i)\, \delta_{\theta(\ell_i)}(j)  (40)

is the probability that track ℓ_i is assigned to measurement z_j. Further, θ ∈ Θ_{I_+} : I_+ → {0, 1, ..., |Z|} is a mapping of labels to measurements such that θ(i) = θ(j) > 0 implies i = j.

An unambiguous assignment of measurement z_j is characterized by the column vector a_j with one value r^{(ℓ_i, z_j)} ≈ 1 and all other values r^{(ℓ_k, z_j)} ≈ 0, k = 1, ..., i − 1, i + 1, ..., n. With the above-mentioned property of entropy, such a column vector results in a small value H(a_j). Hence, the entropy for a distribution π(X) is

H(\pi(\mathbf{X})) = \sum_{j=1}^{m} H(a_j),  (41)

where a small value indicates an unambiguous data association and a large value represents an uncertain track-to-measurement assignment.

VI. THE ADAPTIVE LABELED MULTI-BERNOULLI FILTER
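The entropy criterion (38)-(41) above amounts to summing the column entropies of the association matrix A. A hedged Python sketch (matrix values are illustrative):

```python
import math

def association_entropy(A):
    """Entropy criterion, cf. (41): the sum of the entropies H(a_j)
    of the columns a_j of the track-to-measurement association
    matrix A (a list of rows), where A[i][j] is the probability that
    track i is assigned to measurement j. Zero entries contribute
    nothing, by the same limit as in (36)."""
    n, m = len(A), len(A[0])
    total = 0.0
    for j in range(m):
        total += -sum(A[i][j] * math.log(A[i][j])
                      for i in range(n) if A[i][j] > 0)
    return total

# unambiguous column (one dominant track) versus ambiguous column
h_clear = association_entropy([[0.99], [0.01]])
h_ambiguous = association_entropy([[0.5], [0.5]])
# h_ambiguous = log 2 is much larger than h_clear
```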
The δ-GLMB filter of [11] is shown in [13], [19], [20] to outperform the LMB filter [12] in challenging scenarios, e.g. containing closely spaced objects in combination with missed detections and false alarms, at the cost of a significantly higher computational complexity. The main idea of the Adaptive Labeled Multi-Bernoulli (ALMB) filter proposed in this section is to combine the advantages of the δ-GLMB filter in critical situations with the efficiency of the LMB filter. Thus, the ALMB filter uses the δ-GLMB distribution to represent tracks in critical situations as accurately as possible and uses LMB distributions for the representation of all other tracks.

Fig. 1. Scheme of the proposed ALMB filter (birth model, prediction, measurements, merging, update, pruning, splitting, and track extraction).

Using the assumption that well separated objects are statistically independent of each other, the ALMB filter uses several independent multi-object distributions in LMB and δ-GLMB form to represent the environment, i.e.

\pi_{\delta\text{-GLMB}}(\mathbf{X}^{(\delta)}) = \left\{\pi_{\delta\text{-GLMB}}^{(i)}(\mathbf{X}^{(i)})\right\}_{i=1}^{n_\delta},  (42)

\pi_{\mathrm{LMB}}(\mathbf{X}^{(L)}) = \left\{\pi_{\mathrm{LMB}}^{(i)}(\mathbf{X}^{(i)})\right\}_{i=1}^{n_L},  (43)

where n_δ is the number of δ-GLMB distributions and n_L denotes the number of LMB distributions. Consequently, the multi-object posterior density is given by the set

\pi(\mathbf{X}) = \left\{\pi_{\delta\text{-GLMB}}(\mathbf{X}^{(\delta)}),\, \pi_{\mathrm{LMB}}(\mathbf{X}^{(L)})\right\}.  (44)

Fig. 1 illustrates the scheme of the ALMB filter, which propagates the density (44) over time. Obviously, each component of the ALMB filter is required to be able to handle both representations, LMB and δ-GLMB. In the following, the individual components of the ALMB filter are presented in detail.

A. Birth Model
The birth model is responsible for initializing new tracks. The ALMB filter may use a static birth model [11] requiring known birth locations or an adaptive birth model [12], [13] which facilitates the appearance of objects anywhere in the state space. Due to the structure of the ALMB filter, several new-born objects may not be represented using a single LMB distribution, since this would require an additional splitting of distributions before the measurement update. Hence, each new-born object ℓ_B is represented by an individual LMB distribution consisting of a single component

\pi_B^{(\ell_B)} = \left\{\left(r_B^{(\ell_B)}(z),\, p_B^{(\ell_B)}(x)\right)\right\}.  (45)

B. Prediction
In the prediction step, the ALMB filter predicts each distribution π_{δ-GLMB}^{(i)}(X^{(i)}), i = 1, ..., n_δ, using the standard δ-GLMB prediction equations. Further, the standard LMB prediction is applied to each π_{LMB}^{(i)}(X^{(i)}), i = 1, ..., n_L.

C. Measurements
Since the ALMB filter holds multiple multi-object distributions at the same time, not every received measurement affects each multi-object density. Hence, the measurements module performs a gating procedure for the distributions and the set of measurements. Obviously, this gating procedure is important for a parallel execution of the update step in the manner of [12].

The measurements module always performs the assignment of observations z^{(i)} ∈ Z based on the LMB distribution. Consequently, a δ-GLMB RFS π_{δ-GLMB}^{(i)}(X^{(i)}) has to be approximated by an LMB RFS \tilde{π}_{LMB}^{(i)}(X^{(i)}) according to (32). This temporary approximation facilitates a faster observation-to-distribution association. Since the spatial distribution of the LMB approximation is equivalent [12], the temporary approximation does not influence the gating result.

After that, the gating examines whether a received observation z^{(i)} affects track ℓ of the LMB distribution using

d_{\mathrm{MHD}}(\hat{z}_+^{(\ell)}, z^{(i)}) < \sqrt{\gamma_z},  (46)

where \hat{z}_+^{(\ell)} is the predicted measurement of track ℓ and γ_z is the gating distance threshold. The value of γ_z depends on the desired σ-gate of the confidence interval and can be calculated using the inverse chi-squared cumulative distribution function.

D. Merging
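The gating test (46) above can be sketched in a few lines; for a two-dimensional measurement the inverse chi-squared CDF even has a closed form, so the gate γ_z needs no library support (all names and values are illustrative):

```python
import math
import numpy as np

def in_gate(z_pred, S, z, gamma_z):
    """Gating test, cf. (46): accept measurement z for a track if the
    squared Mahalanobis distance between z and the predicted
    measurement z_pred (with innovation covariance S) is below
    gamma_z, i.e. d_MHD < sqrt(gamma_z)."""
    nu = z - z_pred
    d2 = float(nu @ np.linalg.solve(S, nu))
    return d2 < gamma_z

# for 2 degrees of freedom the chi-squared CDF is 1 - exp(-x/2),
# so the 99% gate follows in closed form:
gamma_z = -2.0 * math.log(1.0 - 0.99)  # about 9.21

S = np.diag([100.0, 100.0])            # e.g. 10 m measurement noise per axis
accept = in_gate(np.array([0.0, 0.0]), S, np.array([5.0, 5.0]), gamma_z)
reject = in_gate(np.array([0.0, 0.0]), S, np.array([50.0, 50.0]), gamma_z)
```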
Due to the new-born objects represented by individual LMB RFSs and the prediction of the existing multi-object densities, it is possible that distributions influence each other in the update step. Hence, the merging combines all multi-object densities with common measurements.

The measurements module assigned a set of measurements Z^{(i)} to the predicted multi-object density π_+^{(i)}. Consequently, two predicted densities π_+^{(i)} and π_+^{(j)} with common measurements

Z^{(i)} \cap Z^{(j)} \neq \emptyset  (47)

have to be merged into a single multi-object distribution π_+^{(i,j)}. The merging itself depends on the representations of π_+^{(i)} and π_+^{(j)} and is introduced in the following.
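The rule (47) induces a grouping of the predicted densities: densities connected through shared measurements, directly or via a chain of overlaps, end up in one merged distribution. A hedged sketch using simple connected-component grouping (the data layout is an assumption for illustration):

```python
def merge_groups(assigned):
    """Group predicted densities that must be merged, cf. (47):
    two densities belong to the same group if their assigned
    measurement sets overlap, directly or through a chain of
    overlapping densities. `assigned` maps a density index to its
    set of gated measurements."""
    groups = []  # list of (density_indices, measurement_set) pairs
    for idx, meas in assigned.items():
        meas = set(meas)
        merged = {idx}
        for group in groups[:]:
            if group[1] & meas:        # common measurements -> merge
                merged |= group[0]
                meas |= group[1]
                groups.remove(group)
        groups.append((merged, meas))
    return [sorted(g[0]) for g in groups]

groups = merge_groups({0: {"z1"}, 1: {"z1", "z2"}, 2: {"z3"}})
# densities 0 and 1 share z1 and are merged; density 2 stays alone
```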
1) Merging of LMBs: The merging of two LMB RFSs is used as in [12] after the parallel group update step. Two LMB densities π^{(i)}(X^{(i)}) and π^{(j)}(X^{(j)}) are merged to the distribution

\pi^{(i,j)}(\mathbf{X}^{(i,j)}) = \pi^{(i)}(\mathbf{X}^{(i)}) \cup \pi^{(j)}(\mathbf{X}^{(j)}).  (48)

2) Merging of δ-GLMBs: The merging of two δ-GLMB RFSs is not as simple as the LMB merging, because all combinations of the hypotheses of the two RFSs have to be considered. To calculate the combined δ-GLMB RFS, each component of π_{δ-GLMB}^{(i)}(X^{(i)}) has to be multiplied with each component of π_{δ-GLMB}^{(j)}(X^{(j)}), resulting in

\pi^{(i,j)}(\mathbf{X}^{(i,j)}) = \Delta(\mathbf{X}^{(i,j)}) \sum_{(I, \xi) \in \mathcal{F}(\mathbb{L}^{(i)}) \times \Xi^{(i)}} w^{(I, \xi)}\, \delta_I(\mathcal{L}(\mathbf{X}^{(i)})) \left[p^{(\xi)}\right]^{\mathbf{X}^{(i)}}  (49)
\times \sum_{(\tilde{I}, \tilde{\xi}) \in \mathcal{F}(\mathbb{L}^{(j)}) \times \Xi^{(j)}} w^{(\tilde{I}, \tilde{\xi})}\, \delta_{\tilde{I}}(\mathcal{L}(\mathbf{X}^{(j)})) \left[p^{(\tilde{\xi})}\right]^{\mathbf{X}^{(j)}}.  (50)

Hence, the merged number of components is given by the product of the individual numbers of components.

3) Merging of an LMB with a δ-GLMB: The merging of an LMB RFS with a δ-GLMB RFS always results in a δ-GLMB. First, the LMB π_{LMB}^{(i)}(X) is transformed into a corresponding δ-GLMB π_{δ-GLMB}^{(i)}(X) using (15). Due to the fact that both densities are now in δ-GLMB form, the δ-GLMB merging according to (50) is used to merge the densities.

E. Update
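The product structure of the δ-GLMB merging (49)-(50) above can be sketched directly: every pair of hypotheses yields one merged hypothesis whose label set is the union of the two (disjoint) label sets and whose weight is the product of the two weights. A simplified sketch that ignores the association histories ξ:

```python
import itertools

def merge_delta_glmb(h1, h2):
    """Merging of two independent delta-GLMB densities, cf. (49)-(50):
    one merged hypothesis per pair of components, with the union of
    the (disjoint) label sets and the product of the weights, so the
    component count is the product of the individual counts."""
    return [(I | J, w1 * w2)
            for (I, w1), (J, w2) in itertools.product(h1, h2)]

merged = merge_delta_glmb([({1}, 0.6), ({1, 2}, 0.4)],
                          [({3}, 0.7), (set(), 0.3)])
# 2 x 2 = 4 merged hypotheses whose weights still sum to one
```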
According to the representation of the predicted multi-object density, the ALMB filter performs either a δ-GLMB or an LMB update, which are given by (23) and (32), respectively. Note that in case of the LMB update, the resulting δ-GLMB density of the update step additionally has to be stored for the following steps.

After the update, the ALMB filter uses the criteria presented in Section V to decide whether a switching of the multi-object representation is necessary. An updated LMB RFS indicates a noncritical situation before the update, but it is possible that the correction step changed this. Therefore, the filter examines the cardinality distributions of the LMB approximation \tilde{π}(X) and the δ-GLMB posterior π(X) using the Kullback-Leibler criterion (see Section V-A) to detect an information loss, and the entropy criterion (see Section V-B) to handle ambiguous track-to-measurement associations. If one of the criteria detects a critical situation, i.e. the KL divergence or the entropy exceeds an application-specific threshold, the filter replaces the LMB approximation by the δ-GLMB RFS obtained during the filter update and propagates the incorporated tracks using the δ-GLMB filter in the next filter cycle.

If a loss of information caused the switching, the filter uses the KL divergence to examine whether the critical situation is resolved; otherwise, the entropy criterion is used. This implies that only the criterion which detected the critical situation can trigger switching back to propagating a set of tracks using an LMB RFS. Once the KL divergence or the entropy falls below the threshold, the challenging situation is considered resolved.

F. Pruning
Both the LMB update and the δ-GLMB update produce components with negligible influence. To reduce the computational cost, the pruning removes these components.

The LMB pruning removes all tracks ℓ with a marginal existence probability r^{(ℓ)}, so the resulting LMB RFS is

\tilde{\pi}_{\mathrm{LMB}}^{(i)} = \left\{\left(r^{(\ell)}, p^{(\ell)}\right) : r^{(\ell)} > \mu_r\right\}_{\ell \in \mathbb{L}^{(i)}},  (51)

where μ_r represents the application dependent minimum existence probability.

The δ-GLMB pruning removes all hypotheses (I, ξ, θ) with insignificant weight w^{(I, ξ, θ)}, which leads to

\tilde{\pi}_{\delta\text{-GLMB}}^{(i)} = \left\{(I, \xi, \theta) : w^{(I, \xi, \theta)} > \mu_w\right\}_{I \in \mathcal{F}(\mathbb{L}^{(i)})}  (52)

using the threshold μ_w.

G. Splitting
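Both pruning steps (51)-(52) above are simple threshold filters. A hedged sketch (thresholds are illustrative; the δ-GLMB weights are renormalized after pruning, which the formulas leave implicit):

```python
def prune_lmb(tracks, mu_r):
    """LMB pruning, cf. (51): keep only tracks whose existence
    probability exceeds mu_r. `tracks` maps labels to (r, p) pairs."""
    return {ell: (r, p) for ell, (r, p) in tracks.items() if r > mu_r}

def prune_delta_glmb(hypotheses, mu_w):
    """delta-GLMB pruning, cf. (52): drop hypotheses whose weight is
    below mu_w and renormalize the remaining weights."""
    kept = [(I, w) for I, w in hypotheses if w > mu_w]
    total = sum(w for _, w in kept)
    return [(I, w / total) for I, w in kept]

tracks = prune_lmb({1: (0.9, "p1"), 2: (0.001, "p2")}, mu_r=0.01)
hyps = prune_delta_glmb([({1}, 0.98), ({1, 2}, 0.02)], mu_w=0.05)
# only track 1 and the single dominant hypothesis survive
```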
In the course of time, it is possible that tracks in a density move apart, so that the RFS can be split into multiple smaller distributions of the same type. The splitting uses the grouping algorithm of [12]. Therefore, a δ-GLMB RFS is temporarily approximated by an LMB RFS according to (32) during the splitting procedure. Then, the partitioning scheme is applied to find a possible splitting of the RFS.

The module splits an LMB distribution π^{(i)} into multiple new densities π^{(j)} such that

\pi^{(i)} = \bigcup_{j=1}^{N} \pi^{(j)},  (53)

where N is the number of identified object groups.

The δ-GLMB splitting uses the labels of the tracks in the groups to create several new δ-GLMB RFSs. The split δ-GLMB densities π^{(j)} only approximate the original distribution π^{(i)}, because during the splitting, hypotheses containing labels of two different groups are divided into new hypotheses which only contain the labels according to the new distribution. Since the influence of tracks from different groups is marginal, the resulting approximation error is negligible.

H. Track Extraction
The track extraction decides whether a track with label ℓ exists or not by using the existence probability r^{(ℓ)}. An LMB density

\pi^{(j)} = \left\{\left(r^{(\ell)}, p^{(\ell)}\right)\right\}_{\ell \in \mathbb{L}^{(j)}}  (54)

implicitly contains the existence probability. According to [12], the extraction of the tracks is

\hat{\mathbf{X}} = \left\{(\hat{x}, \ell) : r^{(\ell)} > \vartheta\right\},  (55)

where the parameter ϑ is an application specific threshold and \hat{x} = \arg\max_x p^{(\ell)}(x).

Since a δ-GLMB density does not contain the existence probability, the module has to calculate this value. Following [12], the existence probability for a track ℓ is given by

r^{(\ell)} = \sum_{(I, \theta) \in \mathcal{F}(\mathbb{L}) \times \Theta_I} w^{(I, \theta)}(Z)\, 1_I(\ell),  (56)

where w^{(I, θ)} is the weight of the corresponding hypothesis (I, θ). Afterwards, the extraction uses (55) to choose the existing tracks.

VII. RESULTS
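The δ-GLMB track extraction described above first marginalizes the existence probabilities via (56) and then applies the threshold test (55). A hedged sketch in which the MAP state estimates are assumed precomputed per label (all names are illustrative):

```python
def extract_tracks(hypotheses, map_estimates, vartheta):
    """Track extraction from a delta-GLMB density: per label, the
    existence probability is the total weight of the hypotheses
    containing it, cf. (56); labels with r > vartheta are reported
    together with their (precomputed) MAP state estimate, cf. (55)."""
    labels = set().union(*(I for I, _ in hypotheses))
    extracted = {}
    for ell in labels:
        r = sum(w for I, w in hypotheses if ell in I)   # (56)
        if r > vartheta:                                # (55)
            extracted[ell] = (r, map_estimates[ell])
    return extracted

est = extract_tracks([({1}, 0.6), ({1, 2}, 0.3), ({2}, 0.1)],
                     {1: "x_hat_1", 2: "x_hat_2"}, vartheta=0.5)
# only track 1 (r = 0.9) exceeds the threshold
```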
This section evaluates the ALMB filter and compares it with the LMB filter [12] and the δ-GLMB filter [11]. For the evaluation, a Gaussian mixture (GM) implementation is used. The scenario consists of two targets on a two-dimensional region. The target state x_k = [p_{x,k}, \dot{p}_{x,k}, p_{y,k}, \dot{p}_{y,k}]^T comprises the position and velocity in x and y direction. Measurements are noisy position vectors z_k = [z_{x,k}, z_{y,k}]^T. The clutter measurements are uniformly distributed over the measurement space and their number follows a Poisson distribution with mean value λ_c.

Fig. 2. Ground truth trajectories of the two objects (black lines with end position marked by a triangle), the estimated trajectories (circles/squares) of the LMB filter (above) and the result of the ALMB filter (below).

The state model is a standard constant velocity model where the standard deviation of the process noise for the velocity in x and y direction is given by σ_v = 5 m/s. The cycle time of the sensor is T = 1 s and the standard deviation of the sensor measurements consisting of x and y positions is σ_ε = 10 m. The survival probability and the detection probability of the targets are state independent. Furthermore, the birth density consists of two Bernoulli components π_B^{(i)} = {(r_B^{(i)}(z), p_B^{(i)}(x|z))}, i = 1, 2, with Gaussian spatial densities p_B^{(i)} = N(x; m_B^{(i)}, P_B).

The thresholds for an automatic switching between the LMB and δ-GLMB representation, i.e. for the Kullback-Leibler and the entropy criterion, are application-specific parameters, as are the maximum number of components and the pruning weight of a δ-GLMB density, the minimum existence probability of an LMB density, and the track extraction threshold. An extracted track is represented by the Gaussian component with the highest weight.

Figure 2 shows the true trajectories together with the tracking result of a single run. Obviously, the LMB filter cannot handle the situation in the region where the two trajectories approach each other. In
Fig. 3. OSPAT distances of order p = 1 and cut-off c = 300 for the GM implementation with λ_c = 50 and p_D = 0. (averaged over the MC runs).
Fig. 4. Computation time of the LMB, δ-GLMB and ALMB filter (averaged over the MC runs).

In this critical situation, the data association of tracks to measurements is uncertain and the LMB filter erroneously switches the track labels. In contrast, the ALMB filter successfully detects the critical situation using the criteria from Section V and uses the δ-GLMB representation until the ambiguity is resolved. As a result, the ALMB filter can deal with such situations and does not switch the track labels.

The OSPAT distances [21] in Fig. 3 illustrate the difference between the LMB, δ-GLMB and ALMB filters. In non-critical situations (time k < ), the performance of the LMB and ALMB filters is identical, since the ALMB filter uses the LMB representation. However, LMB and ALMB perform slightly worse than the δ-GLMB filter, which is expected due to the approximations in the update step. In challenging situations with data association uncertainties (time k > ∧ k < ), the ALMB and δ-GLMB filters outperform the LMB filter. The ALMB filter handles the situation in most of the Monte Carlo runs due to the propagation of multiple hypotheses. In contrast, the LMB filter loses too much information due to the LMB approximation and almost always switches the track labels. At time k = 91, the OSPAT increases for all filters due to the disappearance of both tracks and the short delay until both tracks are abandoned; Figure 2 illustrates this as well. After a certain time, the estimation of the tracks matches the ground truth, resulting in a declining OSPAT distance.

Figure 4 shows the computation time of the compared filters. The LMB and ALMB filters significantly outperform the δ-GLMB filter. The execution time of the ALMB filter is almost the same as that of the LMB filter. Only in critical situations (k > ∧ k < ) does the ALMB filter need more time for the calculation, but it nevertheless outperforms the δ-GLMB filter in such situations.
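For reference, the plain OSPA metric underlying these plots can be sketched in a few lines for the parameters used here (order p = 1, cut-off c = 300). This is an illustrative stdlib-only version with a brute-force assignment; it omits the track-label penalty term that the OSPA-T variant of [21] adds:

```python
import math
from itertools import permutations

def ospa(X, Y, c=300.0, p=1):
    """OSPA distance of order p with cut-off c between two finite
    sets of 2-D points (brute-force assignment; fine for small sets)."""
    if len(X) == 0 and len(Y) == 0:
        return 0.0
    if len(X) > len(Y):            # ensure |X| <= |Y|
        X, Y = Y, X
    m, n = len(X), len(Y)
    d = lambda a, b: min(c, math.dist(a, b))  # cut-off base distance
    # best assignment of the m points of X to m points of Y
    best = min(sum(d(x, y) ** p for x, y in zip(X, perm))
               for perm in permutations(Y, m))
    # unassigned points of Y each contribute the cut-off c
    return ((best + (n - m) * c ** p) / n) ** (1 / p)

print(ospa([(0, 0)], [(3, 4)]))  # 5.0: pure localization error
print(ospa([(0, 0)], []))        # 300.0: cardinality error = cut-off
```

This shows why a label switch is so costly under OSPA-T: a swapped label counts toward the penalty term even when the positional estimates are accurate.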
Fig. 5. Ground truth trajectories of a scenario with up to 16 objects, where the start position is marked by a circle and the end position by a triangle.

In [22], a fast implementation of the δ-GLMB filter is proposed. This implementation reduces the execution time of the δ-GLMB filter, but would also speed up the ALMB filter in critical situations. The ALMB filter represents tracks by partitioned multi-Bernoulli RFSs resulting
from the merging and splitting module. Since the groups are assumed to be independent, the prediction, measurement update, pruning, and track extraction modules can be performed in parallel, which also speeds up the algorithm.

Fig. 6. OSPAT distances of order p = 1 and cut-off c = 300 for the GM implementation with λ_c = 25 and p_D = 0. (averaged over the MC runs).

In a second example, the performance of the ALMB filter is evaluated in a scenario with many targets. Figure 5 illustrates the scenario with up to 16 objects, involving birth and death of objects and considering missed detections and clutter measurements. The OSPAT distances in Figure 6 show that the ALMB filter always performs the same as or better than the LMB filter.

VIII. CONCLUSION
This paper has proposed a new efficient multi-target tracking filter based on the Labeled Multi-Bernoulli and the δ-Generalized Labeled Multi-Bernoulli filter. The proposed Adaptive Labeled Multi-Bernoulli filter combines the low computational complexity of the LMB filter with the accuracy of the δ-GLMB filter. With the Kullback-Leibler distance and an intuitive interpretation of the entropy, the filter uses simple mathematical tools to detect challenging situations in a tracking scenario. Since the criteria do not depend on the representation of the spatial distributions, the principles of the ALMB filter may also be used in sequential Monte Carlo implementations as well as in the recently published Gamma Gaussian Inverse Wishart implementation [20]. The modular structure of the ALMB filter further facilitates the replacement of individual components and the extension of the filter with new features.

ACKNOWLEDGMENT
This work is supported by the German Research Foundation (DFG) within the Transregional Collaborative Research Center SFB/TRR 62 "Companion-Technology for Cognitive Technical Systems".

REFERENCES

[1] Y. Bar-Shalom and T. Fortmann, Tracking and Data Association. Academic Press, Inc., 1988.
[2] D. Reid, "An algorithm for tracking multiple targets," IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 843–854, 12 1979.
[3] R. Mahler, Statistical Multisource-Multitarget Information Fusion. Artech House Inc., Norwood, 2007.
[4] ——, "Multitarget Bayes filtering via first-order multitarget moments," IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 4, pp. 1152–1178, 10 2003.
[5] B.-N. Vo and W.-K. Ma, "The Gaussian mixture probability hypothesis density filter," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4091–4104, 11 2006.
[6] H. Sidenbladh and S.-L. Wirkander, "Tracking random sets of vehicles in terrain," in Conference on Computer Vision and Pattern Recognition Workshop, 2003, p. 98.
[7] B.-N. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo methods for multitarget filtering with random finite sets," IEEE Transactions on Aerospace and Electronic Systems, vol. 41, no. 4, pp. 1224–1245, 2005.
[8] R. Mahler, "PHD filters of higher order in target number," IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 4, pp. 1523–1543, 10 2007.
[9] B.-T. Vo, B.-N. Vo, and A. Cantoni, "Analytic implementations of the cardinalized probability hypothesis density filter," IEEE Transactions on Signal Processing, vol. 55, no. 7, pp. 3553–3567, 7 2007.
[10] ——, "The cardinality balanced multi-target multi-Bernoulli filter and its implementations," IEEE Transactions on Signal Processing, vol. 57, no. 2, pp. 409–423, 2 2009.
[11] B.-T. Vo and B.-N. Vo, "Labeled random finite sets and multi-object conjugate priors," IEEE Transactions on Signal Processing, vol. 61, no. 13, pp. 3460–3475, 2013.
[12] S. Reuter, B.-T. Vo, B.-N. Vo, and K. Dietmayer, "The labeled multi-Bernoulli filter," IEEE Transactions on Signal Processing, vol. 62, no. 12, pp. 3246–3260, 2014.
[13] S. Reuter, "Multi-object tracking using random finite sets," Ph.D. dissertation, Ulm University, 2014.
[14] F. Kunz, D. Nuss, J. Wiest, H. Deusch, S. Reuter, F. Gritschneder, A. Scheel, M. Stuebler, M. Bach, P. Hatzelmann, C. Wild, and K. Dietmayer, "Autonomous driving at Ulm University: A modular, robust, and sensor-independent fusion approach," in Intelligent Vehicles Symposium
[17] S. Kullback and R. A. Leibler, "On information and sufficiency," Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79–86, 3 1951.
[18] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 3 1948.
[19] S. Reuter, M. Beard, K. Granström, and K. Dietmayer, "Tracking extended targets in high clutter using a GGIW-LMB filter," in Sensor Data Fusion: Trends, Solutions, Applications (SDF), 2015, 10 2015, pp. 1–6.
[20] M. Beard, S. Reuter, K. Granström, B.-T. Vo, B.-N. Vo, and A. Scheel, "Multiple extended target tracking with labeled random finite sets," IEEE Transactions on Signal Processing, vol. 64, no. 7, pp. 1638–1653, April 2016.
[21] B. Ristic, B.-N. Vo, D. Clark, and B.-T. Vo, "A metric for performance evaluation of multi-target tracking algorithms," IEEE Transactions on Signal Processing, vol. 59, no. 7, pp. 3452–3457, 7 2011.
[22] H. G. Hoang, B.-T. Vo, and B.-N. Vo, "A generalized labeled multi-Bernoulli filter implementation using Gibbs sampling," in