Complementary Lipschitz continuity results for the distribution of intersections or unions of independent random sets in finite discrete spaces
John Klein
Univ. Lille, CNRS, Centrale Lille, UMR 9189 - CRIStAL - Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille
Abstract
We prove that intersections and unions of independent random sets in finite spaces achieve a form of Lipschitz continuity. More precisely, given the distribution of a random set Ξ, the function mapping any random set distribution to the distribution of its intersection (under an independence assumption) with Ξ is Lipschitz continuous with unit Lipschitz constant if the space of random set distributions is endowed with a metric defined as the L_k norm distance between inclusion functionals, also known as commonalities. Moreover, the function mapping any random set distribution to the distribution of its union (under an independence assumption) with Ξ is Lipschitz continuous with unit Lipschitz constant if the space of random set distributions is endowed with a metric defined as the L_k norm distance between hitting functionals, also known as plausibilities.

Using the epistemic random set interpretation of belief functions, we also discuss the ability of these distances to yield conflict measures. All the proofs in this paper are derived in the framework of Dempster-Shafer belief functions. Except for the discussion on conflict measures, it is straightforward to transcribe the proofs into the general (not necessarily epistemic) random set terminology.

Keywords: random sets, Lipschitz continuity, belief functions, distance, combination rules, information fusion, conflict, α-junctions.

1 Introduction
When one is interested in a point-valued random variable but has access to set-valued (imprecise) observations of this latter, Dempster [8] proposed to use a probabilistic model relying on multi-valued mappings. This model was further developed by Shafer [40] in a self-contained framework known today as Dempster-Shafer theory, evidence theory or belief function theory. In this framework, the uncertainty on the value of the random variable is equivalently captured (among others) by set functions known as the belief, plausibility and commonality functions. These functions evaluate respectively how likely it is that the imprecise observations imply / are consistent with / are implied by some event. In this paper, we focus on the case where these events are disjunctions of elements of a finite and discrete space.

Besides, belief functions are also known to be formally equivalent to random sets [32] and are interpretable as epistemic ones [3]. A random set is a random element whose realizations are set-valued. The probability masses governing the random set can also be uniquely characterized by set functions that are capacities [28] (non-additive measures). Some of these set functions are:

• the containment functional, which captures the probabilities that a given set contains the random set,
• the hitting functional (or capacity functional), which captures the probabilities that the random set intersects a given set,
• the inclusion functional, which captures the probabilities that a given set is included in the random set.

The above functions are the respective random set terminology for the belief, plausibility and commonality functions.

In the deterministic setting, one can observe a form of consistency between operations like union or intersection and set distances in the sense that, for some of these distances, if one intersects (resp. unites) the same set with two other ones, say X_1 and X_2, then the obtained intersections (resp. unions) are at least as close as X_1 and X_2 were before.

In this paper, we investigate if this observed consistency propagates to some extent to the random setting. We prove that if one intersects (resp. unites) the same random set with two independent random sets Ξ_1 and Ξ_2, then the obtained random intersections (resp. unions) have distributions that are at least as close as the distributions of Ξ_1 and Ξ_2 were before. These results are dependent on the chosen metric for random set distributions. We examine metrics that consist in L_k norm based distances between either hitting or inclusion functionals (these functions are in bijective correspondence with distributions [40]). In addition, the results can be rephrased as Lipschitz continuity (with a unit Lipschitz constant) for functions that map any random set distribution to the distribution of its intersection (resp. union) with a given fixed random set.

In the belief function framework, the closest related works are those of Loudahi et al. [25, 26]. The authors proved that the consistency under study holds between some belief function distances and combination operators that yield the distribution of the intersection or union of independent random sets. The results that we introduce in this paper involve distances that are computationally more tractable than those introduced in [25, 26] but also rely on independence assumptions. In the random set literature, many results regarding unions of i.i.d. random sets and random set metrics are available [30] but they do not address Lipschitz continuity. Also, we do not require the examined random sets to be identically distributed.

Furthermore, building upon recent work from Pichon and Jousselme [35], we also investigate if our results can be instrumental to span new degrees of conflict. We prove that the consistency of a distance with the conjunctive rule makes the corresponding conflict degree compliant with at least one requirement discussed in Destercke and Burger [11].
However, we also show that several distances relying on an L_k norm are not appropriate to yield a degree of conflict as suggested in [35] when k is finite. These last developments focus on information fusion aspects of the belief function framework only and do not generalize to non-epistemic random sets.

This article is organized as follows: section 2 gives necessary background on the theory of belief functions and random sets. Section 3 is an overview of distances between random set distributions, and the sought Lipschitz continuity property is stated. Section 4 contains the main results of the paper, i.e. Lipschitz continuity for the distribution of intersections and unions of independent random sets. Finally, in section 5, we make use of the aforementioned results to investigate if the examined distances can yield relevant degrees of conflict in the belief function framework. All the proofs of the newly introduced results are given in the appendices.

Most of the paper is written using belief function terminology and usual notations in this framework, but it can be transcribed to the random set framework by merely switching the set function names as explained in the first paragraphs of this introduction. Special care was paid to allow easy readability for readers familiar with any of these two frameworks.

2 Background on belief functions and random sets

In this section, some mathematical notations for baseline belief function and random set concepts are given. The reader is expected to be familiar with one of these frameworks. More material on belief function basics is found for instance in [40, 7] and on random sets in [30, 31, 4]. Belief functions can be applied in the context of uncountable spaces [32, 43, 31, 10], but a majority of results were derived in the finite case and we also make this assumption in this article.
A random set in a finite and discrete space Ω = {ω_t}_{t=1}^n is a random element whose realizations are subsets of Ω. When one is interested in a point-valued variable but has access to set-valued (imprecise) observations, one can try to infer the distribution of an epistemic random set [29]. Belief functions are in line with this epistemic interpretation. When one is interested in a set-valued variable and has access to corresponding samples, one can try to infer the distribution of an ontic random set [28]. Both types of uncertainty lead to formally equivalent objects, although these objects need occasionally to be processed and understood in different ways [3].

In the finite (and consequently countable) setting, the distribution of a random set Ξ_i is a set function called mass function and is denoted by m_i. The power set 2^Ω is the set of all subsets of Ω and it is the domain of mass functions. For any A ⊆ Ω, the cardinality of this set is denoted by |A| and we thus have |Ω| = n. The cardinality of 2^Ω is denoted by N = 2^n. Mass functions have [0, 1] as co-domain and they sum to one: Σ_{A⊆Ω} m_i(A) = 1. A focal element of a mass function m_i is a set A ⊆ Ω such that m_i(A) > 0. A mass function whose only focal element is A is called a categorical mass function and is denoted by m_A. A simple mass function is the convex combination of m_Ω with some categorical mass function m_A.

Several alternative set functions are commonly used as equivalent characterizations of Ξ_i. The belief, plausibility and commonality functions of a set A are defined as

bel_i(A) = Σ_{E⊆A, E≠∅} m_i(E),   (1)
pl_i(A) = Σ_{E∩A≠∅} m_i(E),   (2)
q_i(A) = Σ_{E⊇A} m_i(E)   (3)

and respectively represent how likely it is that A contains / intersects / is included in the underlying random set. In the random set literature, these set functions are respectively referred to as the containment, hitting and inclusion functionals. When the empty set has a positive mass, another representation is provided by implicability functions b_i. These functions are closely related to belief and plausibility functions through the following relations: for all A ⊆ Ω,

b_i(A) = bel_i(A) + m_i(∅),   (4)
b_i(A) = 1 − pl_i(A^c).   (5)

Another useful concept is the negation (or complement) m̄_i of a mass function m_i introduced by Dubois and Prade [14]. The function m̄_i is such that ∀A ⊆ Ω, m̄_i(A) = m_i(A^c) with A^c = Ω \ A. The authors also provide a result that will be instrumental in the proof of proposition 4. This result reads

b̄_i(A^c) = q_i(A), ∀A ⊆ Ω,   (6)

where b̄_i denotes the implicability function in correspondence with m̄_i.

Information fusion in the framework of belief functions is performed using an operator mapping an arbitrarily large set of input mass functions to a single output mass function which summarizes all information contained in the input ones. On top of this minimal requirement, the operator must also follow a certain policy in the way that the information encoded in the input mass functions is processed to build the output one.
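As an illustration of the set functions (1)-(3), here is a minimal sketch (not code from the paper; the helper names and the example mass function are my own) showing that each of them is a direct sum over focal elements:

```python
# Computing the belief, plausibility and commonality functions (1)-(3) of a
# mass function on a small frame Omega = {a, b, c}. Subsets are frozensets.
OMEGA = frozenset("abc")

# an example mass function with focal elements {a,b} and Omega
m = {frozenset("ab"): 0.5, OMEGA: 0.5}

def bel(m, A):  # eq. (1): masses of non-empty subsets of A
    return sum(v for E, v in m.items() if E and E <= A)

def pl(m, A):   # eq. (2): masses of focal sets hitting A
    return sum(v for E, v in m.items() if E & A)

def q(m, A):    # eq. (3): masses of supersets of A
    return sum(v for E, v in m.items() if E >= A)

for A in [frozenset("a"), frozenset("c"), OMEGA]:
    print(sorted(A), bel(m, A), pl(m, A), q(m, A))
```

On this example one can check by hand that, e.g., pl(m, {c}) = q(m, {c}) = 1/2, since only the focal element Ω hits (and contains) {c}.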
There are two canonical and dual such policies: conjunctivity and disjunctivity. Suppose ⊑ denotes an informational partial order [46, 14, 9] for mass functions in the sense that one writes m_1 ⊑ m_2 if m_1 contains at least as much (epistemic) information as m_2. Following [13], a fusion operator is conjunctive if its output is more informative than any input. The conjunctive rule operator [44], denoted by ∩○, is defined as follows:

m_1 ∩○ m_2 (E) = Σ_{A,B⊆Ω, A∩B=E} m_1(A) m_2(B), ∀E ⊆ Ω.   (7)

The conjunctive rule is associative and commutative, and the generalization of the above expression to more than two input mass functions is immediate. This rule is the unnormalized version of Dempster's rule [8] and, on the random set side, it can be understood as yielding the distribution of the intersection of two independent random sets (there are several notions of independence for random sets [4, chapter 2]; in this paper, we only consider the usual probabilistic notion, i.e. joint distributions factorizing as the product of their marginals). Obviously, this operator is a generalization of the set intersection as we have m_A ∩○ m_B = m_{A∩B} for any two subsets A and B of Ω. For the sake of equation concision, we adopt the notation m_∩ = m_1 ∩○ m_2. This combination is very simple to compute when dealing with commonality functions:

q_∩(A) = q_1(A) q_2(A), ∀A ⊆ Ω.   (8)

It can be easily proved that the conjunctive rule is conjunctive. Consider the informational partial order based on commonalities ⊑_q, which is defined as

m_1 ⊑_q m_2 ⇔ q_1(E) ≤ q_2(E), ∀E ⊆ Ω.   (9)

Since q_∩ is the elementwise multiplication of q_1 and q_2, we obtain m_∩ ⊑_q m_1 and m_∩ ⊑_q m_2.

When m_2 = m_E, i.e. m_2 is categorical, the result of the conjunctive combination between m_1 and m_2 is referred to as the conditioning of m_1 given E, because this operation is a generalization of probabilistic conditioning (if the focal elements of m_1 are singletons, i.e. the random set is point-valued, then Dempster's conditioning coincides with Bayes' rule). m_1 ∩○ m_E is also denoted by m_{1|E}. The following property of the implicability function w.r.t. conditioning will be instrumental in some proofs:

Lemma 1. For any mass function m_1, any categorical mass function m_E and any subset A ⊆ Ω, we have

b_{1|E}(A) = b_1((E \ A)^c) = b_1(E^c ∪ A).   (10)

To the best of our knowledge, this property is not reported in the belief function literature; we thus provide a proof in B.

As for disjunction, the output is required to be less informative than any input and it is thus considered as an extremely conservative fusion policy. The disjunctive rule operator [44], denoted by ∪○, is defined as follows:

m_1 ∪○ m_2 (E) = Σ_{A,B⊆Ω, A∪B=E} m_1(A) m_2(B), ∀E ⊆ Ω.   (11)

The disjunctive rule is also associative and commutative, and it is a generalization of set union as we have m_A ∪○ m_B = m_{A∪B} for any two subsets A and B of Ω. We denote by m_∪ the result of the combination m_1 ∪○ m_2. On the random set side, m_∪ is understood as the distribution of the union of two independent random sets. The disjunctive combination is very simple to compute when dealing with implicability functions:

b_∪(A) = b_1(A) b_2(A), ∀A ⊆ Ω.   (12)

The disjunctivity of this rule can be proved using the partial order based on implicabilities ⊑_b. This latter reads

m_1 ⊑_b m_2 ⇔ b_1(E) ≥ b_2(E), ∀E ⊆ Ω.   (13)

Since b_∪ is the elementwise multiplication of b_1 and b_2, we obtain the desired conclusion.

The disjunctive rule is related to the conjunctive rule by the following De Morgan relation [14]: for any mass functions m_1 and m_2,

m̄_1 ∩○ m̄_2 = (m_1 ∪○ m_2)‾.   (14)

In K, we mention a more general family of combination rules for belief functions which encompasses the conjunctive and disjunctive rules. These rules are known as α-junctions [41]. Since α-junctions have a more limited impact in the belief function literature than the conjunctive and disjunctive rules, we chose not to mention them in the main body of this article. However, the results that we prove in the next sections do propagate to α-junctions. See K for the corresponding proofs.
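Both rules are finite convolutions over pairs of focal elements. The sketch below (helper names and the numeric example are my own) implements rules (7) and (11) and checks relations (8), (12) and (14) exhaustively on a 3-element frame:

```python
# Conjunctive rule (7) and disjunctive rule (11) on Omega = {a, b, c},
# checking the commonality product (8), the implicability product (12)
# and the De Morgan relation (14).
from itertools import combinations, product

OMEGA = frozenset("abc")
SUBSETS = [frozenset(c) for r in range(4) for c in combinations("abc", r)]

def combine(m1, m2, op):
    out = {}
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        E = op(A, B)
        out[E] = out.get(E, 0.0) + v1 * v2
    return out

conj = lambda m1, m2: combine(m1, m2, frozenset.intersection)
disj = lambda m1, m2: combine(m1, m2, frozenset.union)

def q(m, A):  # commonality: masses of supersets of A
    return sum(v for E, v in m.items() if E >= A)

def b(m, A):  # implicability: masses of subsets of A, empty set included
    return sum(v for E, v in m.items() if E <= A)

def neg(m):   # negation/complement of a mass function
    return {OMEGA - E: v for E, v in m.items()}

m1 = {frozenset("ab"): 0.5, OMEGA: 0.5}
m2 = {frozenset("a"): 0.3, frozenset("bc"): 0.7}

m_conj, m_disj = conj(m1, m2), disj(m1, m2)
for A in SUBSETS:
    assert abs(q(m_conj, A) - q(m1, A) * q(m2, A)) < 1e-12   # eq. (8)
    assert abs(b(m_disj, A) - b(m1, A) * b(m2, A)) < 1e-12   # eq. (12)

# De Morgan (14): conjunction of negations = negation of the disjunction
lhs, rhs = conj(neg(m1), neg(m2)), neg(m_disj)
assert all(abs(lhs.get(A, 0.0) - rhs.get(A, 0.0)) < 1e-12 for A in SUBSETS)
print("relations (8), (12) and (14) hold on this example")
```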
Mass functions can be viewed as vectors belonging to the vector space R^N with categorical mass functions as base vectors. Since mass functions sum to one, the set of mass functions is the simplex M in that vector space whose vertices are the base vectors {m_A}_{A⊆Ω}. This simplex is also called mass space [5] and has finite Lebesgue measure but contains uncountably many mass functions.

Embedding mass functions in a vector space is particularly useful when computing either m_∩ or m_∪ because they can be obtained as the dot product of some matrix with one of the input mass functions (seen as a column vector) [42]. Each such matrix is in one-to-one correspondence with the other input mass function. The vector form of any set function will be denoted using bold characters; for instance, the vector form of a mass function m_i is denoted by **m**_i.

Let S_1 denote the specialization matrix [14] in bijective correspondence with m_1. Each entry of S_1 is given by S_1(A, B) = m_{1|B}(A). From a geometric point of view [6], each column of S_1 corresponds to a vertex of a polytope P_1, called the conditional subspace of m_1. Any mass function m ∈ P_1 is the result of the combination of m_1 with another mass function using ∩○. Most importantly, for any mass functions m_1 and m_2, we have

**m**_∩ = S_1 · **m**_2.   (15)

Let G_1 denote the generalization matrix in bijective correspondence with m_1. Each entry of G_1 is given by G_1(A, B) = m_{1∪B}(A), where m_{1∪B} = m_1 ∪○ m_B. For any mass functions m_1 and m_2, we have

**m**_∪ = G_1 · **m**_2.   (16)

There are also transfer matrices allowing to turn mass functions into commonality or implicability functions using a right-handed dot product. They are presented in more detail in K.

3 Distances between mass functions

In this section, we will first recall the definitions of some existing distances between mass functions.
We focus on (full) metrics and do not discuss dissimilarities [45, 47], which have fewer baseline properties as compared to metrics. The Lipschitz continuity property that we seek will then be stated and its desirability will be justified by analyzing set distances.
A distance, or metric, provides a positive real value assessing the discrepancies between two elements. Let us first give a general definition of such an application when the compared vectors are mass functions:
Definition 1.
Given a domain Ω and its related mass space M, a mapping d : M × M → [0, a] with a ∈ R^+ is a distance between two mass functions m_1 and m_2 defined on Ω if the following properties hold:

• Symmetry: d(m_1, m_2) = d(m_2, m_1),
• Definiteness: d(m_1, m_2) = 0 ⇔ m_1 = m_2,
• Triangle inequality: d(m_1, m_2) ≤ d(m_1, m_3) + d(m_3, m_2).

If the mapping fails to possess some of the above properties, then it degrades into an unnormalized distance, dissimilarity or pseudo-distance. Only full metrics are able to provide a positive finite value that matches the intuitive notion of gap between elements of a given space (the term was used by Fréchet [15] in his early works on metric spaces, i.e. spaces endowed with a distance). If a < +∞, then the distance is bounded, and if in addition a = 1, the distance is normalized. Provided that a mass function distance d is bounded, it can be normalized using the factor ρ = max_{A,B⊆Ω} d(m_A, m_B), which is the diameter of M [26].

The most popular metric in the belief function literature is Jousselme distance [17]. It is based on an inner product relying on a similarity matrix. This distance is given by:

d_J(m_1, m_2) = sqrt( (1/2) (**m**_1 − **m**_2)^T · D · (**m**_1 − **m**_2) ),   (17)

where **m**_i denotes the column vector version of mass function m_i and D is the Jaccard similarity matrix [16] between focal elements. Its components are:

D(A, B) = 1 if A = B = ∅, and |A ∩ B| / |A ∪ B| otherwise.   (18)

Thanks to the matrix D, Jousselme distance takes into account the dependencies between the base vectors of M. Consequently, the poset structure of (2^Ω, ⊆) has an impact on distance values, allowing a better match with the user's expectations. Many other mass function distances are defined similarly by substituting matrix D with another matrix evaluating the similarity between base vectors in different ways [12, 5]. Experimental material in [18] shows that these distances are highly correlated to d_J.

Observe that the aforementioned distances are the L_2 norm of the difference of two vectors which are obtained by applying the same linear mapping to each mass function under comparison. We can thus build other distances by resorting to other norms. In particular, when the linear mapping maps a mass function to its corresponding plausibility, commonality or implicability function, we obtain distances that will be instrumental in the sequel of this paper. The formal definition of these distances follows.

Definition 2.
For some family f ∈ {q, bel, pl, b} of set functions in bijective correspondence with mass functions (see [42] for the definition of the corresponding linear mappings), an L_k norm based f-distance d_f,k is the following mapping:

d_f,k : M × M → [0, 1], (m_1, m_2) ↦ (1/ρ) ‖**f**_1 − **f**_2‖_k,

where **f**_i is the vector representation of the set function f_i (in correspondence with m_i) and ρ is a normalization factor given by ρ = max_{A,B⊆Ω} ‖**f**_A − **f**_B‖_k. For any vector **f** ∈ R^N, its L_k norm is given by:

‖**f**‖_k = ( Σ_{A⊆Ω} |f(A)|^k )^{1/k}.   (19)

Given relation (5), we see that d_pl,k = d_b,k for any k. Consequently, we do not further mention distances between implicability functions in the sequel of this article. We end this subsection with a small result giving closed form expressions for the constant ρ for distances between plausibilities and commonalities.

Lemma 2.
For L_k norm based distances between commonality or plausibility functions, we have

ρ = (N − 1)^{1/k} if k < ∞, and ρ = 1 if k = ∞.   (20)

Proof. (sketch) Given proposition 2 in [20], for any of the distances evoked in the lemma, we have

max_{A,B⊆Ω} d(m_A, m_B) = d(m_Ω, m_∅).   (21)

Finally, for f ∈ {q, pl}, we always have |f_Ω(A) − f_∅(A)| = 1 if A ≠ ∅ and |f_Ω(∅) − f_∅(∅)| = 0.

Since specialization and generalization matrices are also in bijective correspondence with mass functions, we can use the same recipe as in definition 2 to build new mass function distances. The only difference is that mass functions are mapped to matrices and one must thus resort to matrix norms instead of vector norms. Such distances were first introduced in [25, 26]. A subset of these distances is defined as follows:

Definition 3.
The L_k norm based specialization distance d_spe,k is the following mapping:

d_spe,k : M × M → [0, 1], (m_1, m_2) ↦ (1/ρ) ‖S_1 − S_2‖_k,

where S_i is the specialization matrix in correspondence with m_i and ρ is a normalization factor given by

ρ = (2(N − 1))^{1/k} if k < ∞, and ρ = 1 if k = ∞.

For any matrix F ∈ R^{N×N}, its L_k norm is given by:

‖F‖_k = ( Σ_{A,B⊆Ω} |F(A, B)|^k )^{1/k}.   (22)

It was proved in [26] that if we use generalization matrices in the same way as in the above definition, we obtain a distance that coincides with the specialization distance. Other matrix norms were investigated in [25, 26], i.e. operator norms. These norms lead to mass function distances that have fewer desirable properties as compared to L_k matrix norm based ones. They are thus not mentioned in this article.

There are two main types of metrics between sets [22]: those accounting for how many elements are shared by the subsets and those that also account for the number of elements that they do not share (these distances are not consistent with informational partial orders that generalize set inclusion; see [20] for a definition of the consistency of mass function distances with partial orders). Examples of each category are the following:

• the Jaccard distance: d_jac(A, B) = 0 if A = B = ∅, and d_jac(A, B) = 1 − D(A, B) = |A Δ B| / |A ∪ B| otherwise,
• the (normalized) Hamming set distance: d_ham(A, B) = |A Δ B| / n,

where Δ denotes the set symmetric difference. The Jaccard distance belongs to the first type of metric while the Hamming distance belongs to the second one. The following example illustrates their main difference.

Example 1.
In this example, we replace subsets by their binary representations, i.e. A = 0011 means that n = 4 and the elements of A are the third and fourth elements of Ω. We have

d_jac(0011, 0101) = 2/3, d_ham(0011, 0101) = 1/2,   (23)

while

d_jac(00011, 00101) = 2/3, d_ham(00011, 00101) = 2/5.   (24)

The Hamming distance decreases as a fifth element is contained in Ω, and thus this distance depends on elements that the subsets do not share.

We can wonder how these distances interact with set operations like intersection and union. Actually, the nature of these interactions is highly dependent on how one wishes to perform information fusion using either intersections or unions:

• Suppose subsets represent a collection of candidate contents that a classifier must assign to an input image. If we want to evaluate if two images have similar contents, we can use a set distance between their imprecise tags. Suppose image a is tagged as {cat or dog} and image b is tagged as {dog or bike}. If we learn from a second classifier that both images contain pictures of an animal, then we deduce that the image contents are more likely to be closer after inserting this information. More formally, if one intersects both A and B with a third party subset C, the result of these intersections cannot be more distant than A and B were initially, which reads

(a) d(A ∩ C, B ∩ C) ≤ d(A, B).

• Suppose subsets are lists of attributes of some streaming video service users. Suppose that it is known that user a likes action movies, is a male and lives in the US. Suppose user b likes comedies, is a male and lives in the UK. Suppose we learn that both of them also like science fiction movies; then we deduce that these users have closer profiles than previously thought. More formally, if one unites both A and B with a third party subset C, the result of these unions cannot be more distant than A and B were initially, which reads

(b) d(A ∪ C, B ∪ C) ≤ d(A, B).
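The claims about properties (a) and (b) can be checked numerically. The following sketch (my own sampling scheme, not the paper's) verifies both properties for the Hamming distance and property (b) for the Jaccard distance on random triples, and exhibits a triple where the Jaccard distance violates (a):

```python
# Checking properties (a) and (b) for the Hamming set distance, property (b)
# for the Jaccard distance, and a Jaccard violation of (a), on a 5-element
# universe.
import random

U = frozenset(range(5))
n = len(U)

def d_ham(A, B):
    return len(A ^ B) / n

def d_jac(A, B):
    return 0.0 if not (A | B) else len(A ^ B) / len(A | B)

random.seed(0)
def rand_set():
    return frozenset(x for x in U if random.random() < 0.5)

for _ in range(10000):
    A, B, C = rand_set(), rand_set(), rand_set()
    assert d_ham(A & C, B & C) <= d_ham(A, B) + 1e-12   # property (a)
    assert d_ham(A | C, B | C) <= d_ham(A, B) + 1e-12   # property (b)
    assert d_jac(A | C, B | C) <= d_jac(A, B) + 1e-12   # property (b)

# Jaccard can violate (a): intersecting with C increases the distance here
A, B, C = frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})
print(d_jac(A, B), "<", d_jac(A & C, B & C))  # 2/3 < 1
```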
Observe that in both of these examples, one adopts a conjunctive information fusion policy in the sense that aggregation results are more informative than each input. The conjunctive/disjunctive nature of an operator (like intersection or union) depends on the type of underlying uncertainty. What these examples are meant to highlight is that, should one intend to combine the informative content of subsets using either intersections or unions, then a set distance should comply with either (a) or (b) in order to translate into numerical distance values the fact that informative contents are more similar after fusion.

It can be proved that the Hamming set distance verifies (a) and (b) while the Jaccard distance verifies (b) only, c.f. A for more details. From the applicative contexts of the above examples, we see that the desirability of property (a) or (b) depends on the information fusion operator. Outside the scope of information fusion, we may not require any of these properties for a set metric. When one intends to perform information fusion with random sets, it makes sense to wonder if some mass function distances can generalize these properties with respect to information fusion operators defined for them.
One way to generalize the properties (a) or (b) to random sets is stated bythe following property:
Definition 4.
Let ∗ be a combination operator and d a mass function distance. d is said to be consistent with respect to ∗ if any of the following conditions is verified:

(i) for any mass functions m_1, m_2 and m on Ω:

d(m_1 ∗ m, m_2 ∗ m) ≤ d(m_1, m_2).   (25)

(ii) for any mass function m, the mapping F_m : M → M of the form

F_m(m_1) = m_1 ∗ m   (26)

is Lipschitz continuous with 1 as Lipschitz constant.

The equivalence between the two conditions follows from the very definition of Lipschitz continuity with 1 as Lipschitz constant. Indeed, for mapping F_m to qualify as such, it means that we have d(F_m(m_1), F_m(m_2)) ≤ d(m_1, m_2) for any pair of mass functions (m_1, m_2). In the remainder of this article, we will refer to this property either as a consistency property between an operator and a distance or, by small abuse of language, as 1-Lipschitz continuity of an operator w.r.t. a distance.

Under this property, repeated combinations with a given mass function m cannot pull away any pair of mass functions. Such mappings are also called non-expansive or short maps. Lipschitz continuity is stronger than uniform continuity. In particular, it implies a form of regularity for the corresponding combination mechanism in the sense that the norm of its gradient is bounded by 1, meaning that the combined mass function does not change very fast or wiggle in the vicinity of functions m_1 or m_2.

From an informative content standpoint, this property also has an impact. Suppose a mass function m is separable, i.e. the combination under rule ∗ of elementary pieces of information embodied by simple mass functions yields function m (Shafer [40] introduced this terminology for decompositions w.r.t. Dempster's rule, but we understand it in a more general perspective here by considering decompositions w.r.t. some arbitrary rule ∗). Using a distance consistent w.r.t. ∗, mass functions are all the closer as their decompositions involve identical elementary components.

Proving that a fusion operator achieves Lipschitz continuity is not trivial because M is not finite but instead a compact subset of an uncountable space. In [25], Loudahi et al. established the consistency of the L_1 and L_∞ based specialization distances w.r.t. the conjunctive and disjunctive rules. Numerical experiments also show that the L_2 based specialization distance is not consistent w.r.t. the conjunctive or disjunctive rule in the sense of definition 4. The experiments also show that Jousselme distance is not consistent with the conjunctive rule.

4 New Lipschitz continuity results

In this section, we provide new Lipschitz continuity results for L_k norm based distances between commonality or plausibility functions with the conjunctive and disjunctive rules.

Proposition 1.
For 1 ≤ k ≤ ∞, ∩○ is 1-Lipschitz continuous w.r.t. the L_k norm based q-distance d_q,k. See C for proof.
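The intuition behind the proof is visible in relation (8): since q(A) ∈ [0, 1], multiplying both commonalities elementwise by the same q can only shrink their gap. A small simulation (my own setup, not the paper's experimental protocol) illustrates the bound:

```python
# Illustrating proposition 1: for random mass functions on a 3-element frame,
# the (unnormalized) L_k distance between commonalities never grows under
# conjunctive combination with a common third mass function.
import random
from itertools import combinations, product

SUBSETS = [frozenset(c) for r in range(4) for c in combinations("abc", r)]

def rand_mass(n_focal=3):
    focals = random.sample(SUBSETS, n_focal)
    w = [random.random() for _ in focals]
    s = sum(w)
    return {F: x / s for F, x in zip(focals, w)}

def q_vec(m):  # commonality function as a vector indexed by SUBSETS
    return [sum(v for E, v in m.items() if E >= A) for A in SUBSETS]

def conj(m1, m2):  # conjunctive rule (7)
    out = {}
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        out[A & B] = out.get(A & B, 0.0) + v1 * v2
    return out

def d_qk(m1, m2, k=2):  # unnormalized L_k distance between commonalities
    return sum(abs(x - y) ** k for x, y in zip(q_vec(m1), q_vec(m2))) ** (1 / k)

random.seed(1)
for _ in range(5000):
    m1, m2, m = rand_mass(), rand_mass(), rand_mass()
    assert d_qk(conj(m1, m), conj(m2, m)) <= d_qk(m1, m2) + 1e-9
print("no violation of the 1-Lipschitz bound found")
```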
Proposition 2. ∩○ is 1-Lipschitz continuous w.r.t. the L_∞ norm based pl-distance d_pl,∞. See D for proof.

As compared to previous Lipschitz continuity results [25, 26], specialization distances have a greater time complexity as compared to commonality ones. Indeed, although the construction of specialization matrices can be sped up [24], the time complexity to build a specialization matrix is O(N^{log(3)/log(2)}) ≈ O(N^{1.58}), and computing the norm of such a matrix has time complexity O(N^2). The time complexity to compute a commonality function [19] is O(N log(N)), while that of computing the norm of a commonality is O(N). Given relations (5) and (6), the time complexity to compute a plausibility function is identical to that of commonality ones. Moreover, the memory complexity is obviously reduced as well.

4.2 Lipschitz continuity for the conjunctive rule: numerical experiments and counter-examples

This subsection contains experiments illustrating the (in)consistency of several mass function distances with respect to the conjunctive rule. We randomly generate [2] simple mass function triplets (m_1, m_2, m) and check if inequality (25) holds (with ∗ = ∩○) for several distances. The corresponding success rates are reported in Table 1.

Table 1: Consistency rates for several mass function distances w.r.t. ∩○

Distance:         d_J      d_q,1   d_q,2   d_q,∞   d_pl,1   d_pl,2   d_pl,∞   d_spe,1
Consistency rate: 86.42%   100%    100%    100%    38.22%   63.60%   100%     100%
The results are compliant with propositions 1 and 2, as all commonality distances and d_pl,∞ achieve 100% of success. The results also show that Jousselme distance and L_1 or L_2 norm based distances between plausibilities are not consistent with ∩○. The rates also show that the circumstances in which Lipschitz continuity does not hold for these distances are not rare events. To get a better insight as to why d_pl,k is not consistent with ∩○ when k is finite, we provide the following counter-example:

Example 2.
Let Ω = {a, b, c}. Suppose m_1 = (1/2) m_{a,b} + (1/2) m_Ω, m_2 = (1/2) m_{a,c} + (1/2) m_Ω and m = m_{b}. By conjunctive combination, we obtain

m_1 ∩○ m = m_{b},   (27)

and

m_2 ∩○ m = (1/2) m_{b} + (1/2) m_∅.   (28)

The plausibilities are

           ∅    {a}   {b}   {a,b}  {c}   {a,c}  {b,c}  Ω
pl_1       0    1     1     1      1/2   1      1      1
pl_2       0    1     1/2   1      1     1      1      1
pl_{1∩}    0    0     1     1      0     0      1      1
pl_{2∩}    0    0     1/2   1/2    0     0      1/2    1/2

We see that

d_pl,k(m_1, m_2) = (1/ρ) ( 2 (1/2)^k )^{1/k}   (29)

while

d_pl,k(m_1 ∩○ m, m_2 ∩○ m) = (1/ρ) ( 4 (1/2)^k )^{1/k}.   (30)

The latter quantity is strictly larger than the former for any finite k. This counter-example tends to show that the inconsistency of these distances lies (at least partially) in the way that the mass of the empty set is assigned by the conjunctive rule.

Proposition 3.
For 1 ≤ k ≤ ∞, ∪○ is 1-Lipschitz continuous w.r.t. the L_k norm based pl-distance d_pl,k. See E for proof.
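The counter-example of section 4.2 and proposition 3 can be contrasted numerically. In the sketch below (normalization by ρ is omitted and helper names are mine), the L_1 pl-distance doubles under the conjunctive rule but shrinks under the disjunctive rule on the same triplet:

```python
# Reproducing the counter-example of section 4.2 with k = 1: the pl-distance
# grows under the conjunctive rule, while, in line with proposition 3, it
# contracts under the disjunctive rule.
from itertools import combinations, product

OMEGA = frozenset("abc")
SUBSETS = [frozenset(c) for r in range(4) for c in combinations("abc", r)]

def combine(m1, m2, op):
    out = {}
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        E = op(A, B)
        out[E] = out.get(E, 0.0) + v1 * v2
    return out

def pl_vec(m):
    return [sum(v for E, v in m.items() if E & A) for A in SUBSETS]

def d_pl1(m1, m2):  # unnormalized L_1 distance between plausibilities
    return sum(abs(x - y) for x, y in zip(pl_vec(m1), pl_vec(m2)))

m1 = {frozenset("ab"): 0.5, OMEGA: 0.5}
m2 = {frozenset("ac"): 0.5, OMEGA: 0.5}
m  = {frozenset("b"): 1.0}

before = d_pl1(m1, m2)
after_conj = d_pl1(combine(m1, m, frozenset.intersection),
                   combine(m2, m, frozenset.intersection))
after_disj = d_pl1(combine(m1, m, frozenset.union),
                   combine(m2, m, frozenset.union))
print(before, after_conj, after_disj)  # 1.0 2.0 0.5
assert after_conj > before    # inconsistency of d_pl,1 with the conjunctive rule
assert after_disj <= before   # proposition 3
```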
Proposition 4. ∪ is 1-Lipschitz continuous w.r.t. the L^∞ norm based q-distance d_q,∞. See Appendix F for a proof.

The same type of arguments outlining the added value of our new Lipschitz continuity results in the conjunctive case also holds in the disjunctive one. The distances between plausibilities or commonalities have smaller time and memory complexities as compared to the specialization distances. It must be noted that d_spe,1, d_spe,∞, d_q,∞ and d_pl,∞ are the only distances that are reported to be consistent with both the conjunctive and disjunctive rules. As the numerical experiments presented in the next paragraph will show, L^k norm based distances between commonalities are not consistent with ∪ when k is finite. The counter-example presented in subsection 4.2 proves that L^k norm based distances between plausibilities are not consistent with ∩ when k is finite.

This subsection contains experiments illustrating the (in)consistency of several mass function distances with respect to the disjunctive rule. We randomly generate [2] triplets of simple mass functions (m1, m2, m3) and check whether d(m1 ∗ m3, m2 ∗ m3) ≤ d(m1, m2) holds (with ∗ = ∪) for several distances. The corresponding success rates are reported in Table 2.

Table 2: Consistency rates for several mass function distances w.r.t. ∪

Distance           d_J     d_q,1    d_q,2    d_q,∞   d_pl,1   d_pl,2   d_pl,∞   d_spe,1
Consistency rate   100%   94.76%   94.09%    100%     100%     100%     100%     100%
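The experiment above can be sketched in a few lines of Python. This is an illustrative re-implementation (not the code behind Table 2), and the generator of simple mass functions below is a simplistic stand-in for the sampling scheme of [2].

```python
import random
from itertools import combinations

OMEGA = frozenset("abc")
SUBSETS = [frozenset(c) for r in range(4) for c in combinations(sorted(OMEGA), r)]

def commonality(m):
    # q(A) = sum of m(B) over focal sets B that are supersets of A
    return {A: sum(v for B, v in m.items() if A <= B) for A in SUBSETS}

def plausibility(m):
    # pl(A) = sum of m(B) over focal sets B that intersect A
    return {A: sum(v for B, v in m.items() if A & B) for A in SUBSETS}

def disjunctive(m1, m2):
    # disjunctive rule: masses flow to unions of focal sets
    out = {}
    for B, v in m1.items():
        for C, w in m2.items():
            out[B | C] = out.get(B | C, 0.0) + v * w
    return out

def d1(set_fn, m1, m2):
    # unnormalized L1 distance between set functions (the 1/rho factor
    # cancels when comparing distances before and after combination)
    f1, f2 = set_fn(m1), set_fn(m2)
    return sum(abs(f1[A] - f2[A]) for A in SUBSETS)

def random_simple():
    # simple mass function: one focal set A plus Omega
    A = random.choice([s for s in SUBSETS if s and s != OMEGA])
    w = random.random()
    return {A: w, OMEGA: 1.0 - w}

random.seed(1)
trials, ok_q, ok_pl = 2000, 0, 0
for _ in range(trials):
    m1, m2, m3 = random_simple(), random_simple(), random_simple()
    m13, m23 = disjunctive(m1, m3), disjunctive(m2, m3)
    ok_q += d1(commonality, m13, m23) <= d1(commonality, m1, m2) + 1e-12
    ok_pl += d1(plausibility, m13, m23) <= d1(plausibility, m1, m2) + 1e-12

print(f"d_q,1  success rate: {100.0 * ok_q / trials:.2f}%")
print(f"d_pl,1 success rate: {100.0 * ok_pl / trials:.2f}%")
```

With this crude generator, the plausibility-based distance never fails (as proposition 3 guarantees) while the commonality-based one typically fails on a small fraction of draws, in line with the rates of Table 2.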
The results are compliant with propositions 3 and 4 as all plausibility distances and d_q,∞ achieve 100% success. The results also show that L^1 or L^2 norm based distances between commonalities are not consistent with ∪. The consistency of the Jousselme distance can be conjectured. This distance also achieves 100% success if one draws random mass functions and not just random simple mass functions. To get a better insight as to why d_q,k is not consistent with ∪ when k is finite, we provide the following counter-example:

Example 3.
Let Ω = {a, b, c}. Suppose m1 = m{a}, m2 = m{a,c} and m3 = m{b}. By disjunctive combination, we obtain

m1 ∪ m3 = m{a,b},   (31)

and

m2 ∪ m3 = mΩ.   (32)

The commonalities are

         ∅   {a}   {b}   {a,b}   {c}   {a,c}   {b,c}   Ω
q1       1    1     0      0      0      0       0     0
q2       1    1     0      0      1      1       0     0
q1∪3     1    1     1      1      0      0       0     0
q2∪3     1    1     1      1      1      1       1     1

We see that

d_q,k(m1, m2) = (1/ρ) 2^(1/k),   (33)

while

d_q,k(m1 ∪ m3, m2 ∪ m3) = (1/ρ) 4^(1/k).   (34)

Conflict degrees spanned by distances consistent with the conjunctive rule
When information sources support antagonistic assumptions, it is important to provide a way to numerically assess the level of inconsistency in their respective messages. This is the purpose of degrees of conflict defined in the framework of belief functions. If such a degree of conflict is bounded, we can use different information fusion strategies in order to make more robust decisions.

In the theory of belief functions, such a situation typically occurs when there is a pair of subsets (A, B) such that A ∩ B = ∅, m1(A) > 0 and m2(B) > 0. In the following paragraphs, we give a brief reminder of existing conflict degrees in the belief function literature as well as desirable properties for such degrees. Next, we also comment on the advisability of building new degrees using distances that are consistent with ∩.

In his pioneering article, Dempster [8] already provides a way to assess the degree of conflict between two mass functions. Let κ denote this criterion, which is known as Dempster's degree of conflict and reads

κ(m1, m2) = m1∩2(∅).   (35)

More recently, Destercke and Burger [11] outline that this degree can be built upon a consistency measure φ which evaluates to what extent a single mass function is not self-contradictory. In the case of Dempster's degree of conflict, this measure is simply given by

φ(m) = 1 − m(∅).   (36)

They also introduce the following strong consistency measure Φ, which is such that

Φ(m) = max_{a ∈ Ω} pl({a}).   (37)

This second measure is the L^∞ norm of the contour function (the contour function is the restriction of the plausibility function to singletons). It is stronger in the sense that φ(m) < 1 ⇒ Φ(m) < 1.

The authors of [11] also list desirable properties for a degree of conflict C:

(i) (extreme conflict values) C(m1, m2) = 0 iff m1 and m2 are non-conflicting and C(m1, m2) = 1 iff m1 ∩ m2 = m∅,
(ii) (symmetry) C(m1, m2) = C(m2, m1),
(iii) (imprecision monotonicity) if m1 ⊑ m1′ then C(m1′, m2) ≤ C(m1, m2),
(iv) (ignorance is bliss) C(m, mΩ) = 1 − I(m) where I is a consistency measure such as the aforementioned ones,
(v) (invariance to refinement) for some multi-valued mapping ρ : Ω → Θ with |Ω| < |Θ| < ∞ and mass functions m1′ and m2′ such that mi′(∪_{a ∈ A} ρ(a)) = mi(A) for any A ⊆ Ω, we have C(m1, m2) = C(m1′, m2′).

The definition of non-conflicting mass functions is not specified in property (i) because several such notions can be considered.
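Definitions (35) through (37) can be illustrated with a minimal sketch (mass functions as Python dicts from frozensets to masses; names are ours, not the paper's):

```python
OMEGA = frozenset("abc")

def conjunctive(m1, m2):
    # conjunctive rule: masses flow to intersections of focal sets
    out = {}
    for B, v in m1.items():
        for C, w in m2.items():
            out[B & C] = out.get(B & C, 0.0) + v * w
    return out

def kappa(m1, m2):
    # Dempster's degree of conflict, eq. (35): mass assigned to the
    # empty set by the conjunctive combination
    return conjunctive(m1, m2).get(frozenset(), 0.0)

def phi(m):
    # consistency measure, eq. (36)
    return 1.0 - m.get(frozenset(), 0.0)

def strong_phi(m):
    # strong consistency measure, eq. (37): max of the contour function,
    # i.e. plausibility restricted to singletons
    return max(sum(v for B, v in m.items() if a in B) for a in OMEGA)

m1 = {frozenset("ab"): 0.5, OMEGA: 0.5}
m2 = {frozenset("ac"): 0.5, OMEGA: 0.5}
m3 = {frozenset("b"): 1.0}

print(kappa(m1, m3))                    # 0.0 : no pair of focal sets is disjoint
print(kappa(m2, m3))                    # 0.5 : {a,c} clashes with {b}
print(phi(conjunctive(m2, m3)))         # 0.5
print(strong_phi(conjunctive(m1, m3)))  # 1.0 : every focal set contains b
```

The two κ values show how conflict arises only from disjoint focal elements of the operands.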
The authors explain that if non-conflict means that the intersection of any focal element of m1 with any focal element of m2 is not empty, then κ satisfies each property with I = φ. Moreover, if non-conflict means that the intersection of all the focal elements of both mass functions is not empty, then K(m1, m2) = 1 − max_{a ∈ Ω} pl1∩2({a}) satisfies each property with I = Φ. We will refer to K as the degree of strong conflict. The informational partial order in property (iii) is the specialization partial order [14] for both degrees κ and K.

Prior to Destercke and Burger [11], several authors [27, 23] proposed to derive new degrees of conflict to overcome the limitations of κ. Indeed, Dempster's degree of conflict evaluates two pairs of mass functions as equally conflicting as long as they assign the same mass to ∅ (after their respective conjunctive combinations). Let m∩ denote the conjunctive combination of the first pair and m′∩ that of the second one. Suppose the focal elements of m∩ are {∅, A} and those of m′∩ are {∅, A, B}. If A ∩ B = ∅, then m′∩ intuitively carries a higher level of inconsistency, which κ fails to grasp.

The degrees of conflict introduced in [27, 23] are built using pairwise distances d(m1, m2). There are however several arguments [11, 1] outlining that this practice is ill-advised. Yet, in a recent work, Pichon and Jousselme [35] highlighted that non-pairwise distances can be instrumental in the construction of degrees of conflict. The authors examine the distance between the conjunctive combination m1∩2 and some reference mass function, i.e. the total conflict mass function m∅. Indeed we have

κ(m1, m2) = 1 − d_pl,∞(m1∩2, m∅).   (38)

Similarly, K can be retrieved as the L^∞ norm based distance between the contour functions of m1∩2 and m∅. This observation raises the following question: can we build other relevant degrees of conflict in the same fashion as in (38) but using other distances than d_pl,∞?
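Relation (38) is easy to check numerically; below is a minimal sketch using the same dict-based representation as before (no normalization is needed for the L^∞ distance, since plausibility differences already lie in [0, 1]).

```python
import random
from itertools import combinations

OMEGA = frozenset("abc")
SUBSETS = [frozenset(c) for r in range(4) for c in combinations(sorted(OMEGA), r)]

def plausibility(m):
    # pl(A) = sum of m(B) over focal sets B intersecting A
    return {A: sum(v for B, v in m.items() if A & B) for A in SUBSETS}

def conjunctive(m1, m2):
    out = {}
    for B, v in m1.items():
        for C, w in m2.items():
            out[B & C] = out.get(B & C, 0.0) + v * w
    return out

def d_pl_inf(m1, m2):
    # L_inf distance between plausibility vectors
    p1, p2 = plausibility(m1), plausibility(m2)
    return max(abs(p1[A] - p2[A]) for A in SUBSETS)

M_EMPTY = {frozenset(): 1.0}  # total conflict mass function

def random_mass():
    focal = random.sample([s for s in SUBSETS if s], 3)
    w = [random.random() for _ in focal]
    return {A: x / sum(w) for A, x in zip(focal, w)}

random.seed(2)
for _ in range(500):
    m1, m2 = random_mass(), random_mass()
    m12 = conjunctive(m1, m2)
    kappa = m12.get(frozenset(), 0.0)  # Dempster's degree of conflict
    assert abs(kappa - (1.0 - d_pl_inf(m12, M_EMPTY))) < 1e-12
print("relation (38) verified on 500 random pairs")
```

The identity follows from the fact that all plausibilities of m∅ are zero, so the L^∞ distance to m∅ reduces to pl1∩2(Ω) = 1 − m1∩2(∅).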
We try to provide some answers to this question in the next paragraphs when the examined distances are consistent with ∩.

Proposition 5.
Let d denote a mass function distance which is either an L^k norm based distance between commonalities, between plausibilities, or between specialization matrices. Let C : M × M → [0; 1] denote the following mapping:

C(m1, m2) = 1 − d(m1∩2, m∅).   (39)

Then C does not satisfy property (i) if k is finite.

See Appendix G for a proof. From the above result, building a conflict degree from (39) using d_q,k, d_pl,k or d_spe,k is ill-advised whenever k is finite. Intuitively, degrees of conflict relying on L^∞ norms are better candidates to verify (i) because the maximal norm value is not uniquely achieved at m1∩2 = mΩ.

Proposition 6.
Let C_q,∞ : M × M → [0; 1] denote the following mapping:

C_q,∞(m1, m2) = 1 − d_q,∞(m1∩2, m∅).   (40)

Then C_q,∞ coincides with the degree of strong conflict K.

Proposition 7. Let C_spe,∞ : M × M → [0; 1] denote the following mapping:

C_spe,∞(m1, m2) = 1 − d_spe,∞(m1∩2, m∅).   (41)

Then C_spe,∞ coincides with Dempster's degree of conflict κ.

See Appendices H and I for proofs. Proposition 6 shows that the degree of strong conflict is retrieved through the L^∞ norm based commonality distance, while proposition 7 shows that Dempster's degree of conflict is retrieved through the L^∞ norm based specialization distance; these are complementary observations in line with [35]. We continue with another more general remark, i.e. one outside the sole scope of a given family of mass function distances.

Proposition 8.
Let d denote a mass function distance which is consistent w.r.t. ∩. Let C : M × M → [0; 1] denote the mapping defined from (39). Then C satisfies property (iii) w.r.t. the Dempsterian partial order ⊑_d.

See Appendix J for a proof. Following proposition 8, it seems that, in general, distances consistent w.r.t. ∩ are good candidates to possibly yield a relevant conflict degree.

Conclusion

In the scope of the theory of belief functions, this paper provides new results on the consistency of L^k norm based distances between commonalities and the L^∞ norm based distance between plausibilities with the conjunctive rule of combination. We also prove the consistency of L^k norm based distances between plausibilities and the L^∞ norm based distance between commonalities with the disjunctive rule of combination. The investigated form of consistency is equivalent to Lipschitz continuity of the mapping obtained by fixing one of the operands of pairwise combinations under these rules. Since the corresponding Lipschitz constant is 1, this property means that combining any pair of belief functions with any third party belief function is a non-expansive operation.

Outside the scope of belief functions, the results apply to random set distributions, as belief functions can be interpreted as epistemic random sets. In this more general context, the conjunctive rule yields the distribution of the intersection of two independent random sets while the disjunctive rule yields the distribution of the union of two independent random sets. Commonalities map any subset A to the probability that the random set is a superset of A (inclusion functional). Plausibilities map any subset A to the probability that the random set intersects A (hitting functional). Our results prove that if F maps the distribution of a random set to the distribution of the intersection of this random set with a given (fixed) independent one, then F is Lipschitz continuous with Lipschitz constant 1 w.r.t.
L^k norm based distances between inclusion functionals or the L^∞ norm based distance between hitting functionals. Similarly, if F maps the distribution of a random set to the distribution of the union of this random set with a given (fixed) independent one, then F is Lipschitz continuous with Lipschitz constant 1 w.r.t. L^k norm based distances between hitting functionals or the L^∞ norm based distance between inclusion functionals.

We only investigate belief functions and random sets on finite spaces. Extending these results to uncountable spaces is an important perspective for future works. In the uncountable setting, random closed sets are defined as measurable mappings with respect to the Effros σ-algebra on the family of closed subsets of some locally compact Hausdorff completely separable topological space. The main results obtained in the finite case essentially rely on two aspects:

(i) the intersection (resp. union) of independent random sets can be characterized by the elementwise multiplication of their inclusion (resp. containment) functionals,
(ii) a closed form expression of L^k norms for inclusion or hitting functionals.

Concerning the first aspect, the fact that (A ⊇ C and B ⊇ C) ⇔ A ∩ B ⊇ C and (A ⊆ C and B ⊆ C) ⇔ A ∪ B ⊆ C is intuitively sufficient to obtain equivalent relations in uncountable spaces. The relation for unions of independent random closed sets is indirectly evoked for containment functionals in [30, p. 82]. The second aspect seems more challenging to generalize because one needs to introduce a norm for capacity functionals that is not just a vector norm. It may be possible to build such norms using Choquet integrals.

Another relevant research track for future works consists in investigating whether the proposed Lipschitz continuity results hold as well when independence assumptions are not verified.
Intuitively, intersecting or uniting some pair of random sets with a third one that may or may not be dependent on either of them should still make their corresponding distributions closer (w.r.t. the appropriate distance). The difficulty in this regard is that one can no longer work with marginal inclusion or containment functionals alone but with multivariate ones [36], which do not factorize as elementwise products of marginal functionals. Recent works [37, 38, 39] generalizing Sklar's theorem on copulas to joint or multivariate capacity functionals of random sets may be useful in this quest because they give an explicit connection between marginal functionals and multivariate ones. However, in contrast to point-valued random variables, a family of copulas is necessary to characterize this link.

Finally, going back to belief functions, we also discuss the advisability of building new degrees of conflict using distances that are consistent with the conjunctive rule. Such degrees are mainly interesting in information fusion applications of belief functions. We show that distances consistent with the conjunctive rule can be deemed relevant candidates for this purpose as they achieve a desirable property for degrees of conflict. As for the distances for which we provide new consistency results (distances between commonalities or plausibilities), it turns out that they either violate another desirable property or coincide with an already known degree of conflict.

Acknowledgments
The author is indebted to Frederic Pichon for the useful discussions on degrees of conflict and for his remarks that helped clarify the meaning of the consistency property.
A Set metrics and their consistency with set operations
This appendix is meant to show that some distances between sets verify either property (a) or (b); see subsection 3.3 for the definitions of these latter. We examine the Hamming distance and the Jaccard distance:
• Hamming distance: for any subsets A, B and C, we have

d_ham(A ∩ C, B ∩ C) = |(A ∩ C) Δ (B ∩ C)| / n   (42)
 = |(A Δ B) ∩ C| / n   (43)
 ≤ |A Δ B| / n.   (44)

So we see that d_ham verifies (a). We can also write

d_ham(A ∪ C, B ∪ C) = |(A ∪ C) Δ (B ∪ C)| / n   (45)
 = |(A ∪ B ∪ C) \ ((A ∪ C) ∩ (B ∪ C))| / n   (46)
 = |(A ∪ B ∪ C) \ ((A ∩ B) ∪ C)| / n   (47)
 = |((A \ C) ∪ (B \ C)) \ (A ∩ B)| / n   (48)
 ≤ |(A ∪ B) \ (A ∩ B)| / n   (49)
 ≤ |A Δ B| / n,   (50)

and d_ham verifies (b) as well.

• Jaccard distance: for any subsets
A, B and C, we have

d_jac(A ∪ C, B ∪ C) = |(A ∪ C) Δ (B ∪ C)| / |(A ∪ C) ∪ (B ∪ C)|   (51)
 = (|A Δ B| − |(A Δ B) ∩ C|) / (|A ∪ B| + |C \ (A ∪ B)|)   (52)
 ≤ |A Δ B| / |A ∪ B|.   (53)

So we see that d_jac verifies (b). To see that d_jac does not verify property (a), we provide the following counter-example.

Example 4. Let A and B denote two subsets that are not disjoint, i.e. A ∩ B ≠ ∅, and therefore d_jac(A, B) = 1 − |A ∩ B| / |A ∪ B| < 1. Let C = A Δ B; then A ∩ C = A \ B and B ∩ C = B \ A. This also implies that (A ∩ C) Δ (B ∩ C) = A Δ B because A ∩ C and B ∩ C are disjoint. We also have (A ∩ C) ∪ (B ∩ C) = A Δ B and consequently d_jac(A ∩ C, B ∩ C) = 1.

B Proof of lemma 1
In this appendix, we give a proof that b|E(A) = b((E \ A)^c) for any implicability function b and any subsets A and E.

Proof.
By definition of the implicability function and conditioning, we have for any A ⊆ Ω

b|E(A) = Σ_{B ⊆ A} Σ_{C ⊆ Ω s.t. C ∩ E = B} m(C).   (54)

The second sum is empty if E is not a superset of B. If this condition is verified, we remark that the subsets C are necessarily the union of B and some subset of E^c. This gives

b|E(A) = Σ_{B ⊆ A ∩ E} Σ_{D ⊆ E^c} m(B ∪ D).   (55)

Finally, any subset X of (E \ A)^c can be partitioned w.r.t. A ∩ E and E^c, meaning that ∃! Y ⊆ A ∩ E and ∃! Y′ ⊆ E^c such that X = Y ∪ Y′. Consequently, we have

b|E(A) = Σ_{X ⊆ (E \ A)^c} m(X)   (56)
 = b((E \ A)^c).   (57)

C Proof of proposition 1
In this appendix, we give a proof that for 1 ≤ k ≤ ∞, ∩ is 1-Lipschitz continuous w.r.t. the L^k norm based q-distance d_q,k.

Proof.
Suppose m1, m2 and m3 are three mass functions on Ω and q1, q2 and q3 are their respective commonality vectors. For any positive finite integer k, we have:

[d_q,k(m1 ∩ m3, m2 ∩ m3)]^k = [‖q1∩3 − q2∩3‖_k]^k
 = Σ_{A ⊆ Ω} |q1∩3(A) − q2∩3(A)|^k
 = Σ_{A ⊆ Ω} |q1(A) q3(A) − q2(A) q3(A)|^k
 = Σ_{A ⊆ Ω} |q3(A)|^k |q1(A) − q2(A)|^k.

For any subset A ⊆ Ω, we have that 0 ≤ q3(A) ≤ 1, hence

[d_q,k(m1 ∩ m3, m2 ∩ m3)]^k ≤ Σ_{A ⊆ Ω} |q1(A) − q2(A)|^k
 ≤ [d_q,k(m1, m2)]^k.

By definition, this latter inequality means that distance d_q,k is consistent with rule ∩.

If k = ∞, we have:

d_q,∞(m1 ∩ m3, m2 ∩ m3) = ‖q1∩3 − q2∩3‖_∞
 = max_{A ⊆ Ω} |q1∩3(A) − q2∩3(A)|
 = max_{A ⊆ Ω} |q1(A) q3(A) − q2(A) q3(A)|
 = q3(B) |q1(B) − q2(B)|,

with B = arg max_{A ⊆ Ω} {q3(A) |q1(A) − q2(A)|}. It follows that

d_q,∞(m1 ∩ m3, m2 ∩ m3) ≤ |q1(B) − q2(B)|
 ≤ max_{A ⊆ Ω} |q1(A) − q2(A)|
 ≤ d_q,∞(m1, m2).

By definition, this latter inequality means that ∩ is 1-Lipschitz continuous w.r.t. d_q,∞.

D Proof of proposition 2
In this appendix, we give a proof that ∩ is 1-Lipschitz continuous w.r.t. the L^∞ norm based pl-distance d_pl,∞.

Proof.
Suppose m1, m2 and m3 are three mass functions on Ω and pl1, pl2 and pl3 are their respective plausibility vectors. Let us first prove an intermediate result in case m3 = m_E is categorical. We can write

d_pl,∞(m1|E, m2|E) = max_{A ⊆ Ω} |pl1|E(A) − pl2|E(A)|.   (58)

Using the fact that pl and b-distances coincide and lemma 1, we obtain

d_pl,∞(m1|E, m2|E) = max_{A ⊆ Ω} |b1|E(A) − b2|E(A)|   (59)
 = max_{A ⊆ Ω} |b1((E \ A)^c) − b2((E \ A)^c)|   (60)
 ≤ max_{A ⊆ Ω} |b1(A) − b2(A)|.   (61)

So the consistency condition is verified when m3 is categorical. Now, let us examine the general case where m3 is not necessarily categorical. Let B denote the transfer matrix [42] allowing to obtain vector forms of implicability functions by right-handed dot product with the vector form of their corresponding mass functions, and let S1 and S2 denote the specialization matrices of m1 and m2. We can write

d_pl,∞(m1 ∩ m3, m2 ∩ m3) = ‖pl1∩3 − pl2∩3‖_∞   (62)
 = ‖b1∩3 − b2∩3‖_∞   (63)
 = ‖B · (m1∩3 − m2∩3)‖_∞   (64)
 = ‖B · (S1 − S2) · m3‖_∞.   (65)

One can always decompose a mass function as a convex combination of categorical ones: m3 = Σ_{E ⊆ Ω} m3(E) m_E. We obtain

d_pl,∞(m1 ∩ m3, m2 ∩ m3) = ‖Σ_{E ⊆ Ω} m3(E) B · (S1 − S2) · m_E‖_∞   (66)
 ≤ Σ_{E ⊆ Ω} m3(E) ‖B · (S1 − S2) · m_E‖_∞,   (67)

with the last inequality following from the triangle inequality and absolute homogeneity properties of the L^∞ norm. From our intermediate result, we know that for any E, ‖B · (S1 − S2) · m_E‖_∞ = d_pl,∞(m1|E, m2|E) ≤ d_pl,∞(m1, m2). Consequently, we have

d_pl,∞(m1 ∩ m3, m2 ∩ m3) ≤ d_pl,∞(m1, m2) Σ_{E ⊆ Ω} m3(E)   (68)
 ≤ d_pl,∞(m1, m2).   (69)

By definition, this latter inequality means that ∩ is 1-Lipschitz continuous w.r.t. d_pl,∞.

E Proof of proposition 3
In this appendix, we give a proof that for 1 ≤ k ≤ ∞, ∪ is 1-Lipschitz continuous w.r.t. the L^k norm based pl-distance d_pl,k.

Proof.
Suppose m1, m2 and m3 are three mass functions on Ω and pl1, pl2 and pl3 are their respective plausibility vectors. For any positive finite integer k, we have:

[d_pl,k(m1 ∪ m3, m2 ∪ m3)]^k = [‖pl1∪3 − pl2∪3‖_k]^k
 = Σ_{A ⊆ Ω} |pl1∪3(A) − pl2∪3(A)|^k
 = Σ_{A ⊆ Ω} |b1∪3(A^c) − b2∪3(A^c)|^k
 = Σ_{A ⊆ Ω} |b1(A^c) b3(A^c) − b2(A^c) b3(A^c)|^k
 = Σ_{A ⊆ Ω} |b3(A^c)|^k |b1(A^c) − b2(A^c)|^k.

For any subset A ⊆ Ω, we have that 0 ≤ b3(A^c) ≤ 1, hence

[d_pl,k(m1 ∪ m3, m2 ∪ m3)]^k ≤ Σ_{A ⊆ Ω} |b1(A^c) − b2(A^c)|^k
 ≤ Σ_{A ⊆ Ω} |pl1(A) − pl2(A)|^k   (70)
 ≤ [d_pl,k(m1, m2)]^k.
By definition, this latter inequality means that distance d_pl,k is consistent with rule ∪.

If k = ∞, we have:

d_pl,∞(m1 ∪ m3, m2 ∪ m3) = ‖pl1∪3 − pl2∪3‖_∞
 = max_{A ⊆ Ω} |pl1∪3(A) − pl2∪3(A)|
 = max_{A ⊆ Ω} |b1∪3(A^c) − b2∪3(A^c)|
 = max_{A ⊆ Ω} |b1(A^c) b3(A^c) − b2(A^c) b3(A^c)|
 = b3(B^c) |b1(B^c) − b2(B^c)|,

with B = arg max_{A ⊆ Ω} {b3(A^c) |b1(A^c) − b2(A^c)|}. It follows that

d_pl,∞(m1 ∪ m3, m2 ∪ m3) ≤ |b1(B^c) − b2(B^c)|
 ≤ |pl1(B) − pl2(B)|
 ≤ max_{A ⊆ Ω} |pl1(A) − pl2(A)|
 ≤ d_pl,∞(m1, m2).

By definition, this latter inequality means that ∪ is 1-Lipschitz continuous w.r.t. d_pl,∞.

F Proof of proposition 4
In this appendix, we give a proof that ∪ is 1-Lipschitz continuous w.r.t. the L^∞ norm based q-distance d_q,∞.

Proof.
Suppose m1, m2 and m3 are three mass functions on Ω and q1, q2 and q3 are their respective commonality functions. Let m̄ denote the negation of a mass function m, i.e. the mass function such that m̄(A) = m(A^c) for all A ⊆ Ω. Using relations (6) and (14), we can write

d_q,∞(m1 ∪ m3, m2 ∪ m3) = max_{A ⊆ Ω} |q1∪3(A) − q2∪3(A)|   (71)
 = max_{A ⊆ Ω} |b̄1∪3(A^c) − b̄2∪3(A^c)|   (72)
 = d_b,∞(m̄1∪3, m̄2∪3)   (73)
 = d_pl,∞(m̄1∪3, m̄2∪3)   (74)
 = d_pl,∞(m̄1 ∩ m̄3, m̄2 ∩ m̄3).   (75)

Since d_pl,∞ is consistent w.r.t. ∩, we obtain

d_q,∞(m1 ∪ m3, m2 ∪ m3) ≤ d_pl,∞(m̄1, m̄2)   (76)

and

d_pl,∞(m̄1, m̄2) = d_b,∞(m̄1, m̄2)   (77)
 = max_{A ⊆ Ω} |b̄1(A) − b̄2(A)|   (78)
 = max_{A ⊆ Ω} |q1(A^c) − q2(A^c)|   (79)
 = d_q,∞(m1, m2).   (80)

By definition, this latter inequality means that ∪ is 1-Lipschitz continuous w.r.t. d_q,∞.

G Proof of proposition 5
In this appendix, we provide a proof that the extreme conflict values property cannot be verified when a degree of conflict C is defined as

C(m1, m2) = 1 − d(m1∩2, m∅),

where d is an L^k norm based distance between either commonality functions, plausibility functions or specialization matrices and k is finite.

Proof.
We obviously have C(m1, m2) = 1 iff m1∩2 = m∅ and mass functions m1 and m2 are maximally conflicting, so the problem does not come from this side of property (i).

Now, suppose d = d_q,k and k is finite. Since C(m1, m2) = 0 ⇔ d_q,k(m1∩2, m∅) = 1, we can write

d_q,k(m1∩2, m∅) = 1   (81)
 ⇔ (1/ρ_k) ‖q1∩2 − q∅‖_k = 1   (82)
 ⇔ (1/ρ_k^k) Σ_{A ⊆ Ω} |q1∩2(A) − q∅(A)|^k = 1.   (83)

Remember that ρ_k = (N − 1)^(1/k) for L^k norm based commonality distances. Moreover, q∅(A) = 0 when A ≠ ∅ and q(∅) = 1 for any commonality function, therefore we obtain

d_q,k(m1∩2, m∅) = 1   (84)
 ⇔ Σ_{A ⊆ Ω, A ≠ ∅} |q1∩2(A)|^k = N − 1.   (85)

Finally, the sum of commonalities (to the power k) cannot be equal to N − 1 unless q1∩2(A) = 1 for each A ≠ ∅. This means that q1∩2 = qΩ, which implies that q1 = q2 = qΩ. This latter condition is not an admissible definition of non-conflict. The proof for distances between plausibilities is extremely similar and is thus omitted.

Concerning distances between specialization matrices, the philosophy is also similar but we provide a sketch of the proof. When dealing with an L^k norm based specialization distance, we have ρ_k = (2(N − 1))^(1/k) and this maximal distance is achieved for the pair (mΩ, m∅). To see that mΩ is the only mass function achieving maximal distance with m∅, one just needs to observe that the matrix entries of S∅ are S∅(A, B) = 1 if A = ∅ and S∅(A, B) = 0 otherwise. We can write

[d_spe,k(m1∩2, m∅)]^k = (1/ρ_k^k) Σ_{E ⊆ Ω} (‖m1∩2|E − m∅‖_k)^k.   (86)

The only way to maximize the above expression is to maximize each ‖m1∩2|E − m∅‖_k individually for E ≠ ∅. We know that (‖m1∩2|E − m∅‖_k)^k ≤ 2 and one needs m1∩2|E(∅) = 0 to achieve this maximal value. This is not possible for every E ≠ ∅ unless m1∩2 = mΩ. Again, m1∩2 = mΩ implies that m1 = m2 = mΩ.

H Proof of proposition 6
In this appendix, we provide a proof that the degree of conflict C_q,∞ defined as

C_q,∞(m1, m2) = 1 − d_q,∞(m1∩2, m∅)

coincides with the degree of strong conflict K.

Proof.
By definition of C_q,∞, we can write

C_q,∞(m1, m2) = 1 − max_{A ⊆ Ω} |q1∩2(A) − q∅(A)|.   (87)

Since q∅(A) = 0 when A ≠ ∅ and q(∅) = 1 for any commonality function, we obtain

C_q,∞(m1, m2) = 1 − max_{A ⊆ Ω, A ≠ ∅} |q1∩2(A)|.   (88)

Any commonality function is such that q(A) ≥ q(A′) if A ⊆ A′, which gives

C_q,∞(m1, m2) = 1 − max_{a ∈ Ω} |q1∩2({a})|.   (89)

Finally, commonality and plausibility functions coincide on singletons, hence C_q,∞(m1, m2) = K(m1, m2).

I Proof of proposition 7
In this appendix, we provide a proof that the degree of conflict C_spe,∞ defined as

C_spe,∞(m1, m2) = 1 − d_spe,∞(m1∩2, m∅)

coincides with Dempster's degree of conflict κ.

Proof.
By definition, we have

d_spe,∞(m1∩2, m∅) = max_{A,B ⊆ Ω} |S1∩2(A, B) − S∅(A, B)|   (90)
 = max_{A ⊆ Ω} ‖m1∩2|A − m∅‖_∞.   (91)

For any A ⊆ Ω, we have

‖m1∩2|A − m∅‖_∞ = max{ 1 − m1∩2|A(∅) ; max_{E ⊆ Ω, E ≠ ∅} m1∩2|A(E) }.   (92)

For any E ≠ ∅, we have

m1∩2|A(E) ≤ Σ_{E′ ⊆ Ω, E′ ≠ ∅} m1∩2|A(E′)   (93)
 ≤ 1 − m1∩2|A(∅).   (94)

We deduce that ‖m1∩2|A − m∅‖_∞ = 1 − m1∩2|A(∅). Finally, Dempster's degree of conflict can only grow as one performs a conjunctive combination, therefore max_{A ⊆ Ω} (1 − m1∩2|A(∅)) = 1 − m1∩2(∅), hence C_spe,∞(m1, m2) = κ(m1, m2).

J Proof of proposition 8
In this appendix, we give a proof that if a degree of conflict C is defined as

C(m1, m2) = 1 − d(m1∩2, m∅),

where d is a mass function distance consistent with ∩, then C satisfies property (iii) (from [11]) w.r.t. the Dempsterian partial order ⊑_d.

Proof.
By definition of the Dempsterian partial order, m1 ⊑_d m1′ means that there exists a mass function m such that m1 = m1′ ∩ m. If d is consistent w.r.t. ∩, then for any mass function m2, we have

d(m1′ ∩ m2 ∩ m, m∅ ∩ m) ≤ d(m1′ ∩ m2, m∅)   (95)
 ⇔ d(m1 ∩ m2, m∅) ≤ d(m1′ ∩ m2, m∅)   (96)
 ⇔ C(m1, m2) ≥ C(m1′, m2),   (97)

where we used m1′ ∩ m = m1 and m∅ ∩ m = m∅.

K Consistency of distances with α-junctions

In [26], two families of mass function distances are introduced. Each of them relies on a given type of evidential matrix and a matrix norm. The evidential matrices in question are either an α-specialization matrix or an α-generalization matrix. These matrices are a generalization of the specialization matrix and the generalization matrix, in the sense that these two matrices are retrieved by setting α = 1. The definition of these more general matrices stems from a class of combination rules known as α-junctions [41].

In short, α-junctions are linear combination rules that do not depend on the order in which mass functions are combined. The axiomatic justification of these properties is detailed in [41]. These rules also have a meta-data dependent interpretation. These meta-data characterize the truthfulness of the sources that induced the mass functions. A source is truthful if it conveys the pieces of information it possesses, and it is untruthful if it conveys inconsistent pieces of information as compared to the ones it possesses. For example, suppose a source has inferred that {θ ∈ A}. If it is truthful, it conveys the mass function m_A, while it conveys m_{A^c} if it is untruthful. The α-junctions allow to combine mass functions in several situations ranging between these two extreme cases.
This interpretation is documented in [34, 33, 26, 21].

Concerning evidential matrices, the most important point for the present discussion is that, for each α ∈ [0; 1], there is a bijective correspondence between a given mass function m_i and an α-specialization matrix D_i^(α). Another bijective correspondence also exists between a given mass function m_i and an α-generalization matrix G_i^(α). The main result in [26] states that, for any α ∈ [0; 1], the distance induced by the L^1 matrix norm of the difference between a pair of α-specialization matrices is consistent with the α-conjunctive rule. Likewise, for any α ∈ [0; 1], the distance induced by the L^1 matrix norm of the difference between a pair of α-generalization matrices is consistent with the α-disjunctive rule.

Evidential matrices are not the only representation of states of belief induced by α-junctions. One can also define α-commonality functions [41] q_i^(α). Let Q^(α) denote the matrix obtained by n Kronecker products of an elementary 2 × 2 matrix K^(α), with entries among 1, α and α − 1, which reduces to the elementary commonality matrix when α = 1:

Q^(α) = Kron(K^(α), . . . , Kron(K^(α), K^(α))).   (98)

The vector form of function q_i^(α) is obtained as

q_i^(α) = Q^(α) · m_i,   (99)

with m_i the vector form of some mass function m_i. There is a bijective correspondence between α-commonality functions and mass functions, and the α-commonality function in correspondence with the result of an α-conjunctive combination is equal to the entrywise product of the α-commonality functions in correspondence with the combined mass functions. Using this property, the same reasoning as in the proof of proposition 1 applies and any L^k norm based distance between α-commonality functions is consistent with the corresponding α-conjunctive rule. For the proof to hold, one also needs that |q_i^(α)(B)| ≤ 1 for all B ⊆ Ω. Looking at equation (98), we actually have

α − 1 ≤ q_i^(α)(B) ≤ 1, ∀B ⊆ Ω.   (100)

Similarly, Smets [41] also introduces α-implicability functions b_i^(α). Let B^(α) denote the matrix obtained by n Kronecker products of an elementary 2 × 2 matrix L^(α), with entries among 1, α and α − 1 as well, which reduces to the elementary implicability matrix when α = 1:

B^(α) = Kron(L^(α), . . . , Kron(L^(α), L^(α))).   (101)

The vector form of function b_i^(α) is obtained as

b_i^(α) = B^(α) · m_i,   (102)

with m_i the vector form of some mass function m_i. There is a bijective correspondence between α-implicability functions and mass functions, and the α-implicability function in correspondence with the result of an α-disjunctive combination is equal to the entrywise product of the α-implicability functions in correspondence with the combined mass functions. Using this property, the same reasoning as in the proof of proposition 3 applies and any L^k norm based distance between α-implicability functions is consistent with the corresponding α-disjunctive rule. For the proof to hold, one also needs that |b_i^(α)(B)| ≤ 1 for all B ⊆ Ω. Again, from equation (101), we actually have

α − 1 ≤ b_i^(α)(B) ≤ 1, ∀B ⊆ Ω.   (103)

Computational difficulties are found when n increases for computing α-commonality and α-implicability functions, but this is also true for computing α-specialization or α-generalization matrices; therefore all the distances evoked in this appendix are on an equal footing from both theoretical and practical considerations.

References

[1] T. Burger. Geometric views on conflicting mass functions: From distances to angles.
International Journal of Approximate Reasoning, 70:36–50, 2016.

[2] T. Burger, S. Destercke, et al. How to randomly generate mass functions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, pages 645–673, 2013.

[3] I. Couso and D. Dubois. Statistical reasoning with set-valued information: Ontic vs. epistemic views. International Journal of Approximate Reasoning, 55(7):1502–1518, 2014. Special issue: Harnessing the information contained in low-quality data sources.

[4] I. Couso, D. Dubois, and L. Sánchez. Random sets and random fuzzy sets as ill-perceived random variables. In SpringerBriefs in Computational Intelligence. Springer, 2014.

[5] F. Cuzzolin. A geometric approach to the theory of evidence. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 38(4):522–534, July 2008.

[6] F. Cuzzolin. Geometric conditioning of belief functions. In Proceedings of BELIEF 2010, International Workshop on the Theory of Belief Functions, pages 1–6, Brest, France, 2010.

[7] F. Cuzzolin. Visions of a generalized probability theory. Lambert Academic Publishing, London, UK, 2014.

[8] A. P. Dempster. Upper and lower probabilities induced by a multivalued mapping. Annals of Mathematical Statistics, 38(2):325–339, 1967.

[9] T. Denœux. Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies of evidence. Artificial Intelligence, 172(2-3):234–264, February 2008.

[10] T. Denœux. Likelihood-based belief function: Justification and some extensions to low-quality data. International Journal of Approximate Reasoning, 55(7):1535–1547, 2014. Special issue: Harnessing the information contained in low-quality data sources.

[11] S. Destercke and T. Burger. Toward an axiomatic definition of conflict between belief functions. IEEE Transactions on Cybernetics, 43(2):585–596, 2013.

[12] J. Diaz, M. Rifqi, and B. Bouchon-Meunier. A similarity measure between basic belief assignments. In Int. Conf. on Information Fusion (FUSION'06), pages 1–6, Florence, Italy, July 2006.

[13] D. Dubois, W. Liu, J. Ma, and H. Prade. The basic principles of uncertain information fusion. An organised review of merging rules in different representation frameworks. Information Fusion, 32:12–39, 2016.

[14] D. Dubois and H. Prade. A set-theoretic view of belief functions: logical operations and approximations by fuzzy sets. Int. Journal of General Systems, 12(3):193–226, 1986.

[15] M. Fréchet. Sur quelques points du calcul fonctionnel. PhD thesis, Faculté des Sciences de Paris, 1906.

[16] P. Jaccard. Distribution de la flore alpine dans le bassin des Dranses et dans quelques régions voisines. Bulletin de la Société Vaudoise des Sciences Naturelles, 37:241–272, 1901.

[17] A.-L. Jousselme, D. Grenier, and É. Bossé. A new distance between two bodies of evidence. Information Fusion, 2:91–101, 2001.

[18] A.-L. Jousselme and P. Maupin. Distances in evidence theory: Comprehensive survey and generalizations. International Journal of Approximate Reasoning, 53(2):118–145, 2012. Theory of Belief Functions (BELIEF 2010).

[19] R. Kennes and P. Smets. Computational aspects of the Möbius transform. arXiv preprint arXiv:1304.1122, 2013.

[20] J. Klein, S. Destercke, and O. Colot. Interpreting evidential distances by connecting them to partial orders: Application to belief function approximation. International Journal of Approximate Reasoning, 71:15–33, 2016.

[21] J. Klein, M. Loudahi, J.-M. Vannobel, and O. Colot. α-junctions of categorical mass functions. In F. Cuzzolin, editor, Belief Functions: Theory and Applications, volume 8764 of Lecture Notes in Computer Science, pages 1–10. Springer International Publishing, 2014.

[22] M.-J. Lesot, M. Rifqi, and H. Benhadda. Similarity measures for binary and numerical data: a survey. International Journal of Knowledge Engineering and Soft Data Paradigms, 1(1):63–84, 2008.

[23] W. Liu. Analyzing the degree of conflict among belief functions. Artificial Intelligence, 170(11):909–924, 2006.

[24] M. Loudahi, J. Klein, J.-M. Vannobel, and O. Colot. Fast computation of Lp norm-based specialization distances between bodies of evidence. In F. Cuzzolin, editor, Belief Functions: Theory and Applications, volume 8764 of
Lecture Notes in Computer Science , pages 422–431. SpringerInternational Publishing, 2014.[25] M. Loudahi, J. Klein, J.-M. Vannobel, and O. Colot. New distances be-tween bodies of evidence based on dempsterian specialization matrices39nd their consistency with the conjunctive combination rule.
Interna-tional Journal of Approximate Reasoning , 55(5):1093 – 1112, 2014.[26] M. Loudahi, J. Klein, J.-M. Vannobel, and O. Colot. Evidential matrixmetrics as distances between meta-data dependent bodies of evidence.
Cybernetics, IEEE Transactions on , 46(1):109–122, Jan 2016.[27] A. Martin, A.-L. Jouselme, and C. Osswald. Conflict measure for thediscounting operation on belief functions. In
Proceedings of the EleventhInternational Conference on Information Fusion , pages 1–8, 2008.[28] G. Math´eron.
Random sets and integral geometry . Wiley New York,1975.[29] E. Miranda, I. Couso, and P. Gil. Random sets as imprecise random vari-ables.
Journal of Mathematical Analysis and Applications , 307(1):32–47,2005.[30] I. S. Molchanov.
Theory of random sets , volume 19. Springer, 2005.[31] H. T. Nguyen. An introduction to random sets. 2006.
Chap-man&Hall/CRC, Boca Raton, FL .[32] H. T. Nguyen. On random sets and belief functions.
Journal of Mathe-matical Analysis and Applications , 65(3):531 – 542, 1978.[33] F. Pichon. On the α -conjunctions for combining belief functions. InT. Denoeux and M.-H. Masson, editors, Belief Functions: Theory andApplications , volume 164 of
Advances in Intelligent and Soft Computing ,pages 285–292. Springer Berlin Heidelberg, 2012.[34] F. Pichon and T. Denoeux. Interpretation and computation of α -junctions for combining belief functions. In , Durham,U.K., 2009.[35] F. Pichon, A.-L. Jousselme, and N. B. Abdallah. Several shades ofconflict. Fuzzy Sets and Systems , 366:63 – 84, 2019.[36] B. Schmelzer. Characterizing joint distributions of random sets by mul-tivariate capacities.
International Journal of Approximate Reasoning ,53(8):1228–1247, 2012. 4037] B. Schmelzer. Joint distributions of random sets and their relation tocopulas.
International Journal of Approximate Reasoning , 65:59–69,2015.[38] B. Schmelzer. Sklar’s theorem for minitive belief functions.
InternationalJournal of Approximate Reasoning , 63:48–61, 2015.[39] B. Schmelzer. Multivariate capacity functionals vs. capacity functionalson product spaces.
Fuzzy Sets and Systems , 364:1–35, 2019.[40] G. Shafer.
A Mathematical Theory of Evidence . Princeton Universitypress, Princeton (NJ), USA, 1976.[41] P. Smets. The alpha-junctions: Combination operators applicable to be-lief functions. In
First Int. Joint Conference on Qualitative and Quan-titative Practical Reasoning (ECSQUARU-FAPR’97) , volume 1244 of
Lecture Notes in Computer Sciences , pages 131–153. Springer Interna-tional Publishing, Bad Honef, Germany, 1997.[42] P. Smets. The application of the matrix calculus to belief functions.
International Journal of Approximate Reasoning , 31(12):1 – 30, 2002.[43] P. Smets. Belief functions on real numbers.
International Journal ofApproximate Reasoning , 40:181–223, 2005.[44] P. Smets and R. Kennes. The transferable belief model.
Artificial Intel-ligence , 66(2):191–234, 1994.[45] B. Tessem. Approximations for efficient computation in the theory ofevidence.
Artificial Intelligence , 61(2):315 – 329, 1993.[46] R. R. Yager. The entailment principle for dempstershafer granules.
In-ternational Journal of Intelligent Systems , 1(4):247–262, 1986.[47] L. M. Zouhal and T. Denœux. An evidence-theoretic k-nn rule withparameter optimisation.