Information Leakage in Zero-Error Source Coding: A Graph-Theoretic Perspective
Yucheng Liu, Lawrence Ong, Sarah Johnson, Joerg Kliewer, Parastoo Sadeghi, Phee Lep Yeoh
Yucheng Liu†, Lawrence Ong†, Sarah Johnson†, Joerg Kliewer∗, Parastoo Sadeghi‡, and Phee Lep Yeoh§
†The University of Newcastle, Australia (emails: {yucheng.liu, lawrence.ong, sarah.johnson}@newcastle.edu.au)
∗New Jersey Institute of Technology, USA (email: [email protected])
‡University of New South Wales, Australia (email: [email protected])
§University of Sydney, Australia (email: [email protected])
Abstract—We study the information leakage to a guessing adversary in zero-error source coding. The source coding problem is defined by a confusion graph capturing the distinguishability between source symbols. The information leakage is measured by the ratio of the adversary's successful guessing probability after and before eavesdropping the codeword, maximized over all possible source distributions. This measure under the basic adversarial model, where the adversary makes a single guess and tolerates no distortion between its estimate and the true sequence, is known in the literature as the maximum min-entropy leakage or the maximal leakage. We develop a single-letter characterization of the optimal normalized leakage under the basic adversarial model, together with an optimum-achieving scalar stochastic mapping scheme. An interesting observation is that the optimal normalized leakage equals the optimal compression rate with fixed-length source codes, and both can be achieved simultaneously by some deterministic coding schemes. We then extend the leakage measure to generalized adversarial models in which the adversary makes multiple guesses and tolerates a certain level of distortion, and derive single-letter lower and upper bounds.
I. INTRODUCTION
We study the fundamental limits of information leakage in zero-error source coding from a graph-theoretic perspective. Source coding [1] compresses an information source to represent data with fewer bits by mapping multiple source sequences to the same codeword. Suppose we observe a source X and wish to transmit a compressed version of the source to a legitimate receiver. From the receiver's perspective, some source symbols are to be distinguished and some are not. We say two source symbols/sequences are distinguishable if they are to be distinguished by the receiver. For successful decoding, any distinguishable source sequences must not be mapped to the same codeword. The distinguishability relationship is characterized by the confusion graph Γ for the source. Such a graph-theoretic model has various real-world applications. Consider the toy example in Figure 1(a), where X denotes the water level of a reservoir, and a supervisor only needs to know whether the water level is relatively high or low to determine whether a refill is needed.

This work was supported by the ARC Discovery Scheme DP190100770; the US National Science Foundation Grant CNS-1815322; and the ARC Future Fellowship FT190100429.
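The toy example above can be made concrete with a short sketch. The code below is only an illustration (symbol names taken from Figure 1(a); it is not part of the paper's formal development): it builds the confusion graph of the reservoir example and enumerates its independent sets by brute force.

```python
from itertools import combinations

# Confusion graph for the reservoir example (Figure 1(a)): an edge joins
# two water levels iff the supervisor must distinguish them.
V = ["VH", "H", "VL", "L"]
E = {frozenset(e) for e in [("VH", "VL"), ("VH", "L"), ("H", "VL"), ("H", "L")]}

def independent(S):
    # No two members of S are adjacent in the confusion graph.
    return all(frozenset(p) not in E for p in combinations(S, 2))

indep = [set(S) for r in range(1, len(V) + 1)
         for S in combinations(V, r) if independent(S)]
maximal = [S for S in indep if not any(S < T for T in indep)]
print(maximal)  # the two classes {VH, H} and {VL, L}
```

The two maximal independent sets, {VH, H} and {VL, L}, are exactly the groups of symbols that a valid code may merge into a single codeword.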
Figure 1. (a) From the supervisor's perspective, symbols VH (very high) and H (high) are indistinguishable (i.e., need not be distinguished), and so are symbols VL (very low) and L (low). We draw an edge between any two distinguishable symbols, and then, to satisfy the supervisor, we can only map non-adjacent symbols to the same codeword. (b) An adversary eavesdrops the codeword, based upon which it guesses the exact water level (e.g., upon observing "blue": is x = VH or H?; upon observing "red": is x = VL or L?).

The source coding model we consider was originally introduced by Körner [2], where a vanishing error probability is allowed and the resulting optimal compression rate is defined as the graph entropy of the confusion graph. More recently, Wang and Shayevitz [3] analyzed the joint source-channel coding problem based on the same zero-error graph-theoretic setting as our source coding model. Suppose that the transmitted codeword is eavesdropped by a guessing adversary, who knows the source distribution P_X and tries to guess the true source sequence via maximum likelihood estimation within a certain number of trials. See Figure 1(b) for an example. Before observing the codeword, the adversary will guess the most likely water level among all four levels. After observing the codeword, say "blue", it will guess the more likely water level between VH and H. Compared with guessing blindly (i.e., based only on P_X), the average successful guessing probability will increase as the adversary eavesdrops the codeword. We measure the information leakage from the codeword to the adversary by such a probability increase.
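This probability increase can be checked numerically. The sketch below assumes, purely for illustration, a uniform source over the four levels and the deterministic code {VH, H} → "blue", {VL, L} → "red" from Figure 1.

```python
# Numeric sketch of the eavesdropper in Figure 1(b); the uniform source
# distribution and the blue/red deterministic code are assumptions.
P_X = {"VH": 0.25, "H": 0.25, "VL": 0.25, "L": 0.25}
code = {"VH": "blue", "H": "blue", "VL": "red", "L": "red"}

# Blind guess: pick the a-priori most likely level.
p_blind = max(P_X.values())                                  # 0.25

# After eavesdropping: for each codeword, guess the most likely level
# among the symbols mapped to it, then average over codewords.
p_after = sum(max(P_X[x] for x in code if code[x] == y)
              for y in set(code.values()))                   # 0.25 + 0.25
print(p_after / p_blind)   # 2.0: the success probability doubles
```

The factor of 2 here is the (unlogged) leakage of this particular code; Section III shows it is also the best possible for this confusion graph.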
More specifically, the leakage is quantified as the ratio between the adversary's probability of successful guessing after and before observing the codeword. This way of measuring information leakage was originally introduced by Smith [4], leading to the leakage metric commonly referred to as min-entropy leakage. Quite often in practice, the compression scheme is designed without knowing the exact source distribution. In such a case, one can consider the worst-case leakage, which is the information leakage maximized over all possible source distributions P_X over the fixed alphabet X. The worst-case variant of min-entropy leakage, namely the maximum min-entropy leakage, was developed by Braun et al. [5]. A similar idea was independently explored by Issa et al. [6], [7] in a different setup where the adversary is interested in guessing some randomized function U of X rather than X itself. The worst-case metric under such a scenario is named the maximal leakage. Interestingly, despite their different operational meanings, the maximal leakage and the maximum min-entropy leakage turn out to be equal. For more works studying the maximal leakage or the maximum min-entropy leakage and their variants from both the information-theoretic and computer science perspectives, see [8]–[17]. In another related work by Shkel and Poor [18], leakage in compression systems has been studied under multiple leakage metrics, including the maximal leakage, assuming that the source code must be deterministic yet a random secret key is shared between the sender and the receiver. Clearly, we wish to keep the information leakage as small as possible by smartly designing a (possibly stochastic) source coding scheme. Therefore, our fundamental objective is to characterize the minimum leakage (normalized to the source sequence length) under the zero-error decoding requirement, together with the optimum-achieving mapping scheme.

Contributions and organization:
In Section II, we detail the problem of information leakage in source coding. In particular, we start with the basic adversarial model where the adversary makes a single guess and tolerates no distortion, and thus the resulting privacy metric is the normalized version of the maximal leakage [7] or the normalized maximum min-entropy leakage [5]. Our main contributions are as follows:
1) In Section III, we develop a single-letter characterization of the optimal normalized maximal leakage for the basic adversarial model. We also design a scalar stochastic mapping scheme that achieves this optimum. An interesting observation is that the optimal leakage can also be achieved using deterministic codes that simultaneously achieve the optimal fixed-length zero-error compression rate.
2) In Section IV, we extend our adversarial model to allow multiple guesses and distortion between an estimate (guess) and the true sequence. Inspired by the notion of confusion graphs, we characterize the relationship between a sequence and its acceptable estimates by another graph defined on the source alphabet, resulting in a novel leakage measure.
3) We then show that the optimal normalized leakage under the generalized models is always upper-bounded by the result in the original setup. Single-letter lower bounds (i.e., converse results) are also established.
We also include a brief review of basic graph-theoretic definitions in Appendix A.

Notation:
For non-negative integers a and b, [a] denotes the set {1, 2, . . . , a}, and [a : b] denotes the set {a, a + 1, . . . , b}. If a > b, then [a : b] = ∅. For a finite set A, |A| denotes its cardinality. For two sets A and B, A × B denotes their
[Footnote: When no distortion is allowed, the adversary must guess the actual source sequence to be considered successful.] [Footnote: The normalized version is appropriate as we compress a source sequence.]
Cartesian product. For a sequence of sets A_1, A_2, . . . , A_t, we may simply use ∏_{j∈[t]} A_j to denote their Cartesian product. For any discrete random variable Z with probability distribution P_Z, we denote its alphabet by Z with realizations z ∈ Z. For any K ⊆ Z, P_Z(K) := Σ_{z∈K} P_Z(z).

II. SYSTEM MODEL AND PROBLEM FORMULATION
Source coding with confusion graph Γ: Consider a discrete memoryless stationary information source X that takes values in the alphabet X with full support. We wish to stochastically compress a source sequence X^t := (X_1, X_2, . . . , X_t) to some codeword Y that takes values in the code alphabet Y and transmit it to a legitimate receiver via a noiseless channel. The randomized mapping scheme from X^t to Y is denoted by the conditional distribution P_{Y|X^t}. To the receiver, the distinguishability relationship among source symbols is characterized by a confusion graph Γ, where the vertex set is the source alphabet, i.e., V(Γ) = X, and any two symbols x, x′ ∈ X are adjacent in Γ, i.e., {x, x′} ∈ E(Γ), iff they are distinguishable from each other. Any two source sequences x^t = (x_1, . . . , x_t) ∈ X^t and v^t = (v_1, . . . , v_t) ∈ X^t are distinguishable iff for some j ∈ [t], x_j and v_j are distinguishable. Therefore, the distinguishability among source sequences of length t is characterized by the confusion graph Γ^t, which is defined as the t-th power of Γ with respect to the OR (disjunctive) graph product [19, Section 3.4]: Γ^t = Γ ∨ Γ ∨ · · · ∨ Γ = Γ^{∨t}. To ensure zero-error decoding, any two source sequences that can potentially be mapped to the same codeword must not be distinguishable. More formally, given some P_{Y|X^t}, let

X^t_{P_{Y|X^t}}(y) := {x^t ∈ X^t : P_{Y|X^t}(y|x^t) > 0}   (1)

denote the set of all x^t mapped to y with nonzero probability. When there is no ambiguity, we simply denote X^t_{P_{Y|X^t}}(y) by X^t(y). Therefore, a mapping scheme P_{Y|X^t} is valid iff

X^t(y) ∈ I(Γ^t), ∀y ∈ Y,   (2)

where I(·) denotes the set of independent sets of a graph (cf. Appendix A).

Leakage to a guessing adversary:
As a starting point, we assume that the adversary makes a single guess after observing each codeword and tolerates no distortion between its estimated sequence and the true source sequence. Consider any source coding problem Γ. The maximal leakage for a given sequence length t and a given valid mapping P_{Y|X^t} is defined as

L^t(P_{Y|X^t}) := log max_{P_X} [ E_Y[ max_{x^t∈X^t} P_{X^t|Y}(x^t|Y) ] / max_{x^t∈X^t} P_{X^t}(x^t) ]   (3)
= log max_{P_X} [ Σ_{y∈Y} max_{x^t∈X^t} P_{X^t,Y}(x^t, y) / max_{x^t∈X^t} P_{X^t}(x^t) ]   (4)
= log Σ_{y∈Y} max_{x^t∈X^t} P_{Y|X^t}(y|x^t),   (5)

where (5) follows from [5, Proposition 5.1]. The optimal maximal leakage for a given t is then defined as

L^t := inf_{P_{Y|X^t}: X^t(y)∈I(Γ^t), ∀y∈Y} L^t(P_{Y|X^t}),   (6)

based upon which we can define the (optimal) maximal leakage rate as

L := lim_{t→∞} t^{−1} L^t.   (7)

[Footnote: When there is no ambiguity, instead of saying a zero-error source coding problem with confusion graph Γ, we just say a source coding problem Γ.] [Footnote: Note that we have adopted the name maximal leakage [7], which is equivalent to the maximum min-entropy leakage [5].] [Footnote: For notational brevity, we drop the reference to Γ, noting that all leakage measures defined in this paper depend on Γ.]

III. MAXIMAL LEAKAGE RATE: CHARACTERIZATION
In the following, we present a single-letter characterization of the maximal leakage rate L.

Theorem 1:
For any source coding problem Γ,

L = log χ_f(Γ),   (8)

where χ_f(·) denotes the fractional chromatic number of a graph (cf. Definition 4).

To prove Theorem 1, we introduce several useful lemmas. We first show that, given any mapping scheme, "merging" any two codewords does not increase the leakage (as long as the resulting mapping is still valid). More precisely, consider any sequence length t and any valid mapping P_{Y|X^t} such that there exist some mergeable codewords y_1, y_2 ∈ Y, y_1 ≠ y_2, satisfying X^t(y_1) ∪ X^t(y_2) ⊆ T for some T ∈ I_max(Γ^t), where I_max(·) denotes the set of maximal independent sets of a graph (cf. Appendix A). Construct P_{Y_{1,2}|X^t} by merging y_1 and y_2 into a new codeword y_{1,2} ∉ Y. That is, Y_{1,2} = (Y \ {y_1, y_2}) ∪ {y_{1,2}}, and for any x^t ∈ X^t,

P_{Y_{1,2}|X^t}(y|x^t) = { P_{Y|X^t}(y_1|x^t) + P_{Y|X^t}(y_2|x^t), if y = y_{1,2}; P_{Y|X^t}(y|x^t), otherwise. }   (9)

Then we have the following result.

Lemma 1: L^t(P_{Y_{1,2}|X^t}) ≤ L^t(P_{Y|X^t}).

Proof:
It suffices to show that max_{x^t∈X^t} P_{Y_{1,2}|X^t}(y_{1,2}|x^t) is no larger than Σ_{y∈{y_1,y_2}} max_{x^t∈X^t} P_{Y|X^t}(y|x^t), as

max_{x^t∈X^t} P_{Y_{1,2}|X^t}(y_{1,2}|x^t)
= max_{x^t∈X^t} ( P_{Y|X^t}(y_1|x^t) + P_{Y|X^t}(y_2|x^t) )
≤ max_{x^t∈X^t} P_{Y|X^t}(y_1|x^t) + max_{x^t∈X^t} P_{Y|X^t}(y_2|x^t)
= Σ_{y∈{y_1,y_2}} max_{x^t∈X^t} P_{Y|X^t}(y|x^t),

which completes the proof of the lemma.

As specified in (2), for a valid mapping scheme, every codeword y should correspond to an independent set of the confusion graph Γ^t. As a consequence of Lemma 1, to characterize the optimal leakage, it suffices to consider only those mapping schemes for which all codewords y correspond to distinct maximal independent sets of Γ^t. To formalize this observation, for any sequence length t, define the distortion function d^t : X^t × I_max(Γ^t) → {0, 1} such that for any x^t ∈ X^t and T ∈ I_max(Γ^t),

d(x^t, T) = { 0, if x^t ∈ T; 1, if x^t ∉ T. }   (10)

Then the lemma below holds, whose proof is presented in Appendix B.

Lemma 2:
To characterize L^t defined in (6), it suffices to assume that the mapping P_{Y|X^t} satisfies Y = I_max(Γ^t) and d(X^t, Y) = 0 almost surely. Thus by (5), we have

L^t = inf_{P_{Y|X^t}: Y=I_max(Γ^t), d(X^t,Y)=0} log Σ_{y∈Y} max_{x^t∈X^t} P_{Y|X^t}(y|x^t).   (11)

The solution to the optimization problem on the right-hand side of (11) in Lemma 2 is characterized by [20, Corollary 1], based upon which we have the following result.

Lemma 3: L^t = −log η, where η is the solution to the following maximin problem:

maximize  min_{x^t∈X^t} Σ_{T∈I_max(Γ^t): x^t∈T} κ_T,   (12a)
subject to  Σ_{T∈I_max(Γ^t)} κ_T = 1,   (12b)
κ_T ∈ [0, 1], ∀T ∈ I_max(Γ^t).   (12c)

On the other hand, for any t, χ_f(Γ^t) is the solution to the following linear program [21, Section 2.2]:

minimize  Σ_{T∈I_max(Γ^t)} λ_T,   (13a)
subject to  Σ_{T∈I_max(Γ^t): x^t∈T} λ_T ≥ 1, ∀x^t ∈ X^t,   (13b)
λ_T ∈ [0, 1], ∀T ∈ I_max(Γ^t).   (13c)

We can show that the solutions to the optimization problems (12) and (13) are reciprocals of each other. That is,

η = 1/χ_f(Γ^t),   (14)

whose proof is presented in Appendix C. The remaining proof of Theorem 1 follows easily from the above results.

Proof of Theorem 1:
We have

L (a)= lim_{t→∞} (1/t)(−log η) (b)= lim_{t→∞} (1/t) log χ_f(Γ^t) (c)= log χ_f(Γ),

where (a) follows from (7) and Lemma 3, (b) follows from (14), and (c) follows from the fact that χ_f(Γ^t) = χ_f(Γ^{∨t}) = χ_f(Γ)^t (cf. [19, Corollary 3.4.2]).

Having characterized the optimal maximal leakage rate L in Theorem 1, in the following we design an optimal mapping scheme P_{Y|X^t} for some t that achieves L, based on an optimal fractional coloring of the confusion graph Γ. Fix the sequence length t = 1. For Γ^1 = Γ, there always exists some b-fold coloring P = {T_1, T_2, . . . , T_m} for some finite positive integer b such that χ_f(Γ) = m/b (cf. Definitions 3 and 4; see also [19, Corollary 1.3.2 and Section 3.1]). Set Y = P (and thus every codeword y ∈ Y is actually an independent set of Γ). Set

P_{Y|X}(y|x) = { 1/b, if x ∈ y; 0, otherwise. }   (15)

As every x ∈ X is in exactly b sets within P, we have

Σ_{y∈Y} P_{Y|X}(y|x) = Σ_{y∈Y: x∈y} 1/b + Σ_{y∈Y: x∉y} 0 = 1, ∀x ∈ X,

and thus P_{Y|X} is a valid mapping scheme. We have

L^1(P_{Y|X}) = log Σ_{y∈Y} max_{x∈X} P_{Y|X}(y|x) = log Σ_{y∈P} 1/b = log(m/b) = log χ_f(Γ),   (16)

and thus the maximal leakage rate in Theorem 1 is indeed achievable by the mapping described in (15).

[Footnote: By solution we mean the optimal objective value of the problem.]

Remark 1:
Consider any source coding problem Γ. We know that the optimal zero-error compression rate (with fixed-length deterministic source codes) is

R = lim_{t→∞} (1/t) log χ(Γ^{∨t}) = lim_{t→∞} (1/t) log χ_f(Γ^{∨t}) = log χ_f(Γ),

where the second equality follows from [19, Corollary 3.4.3]. We can verify that the above result holds even when we allow stochastic mappings. Hence, the maximal leakage rate L always equals the optimal compression rate R. Moreover, it can be verified that any R-achieving deterministic code can simultaneously achieve L. In other words, when considering fixed-length source coding, there is no trade-off between the compression rate and the leakage rate. Furthermore, we observe the following:
1) Our characterization of L holds generally and does not rely on the assumption of fixed-length coding;
2) While in general the optimal zero-error compression rate R and the maximal leakage rate L can be simultaneously and asymptotically attained in the limit of increasing t, we showed in (16) that L, on the other hand, can be achieved exactly even with t = 1 (using the symbol-by-symbol encoding scheme specified in (15) based on the fractional coloring of Γ), but possibly at the expense of the compression rate;
3) For variable-length source coding, whether there is a compression-leakage trade-off remains unclear.

IV. EXTENSIONS ON THE MAXIMAL LEAKAGE RATE: MULTIPLE AND APPROXIMATE GUESSES
In general, the adversary may be able to make multiple guesses. For example, the adversary may possess a testing mechanism to verify whether a guess is correct, and can thus mount a trial-and-error attack until it is stopped by the system. Also, for each true source sequence, there may be multiple estimates other than the true sequence itself that are "close enough" and thus can be regarded as successful. We generalize our definition of information leakage to cater to the above scenarios. Consider any source coding problem Γ, sequence length t, and valid mapping P_{Y|X^t}. Suppose the adversary generates a set of guesses K ⊆ X^t. For each set K, define a "covering" set K^+, where K ⊆ K^+ ⊆ X^t, such that if the true sequence is in K^+, then the adversary's guess list K is considered successful. Let

S := {K^+ : K is a guess list the adversary can choose}

be the collection of all possible K^+. Then for blind guessing, the success probability is max_{S∈S} Σ_{x^t∈S} P_{X^t}(x^t), and for guessing after observing Y, the average success probability is E_Y[ max_{S∈S} Σ_{x^t∈S} P_{X^t|Y}(x^t|Y) ]. In the same spirit as the maximal leakage, we can define

ρ^t(P_{Y|X^t}, S) := log max_{P_X} [ E_Y[ max_{S∈S} Σ_{x^t∈S} P_{X^t|Y}(x^t|Y) ] / max_{S∈S} Σ_{x^t∈S} P_{X^t}(x^t) ]

as the (logarithmic) ratio between the a posteriori and a priori successful guessing probabilities. If we set S_singleton = {{x^t} : x^t ∈ X^t}, that is, the adversary is allowed one guess and must guess the correct source sequence precisely, the maximal leakage defined in (3) can be equivalently written as L^t(P_{Y|X^t}) = ρ^t(P_{Y|X^t}, S_singleton). In the next three subsections, we study the information leakage rate under different adversarial models.
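For intuition, the generic ratio ρ^t can be approximated numerically for t = 1 by a grid search over source distributions. The sketch below is illustrative only: it assumes the four-level reservoir example (symbols coded 0–3), the deterministic map {0, 1} → blue, {2, 3} → red, and a coarse grid over the probability simplex, and it compares the singleton guess family with the family of all two-element guess lists.

```python
from itertools import combinations, product

# Preimages of the two codewords under the assumed deterministic map.
classes = [{0, 1}, {2, 3}]

def rho_exp(S, step=20):
    # Returns 2**rho: sup over P_X (approximated on a grid) of the ratio
    # between a-posteriori and a-priori successful guessing probabilities.
    best = 0.0
    for k in product(range(step + 1), repeat=4):
        if sum(k) != step:
            continue                      # keep only points on the simplex
        p = [ki / step for ki in k]
        # numerator: sum over codewords of the best guess-list mass in class
        num = sum(max(sum(p[x] for x in K if x in c) for K in S)
                  for c in classes)
        # denominator: best blind guess-list mass
        den = max(sum(p[x] for x in K) for K in S)
        best = max(best, num / den)
    return best

singles = [{x} for x in range(4)]
pairs = [set(K) for K in combinations(range(4), 2)]
print(rho_exp(singles), rho_exp(pairs))   # multi-guess ratio never exceeds single-guess
```

On this example both ratios equal 2, consistent with Lemma 4's L_g ≤ L (here with equality).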
A. Leakage for the Case of Multiple Guesses
In this subsection, we consider the case where the adversary makes multiple guesses, yet tolerates no distortion between its estimates and the true sequence. We characterize the number of guesses the adversary can make by a guessing capability function g(t), where t ∈ Z_+ is the sequence length. We assume g(t) to be positive, integer-valued, non-decreasing, and upper-bounded by α(Γ^t) = α(Γ^{∨t}) = α(Γ)^t, where α(·) denotes the independence number of a graph (cf. Appendix A). Consider any source coding problem Γ and any guessing capability function g. For a given sequence length t and a given valid mapping P_{Y|X^t}, the maximal leakage naturally extends to the multi-guess maximal leakage, defined as

L_g^t(P_{Y|X^t}) := ρ^t(P_{Y|X^t}, S_g),   (17)

where S_g = {K ⊆ X^t : |K| = g(t)}. Then we can define the (optimal) multi-guess maximal leakage rate as

L_g := lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y)∈I(Γ^t), ∀y∈Y} L_g^t(P_{Y|X^t}).   (18)

[Footnote: Suppose for some t we have g(t) ≥ α(Γ^t). Then upon observing any codeword y, the adversary can always determine the true source value by exhaustively guessing all possible x^t ∈ X^t(y), as |X^t(y)| ≤ α(Γ^t).]

We first show that the multi-guess maximal leakage rate is always no larger than the maximal leakage rate.

Lemma 4:
For any source coding problem Γ and guessing capability function g, we have L_g ≤ L.

Proof:
It suffices to show L_g^t(P_{Y|X^t}) ≤ L^t(P_{Y|X^t}) for any t and P_{Y|X^t}. For any P_X, we have

Σ_{y∈Y} max_{K⊆X^t: |K|=g(t)} Σ_{x^t∈K} P_{X^t,Y}(x^t, y) / max_{K⊆X^t: |K|=g(t)} Σ_{x^t∈K} P_{X^t}(x^t)
≤ Σ_{y∈Y} ( max_{K⊆X^t: |K|=g(t)} Σ_{x^t∈K} P_{X^t}(x^t) )( max_{x̃^t∈X^t} P_{Y|X^t}(y|x̃^t) ) / max_{K⊆X^t: |K|=g(t)} Σ_{x^t∈K} P_{X^t}(x^t)
= Σ_{y∈Y} max_{x^t∈X^t} P_{Y|X^t}(y|x^t),

which implies that L_g^t(P_{Y|X^t}) ≤ L^t(P_{Y|X^t}).

We have the following single-letter lower and upper bounds on L_g, whose proof is presented in Appendix D.

Theorem 2:
We have

log |V(Γ)| − log α(Γ) ≤ L_g ≤ log χ_f(Γ).   (19)

When the adversary guesses some randomized function U of X rather than X itself, the maximal leakage equals its multi-guess extension [7]. It remains to be investigated whether a similar equivalence holds generally in our setup. In the following, we identify one special case where indeed L_g = L and consequently, by Theorem 1, L_g = log χ_f(Γ).

Proposition 1:
Consider any source coding problem Γ. If lim_{t→∞} (1/t) log g(t) = 0, then L_g = L = log χ_f(Γ).

The proof of the above proposition is relegated to Appendix E. Intuitively, the result suggests that when the number of guesses the adversary can make does not grow "fast enough" with respect to t, it makes no difference whether the adversary makes one guess or multiple guesses (in terms of the leakage rates defined in (7) and (18)). As a direct corollary of Theorem 2, the result below shows that L_g = L = log χ_f(Γ) also holds in another specific scenario.

Corollary 1: If Γ is vertex-transitive [19, Section 1.3], then L_g = L = log χ_f(Γ) for any function g.

Proof:
Since Γ is vertex-transitive, by [19, Proposition 3.1.1] we have χ_f(Γ) = |V(Γ)|/α(Γ), which means that the lower and upper bounds in Theorem 2 coincide, thus establishing L_g = log χ_f(Γ) = L.

B. Leakage for the Case of One Approximate Guess
Suppose that the adversary makes only one guess yet tolerates a certain level of distortion between its estimate and the true source value. That is, the guess is regarded as successful as long as the estimate is an acceptable approximation of the true value. Inspired by the notion of the confusion graph Γ, which characterizes the distinguishability among the source symbols, we introduce another graph to characterize the approximation relationship among source symbols (from the adversary's perspective). We call this graph the adversary's approximation graph, or simply the approximation graph, denoted by Θ. The vertex set of Θ is again the source alphabet, i.e., V(Θ) = X, and any two source symbols x ≠ x′ ∈ X are acceptable approximations of each other iff they are adjacent in Θ, i.e., {x, x′} ∈ E(Θ). Given a sequence length t, any two sequences x^t = (x_1, . . . , x_t) and v^t = (v_1, . . . , v_t) are acceptable approximations of each other iff for every j ∈ [t], x_j = v_j or {x_j, v_j} ∈ E(Θ). Hence the approximation graph Θ^t for sequence length t is the t-th power of Θ with respect to the AND graph product [22, Section 5.2]: Θ^t = Θ ∧ Θ ∧ · · · ∧ Θ = Θ^{∧t}. For any vertex x^t ∈ X^t, let N(Θ^t, x^t) denote the neighborhood of x^t within Θ^t, including the vertex x^t itself. That is, N(Θ^t, x^t) = {v^t ∈ X^t : v^t = x^t or {v^t, x^t} ∈ E(Θ^t)}. Consider any source coding problem Γ and any approximation graph Θ. For a given sequence length t and a given valid mapping P_{Y|X^t}, the maximal leakage naturally extends to the approximate-guess maximal leakage, defined as

L_Θ^t(P_{Y|X^t}) := ρ^t(P_{Y|X^t}, S_Θ),   (20)

where S_Θ = {N(Θ^t, x^t) : x^t ∈ X^t}.

[Footnote: While the definition in [19, Section 1.3] is for a hypergraph, it can be readily specialized to a graph, since any graph is a special hypergraph whose every hyperedge is a 2-element set.]
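The AND power and the neighborhoods N(Θ^t, x^t) can be sketched in a few lines. The approximation graph below is a hypothetical choice for the four reservoir levels (coded 0–3), assuming for illustration that a guess within the same high/low pair is close enough.

```python
from itertools import product

# Hypothetical approximation graph Theta: E(Theta) = {{VH,H}, {VL,L}},
# i.e. {{0,1}, {2,3}} with the levels coded as integers.
E_theta = {frozenset((0, 1)), frozenset((2, 3))}

def N(x):
    # Closed neighbourhood of a symbol in Theta (includes x itself).
    return {x} | {v for v in range(4) if frozenset((v, x)) in E_theta}

def N_power(xs):
    # In the AND power Theta^t, v^t approximates x^t iff every coordinate
    # does, so the neighbourhood of a sequence factorises coordinate-wise.
    return set(product(*(N(x) for x in xs)))

print(sorted(N_power((0, 2))))   # [(0, 2), (0, 3), (1, 2), (1, 3)]
```

For this Θ the neighbourhood of a length-t sequence always has 2^t elements, illustrating how the guess sets in S_Θ grow with t.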
Then we can define the (optimal) approximate-guess maximal leakage rate as

L_Θ := lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y)∈I(Γ^t), ∀y∈Y} L_Θ^t(P_{Y|X^t}).   (21)

The approximate-guess maximal leakage rate is always no larger than the maximal leakage rate, as specified in the lemma below, whose proof is similar to that of Lemma 4 and thus omitted.

Lemma 5:
For any source coding problem Γ and approximation graph Θ, we have L_Θ ≤ L.

Before presenting single-letter bounds on L_Θ, we introduce the following graph-theoretic notions. Consider any source coding problem Γ, approximation graph Θ, and sequence length t. For any maximal independent set T ∈ I_max(Γ^t), we define its associated hypergraph (see Appendix A for basic definitions about hypergraphs).

Definition 1 (Associated Hypergraph):
Consider any sequence length t. For any T ∈ I_max(Γ^t), its associated hypergraph H^t(T) is defined by V(H^t(T)) = T and E(H^t(T)) = {E ⊆ T : E ≠ ∅, E = T ∩ N(Θ^t, x^t) for some x^t ∈ X^t}.

The following single-letter lower and upper bounds on L_Θ hold, whose proof is presented in Appendix F.

Theorem 3:
We have

log [ p_f(Θ) / max_{T∈I_max(Γ)} k_f(H^1(T)) ] ≤ L_Θ ≤ log χ_f(Γ),   (22)

where p_f(·) denotes the fractional closed-neighborhood packing number [19, Section 7.4] of a graph and k_f(·) denotes the fractional covering number (cf. Definition 8) of a hypergraph.

Remark 2:
While the lower bound in Theorem 3 takes both Γ and Θ into account, the upper bound depends solely on Γ.

[Footnote: N(Θ^t, x^t) is referred to as the closed neighborhood of x^t in Θ^t in [19], in contrast to the open neighborhood of x^t, which does not include x^t itself.] [Footnote: Note that, for brevity, the dependence of the associated hypergraph on the underlying approximation graph Θ is not shown in the notation H^t(T).]

C. Leakage for the Case of Multiple Approximate Guesses

We consider the most general model so far by allowing the adversary to make multiple guesses after each observation of the codeword, where a guess is regarded as successful as long as the estimated sequence is in the neighborhood of the true source sequence. Consider any source coding problem Γ, approximation graph Θ, and guessing capability function g. Note that we require the function g to be upper-bounded as

g(t) ≤ max_{T∈I_max(Γ^t)} k(H^t(T)), ∀t ∈ Z_+,

where k(·) denotes the covering number (cf. Definition 6) of a hypergraph. For any sequence length t and valid mapping P_{Y|X^t}, the multi-approximate-guess maximal leakage is defined as

L_{Θ,g}^t(P_{Y|X^t}) := ρ^t(P_{Y|X^t}, S_{Θ,g}),   (23)

where S_{Θ,g} = {∪_{x^t∈K} N(Θ^t, x^t) : K ⊆ X^t, |K| = g(t)}. Then we can define the (optimal) multi-approximate-guess maximal leakage rate as

L_{Θ,g} := lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y)∈I(Γ^t), ∀y∈Y} L_{Θ,g}^t(P_{Y|X^t}).   (24)

Once again, following a proof similar to that of Lemma 4, we can show the result below.

Lemma 6:
For any source coding problem Γ, approximation graph Θ, and guessing capability function g, we have L_{Θ,g} ≤ L.

We have the following lower and upper bounds.

Theorem 4:
We have

log [ p_f(Θ) / max_{T∈I_max(Γ)} k_f(H^1(T)) ] ≤ L_{Θ,g} ≤ log χ_f(Γ).   (25)

The proof of the above theorem is given in Appendix G. An interesting observation is that the lower and upper bounds for L_{Θ,g} are exactly the same as those for L_Θ. However, we do not know whether L_{Θ,g} = L_Θ holds in general. As specified in Proposition 1, when the number of guesses the adversary can make does not grow "fast enough" with respect to t, or, more precisely, when lim_{t→∞} (1/t) log g(t) = 0, the multi-guess maximal leakage rate equals the maximal leakage rate. The following proposition states that a similar equivalence holds even when the adversary tolerates approximate guesses.

Proposition 2:
Consider any source coding problem Γ and any approximation graph Θ. For any guessing capability function g such that lim_{t→∞} (1/t) log g(t) = 0, we have L_{Θ,g} = L_Θ.

The proof is similar to that of Proposition 1 and is omitted.

[Footnote: Suppose for some t we have g(t) ≥ max_{T∈I_max(Γ^t)} k(H^t(T)). Upon observing any y, there exists some covering of X^t(y) with no more than max_{T∈I_max(Γ^t)} k(H^t(T)) hyperedges, each corresponding to one unique vertex x^t (cf. Definition 1). The adversary can simply choose these x^t as its estimates, and the probability of successful guessing will be 1.]

APPENDIX A
BASIC GRAPH-THEORETIC NOTIONS
Consider a finite, simple, undirected graph G = (V, E), where V = V(G) is the set of vertices of G and E = E(G) is the set of edges of G, which is a set of 2-element subsets of V. An edge {v_1, v_2} ∈ E(G) means that vertices v_1 and v_2 are adjacent in the graph G. An independent set of G is a subset of vertices T ⊆ V with no edge among them. An independent set T is said to be maximal iff there exists no other independent set in G that is a superset of T. For the graph G, let I(G) denote the collection of its independent sets, and let I_max(G) = {T ∈ I(G) : T ⊈ T′, ∀T′ ∈ I(G) \ {T}} denote the collection of its maximal independent sets. Also, let α(G) denote the independence number of G, i.e., the size of the largest independent set in G. We review the following basic definitions. A multiset is a collection of elements in which each element may occur more than once [23]. The number of times an element occurs in a multiset is called its multiplicity. For example, {a, a, b} is a multiset, where the elements a and b have multiplicities 2 and 1, respectively. The cardinality of a multiset is the sum of the multiplicities of all its elements.

Definition 2 (Coloring and chromatic number, [19]):
Given a graph G = (V, E), a coloring of G is a partition of the vertex set V, P = {T_1, T_2, . . . , T_m}, such that for every j ∈ [m], T_j ∈ I(G). The chromatic number of G, denoted by χ(G), is the smallest integer m such that a coloring P = {T_1, T_2, . . . , T_m} exists for G.

Definition 3 (b-fold coloring and b-fold chromatic number, [19]): Given a graph G = (V, E), a b-fold coloring of G for some positive integer b is a multiset P = {T_1, T_2, . . . , T_m} such that for every j ∈ [m], T_j ∈ I(G), and every vertex v ∈ V is in exactly b sets in P. The b-fold chromatic number of G, denoted by χ_b(G), is the smallest integer m such that a b-fold coloring P = {T_1, T_2, . . . , T_m} exists for G.

Definition 4 (Fractional chromatic number, [19]):
Given a graph G = (V, E), the fractional chromatic number χ_f(G) is defined as

    χ_f(G) = inf_b χ_b(G)/b = lim_{b→∞} χ_b(G)/b,

where the second equality follows from the subadditivity of χ_b(G) in b and Fekete's Lemma [24].

Definition 5 (Hypergraph, [19]):
A hypergraph H consists of a vertex set V(H) and a hyperedge set E(H), which is a family of subsets of V(H). It can be seen that every graph is a special hypergraph whose hyperedges are all of cardinality 2. For the counterparts of Definitions 2-4 for hypergraphs, see the following.

Definition 6 (Covering and covering number, [19]):
Given a hypergraph H = (V, E), a covering of H is a set of its hyperedges, P = {E_1, E_2, ..., E_m} where E_p ∈ E, ∀ p ∈ [m], such that V = ∪_{p ∈ [m]} E_p. The covering number of H, denoted as k(H), is the smallest integer m such that a covering P = {E_1, E_2, ..., E_m} exists for H.

Definition 7 (b-fold covering and b-fold covering number, [19]): Given a hypergraph H = (V, E), a b-fold covering of H is a multiset P = {E_1, E_2, ..., E_m} where E_p ∈ E, ∀ p ∈ [m], such that every v ∈ V is in at least b sets in P. The b-fold covering number of H, denoted as k_b(H), is the smallest integer m such that a b-fold covering P = {E_1, E_2, ..., E_m} exists for H.

Definition 8 (Fractional covering number, [19]):
Given a hypergraph H = (V, E), the fractional covering number k_f(H) is defined as

    k_f(H) = inf_b k_b(H)/b = lim_{b→∞} k_b(H)/b,

where the second equality follows from the subadditivity of k_b(H) in b and Fekete's Lemma [24].

APPENDIX B
PROOF OF LEMMA

Proof:
Consider an arbitrary P_{Y|X^t}. We keep merging any two mergeable codewords until we reach some mapping scheme P_{Y'|X^t} with alphabet Y' such that no two codewords y_1, y_2 ∈ Y' are mergeable. According to Lemma 1, the leakage induced by P_{Y'|X^t} is always no larger than that induced by P_{Y|X^t}. Hence, to prove the lemma, it suffices to show that there exists some mapping scheme Q_{Ỹ|X^t} with code alphabet Ỹ = I_max(Γ^t) such that the leakage induced by Q_{Ỹ|X^t} is no larger than that induced by P_{Y'|X^t}, i.e.,

    L^t(Q_{Ỹ|X^t}) ≤ L^t(P_{Y'|X^t}).   (26)

To show (26), we construct Q_{Ỹ|X^t} as follows. For every codeword y ∈ Y' of the mapping P_{Y'|X^t}, there exists some T ∈ I_max(Γ^t) such that

    X^t_{P_{Y'|X^t}}(y) ⊆ T,  X^t_{P_{Y'|X^t}}(y') ⊈ T, ∀ y' ∈ Y' \ {y},   (27)

since otherwise y and y' would be mergeable. Hence, it can be verified that there exists some Ỹ ⊆ I_max(Γ^t) such that for every y ∈ Y' there exists one and only one T ∈ Ỹ satisfying (27). For every T ∈ Ỹ, let y(T) be the unique codeword in Y' such that X^t_{P_{Y'|X^t}}(y(T)) ⊆ T. Then, for any x^t ∈ X^t, T ∈ Ỹ, set

    Q_{Ỹ|X^t}(T | x^t) = P_{Y'|X^t}(y(T) | x^t).

It can be easily verified that Q_{Ỹ|X^t} is a valid mapping scheme. Then, we have

    L^t(Q_{Ỹ|X^t}) = log Σ_{T ∈ Ỹ} max_{x^t ∈ X^t} Q_{Ỹ|X^t}(T | x^t)
                   = log Σ_{y(T): T ∈ Ỹ} max_{x^t ∈ X^t} P_{Y'|X^t}(y(T) | x^t)
                   = L^t(P_{Y'|X^t}),

which indicates (26), thus completing the proof.

APPENDIX C
PROOF OF (14)

We first prove η ≤ 1/χ_f(Γ^t). As η is the solution to (12), there exists some 0 ≤ κ_T ≤ 1, T ∈ I_max(Γ^t), such that

    min_{x^t ∈ X^t} Σ_{T ∈ I_max(Γ^t): x^t ∈ T} κ_T = η,   (28)

    Σ_{T ∈ I_max(Γ^t)} κ_T = 1.   (29)

Construct λ_T = κ_T/η for every T ∈ I_max(Γ^t). We show that λ_T ∈ [0, 1] for any T ∈ I_max(Γ^t).
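As a concrete aside, the κ-to-λ rescaling above, and the reciprocity η = 1/χ_f claimed in (14), can be sanity-checked on a small example. The sketch below uses the 5-cycle C_5, for which χ_f(C_5) = 5/2; the graph, the brute-force enumeration, and the uniform choice κ_T = 1/5 are illustrative assumptions, not part of the proof, and the check confirms feasibility of the rescaled assignment rather than optimality.

```python
from fractions import Fraction
from itertools import combinations

# 5-cycle C5: vertices 0..4, edges between consecutive vertices.
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def is_independent(s):
    return all(frozenset(p) not in edges for p in combinations(s, 2))

# Brute-force the independent sets and keep the maximal ones.
subsets = [frozenset(s) for r in range(1, n + 1) for s in combinations(range(n), r)]
ind = [s for s in subsets if is_independent(s)]
I_max = [s for s in ind if not any(s < t for t in ind)]
assert len(I_max) == 5  # the five "opposite pairs" {i, i+2 mod 5}

# Uniform weights kappa_T = 1/5 satisfy the normalization (12b);
# the objective (12a) is min over vertices of the covering weight.
kappa = {T: Fraction(1, 5) for T in I_max}
eta = min(sum(kappa[T] for T in I_max if x in T) for x in range(n))

# Rescale: lambda_T = kappa_T / eta gives a feasible fractional coloring (13b).
lam = {T: kappa[T] / eta for T in I_max}
assert all(sum(lam[T] for T in I_max if x in T) >= 1 for x in range(n))
chi_f_bound = sum(lam.values())
print(eta, chi_f_bound)
```

For C_5 this prints 2/5 and 5/2, matching η = 1/χ_f(C_5) for this particular feasible point.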
As κ_T ≥ 0 and it can be easily verified that η > 0, it is obvious that λ_T ≥ 0. In the following we show that λ_T ≤ 1, which is equivalent to showing that κ_T ≤ η, by contradiction as follows. Let x^t_Δ be the vertex in X^t that achieves the minimum in min_{x^t ∈ X^t} Σ_{T ∈ I_max(Γ^t): x^t ∈ T} κ_T. Thus, we have η = Σ_{T ∈ T_Δ} κ_T, where T_Δ = {T ∈ I_max(Γ^t) : x^t_Δ ∈ T} denotes the set of maximal independent sets containing x^t_Δ. Assume there exists some T_0 ∈ I_max(Γ^t) such that κ_{T_0} > η. Clearly, T_0 ∉ T_Δ, or equivalently, x^t_Δ ∉ T_0. We construct κ'_T, T ∈ I_max(Γ^t), as

    κ'_T = η + (κ_{T_0} − η)/|I_max(Γ^t)|,   if T = T_0,
    κ'_T = κ_T + (κ_{T_0} − η)/|I_max(Γ^t)|,  otherwise.   (30)

To verify that κ'_T, T ∈ I_max(Γ^t), satisfies the constraint in (12b), we have

    Σ_{T ∈ I_max(Γ^t)} κ'_T = κ'_{T_0} + Σ_{T ∈ I_max(Γ^t): T ≠ T_0} κ'_T
      = (η + (κ_{T_0} − η)/|I_max(Γ^t)|) + Σ_{T ∈ I_max(Γ^t): T ≠ T_0} (κ_T + (κ_{T_0} − η)/|I_max(Γ^t)|)
      = η + |I_max(Γ^t)| · (κ_{T_0} − η)/|I_max(Γ^t)| + Σ_{T ∈ I_max(Γ^t): T ≠ T_0} κ_T
      = κ_{T_0} + Σ_{T ∈ I_max(Γ^t): T ≠ T_0} κ_T = 1,

where the second equality follows from (30). It can also be verified that for any T ∈ I_max(Γ^t), κ'_T ∈ [0, 1], thus satisfying (12c). So (κ'_T : T ∈ I_max(Γ^t)) is a valid assignment satisfying the constraints in the optimization problem (12). Note that we have κ'_T > κ_T for any T ∈ I_max(Γ^t) \ {T_0}, and κ'_{T_0} < κ_{T_0}. Consider any x^t ∈ X^t. If x^t ∉ T_0, then we have

    Σ_{T ∈ I_max(Γ^t): x^t ∈ T} κ'_T > Σ_{T ∈ I_max(Γ^t): x^t ∈ T} κ_T ≥ min_{x̃^t ∈ X^t} Σ_{T ∈ I_max(Γ^t): x̃^t ∈ T} κ_T = η.

If x^t ∈ T_0, then we have

    Σ_{T ∈ I_max(Γ^t): x^t ∈ T} κ'_T ≥ κ'_{T_0} = η + (κ_{T_0} − η)/|I_max(Γ^t)| > η.
Hence, we can conclude that with (κ'_T : T ∈ I_max(Γ^t)), the objective value in (12a) is strictly larger than η, which contradicts the fact that η is the solution to the optimization problem (12). Therefore, the assumption that there exists some T_0 ∈ I_max(Γ^t) such that κ_{T_0} > η must be false, and subsequently, for every T ∈ I_max(Γ^t), λ_T ≤ 1. In conclusion, we know that (λ_T : T ∈ I_max(Γ^t)) satisfies the constraint in (13c). For any x^t ∈ X^t, by (28), we have

    Σ_{T ∈ I_max(Γ^t): x^t ∈ T} λ_T ≥ (1/η) min_{v^t ∈ X^t} Σ_{T ∈ I_max(Γ^t): v^t ∈ T} κ_T = 1,

and thus we know that (λ_T : T ∈ I_max(Γ^t)) satisfies the constraints in (13b). Therefore, (λ_T : T ∈ I_max(Γ^t)) is a valid assignment satisfying the constraints in the optimization problem (13), with which the objective in (13a) becomes

    Σ_{T ∈ I_max(Γ^t)} λ_T = (1/η) Σ_{T ∈ I_max(Γ^t)} κ_T = 1/η,

where the second equality follows from (29). Since χ_f(Γ^t) is the solution to the optimization problem (13), we can conclude that χ_f(Γ^t) ≤ 1/η, which is equivalent to

    η ≤ 1/χ_f(Γ^t).   (31)

The opposite direction η ≥ 1/χ_f(Γ^t) can be proved in a similar manner as follows. As χ_f(Γ^t) is the solution to the optimization problem (13), there exists some 0 ≤ λ_T ≤ 1, T ∈ I_max(Γ^t), such that

    Σ_{T ∈ I_max(Γ^t)} λ_T = χ_f(Γ^t),   (32)

    Σ_{T ∈ I_max(Γ^t): x^t ∈ T} λ_T ≥ 1, ∀ x^t ∈ X^t.   (33)

Construct κ_T = λ_T/χ_f(Γ^t) for any T ∈ I_max(Γ^t). We know that (κ_T : T ∈ I_max(Γ^t)) satisfies the constraints in (12c) due to the simple fact that the fractional chromatic number of any graph is no less than 1. By (32), we know that (κ_T : T ∈ I_max(Γ^t)) satisfies the constraint in (12b) as

    Σ_{T ∈ I_max(Γ^t)} κ_T = (1/χ_f(Γ^t)) Σ_{T ∈ I_max(Γ^t)} λ_T = 1.
Therefore, (κ_T : T ∈ I_max(Γ^t)) is a valid assignment satisfying the constraints in the optimization problem (12), with which we have

    min_{x^t ∈ X^t} Σ_{T ∈ I_max(Γ^t): x^t ∈ T} κ_T = (1/χ_f(Γ^t)) min_{x^t ∈ X^t} Σ_{T ∈ I_max(Γ^t): x^t ∈ T} λ_T ≥ 1/χ_f(Γ^t),

where the inequality follows from (33). That is, the objective in (12a) is no smaller than 1/χ_f(Γ^t). Since η is the solution to the optimization problem (12), we can conclude that

    η ≥ 1/χ_f(Γ^t).   (34)

Combining (31) and (34) yields (14).

APPENDIX D
PROOF OF THEOREM

Proof:
The upper bound comes immediately from Theorem 1 and Lemma 4. It remains to show the lower bound. Consider any t, any P_X, and any valid P_{Y|X^t}. We have

    Σ_{y ∈ Y} max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t,Y}(x^t, y)
    (a)= Σ_{y ∈ Y} max_{K ⊆ X^t(y): |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t,Y}(x^t, y)
    ≥ Σ_{y ∈ Y} [Σ_{K ⊆ X^t(y): |K| = g(t)^−} Σ_{x^t ∈ K} P_{X^t,Y}(x^t, y)] / |{K ⊆ X^t(y) : |K| = g(t)^−}|
    (b)= Σ_{y ∈ Y} [C(|X^t(y)| − 1, g(t)^− − 1) Σ_{x^t ∈ X^t(y)} P_{X^t,Y}(x^t, y)] / C(|X^t(y)|, g(t)^−)
    = Σ_{y ∈ Y} (g(t)^− / |X^t(y)|) P_Y(y)
    (c)≥ (g(t)/α(Γ^t)) Σ_{y ∈ Y} P_Y(y) = g(t)/α(Γ^t),

where g(t)^− = min{g(t), |X^t(y)|}, and
• (a) follows from the fact that P_{X^t,Y}(x^t, y) = 0 for any x^t ∉ X^t(y) according to (1);
• (b) follows from the fact that each x^t ∈ X^t(y) appears in exactly C(|X^t(y)| − 1, g(t)^− − 1) subsets of X^t(y) of size g(t)^−;
• (c) follows from the fact that for any y ∈ Y, we always have |X^t(y)| ≤ α(Γ^t) as a direct consequence of (2), and thus
  1) if g(t) ≤ |X^t(y)|, then g(t)^−/|X^t(y)| = g(t)/|X^t(y)| ≥ g(t)/α(Γ^t);
  2) otherwise we have g(t) > |X^t(y)| and g(t)^−/|X^t(y)| = 1 ≥ g(t)/α(Γ^t), where the last inequality is due to the assumption that g(t) ≤ α(Γ^t).

Therefore, we have

    L_g = lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y) ∈ I(Γ^t), ∀ y ∈ Y} log max_{P_X} [Σ_{y ∈ Y} max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t,Y}(x^t, y)] / [max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t}(x^t)]
    ≥ lim_{t→∞} (1/t) log max_{P_X} [g(t)/α(Γ^t)] / [max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t}(x^t)]
    ≥ lim_{t→∞} (1/t) log max_{P_X} [g(t)/α(Γ^t)] / [g(t) · max_{x^t ∈ X^t} P_{X^t}(x^t)]
    (c)= lim_{t→∞} (1/t) log (|X^t|/α(Γ^t))
    (d)= log (|X|/α(Γ)),

where (c) follows from the fact that max_{x^t ∈ X^t} P_{X^t}(x^t) ≥ (Σ_{x^t ∈ X^t} P_{X^t}(x^t))/|X^t| = 1/|X^t|, which holds with equality if and only if X is uniformly distributed over X, and (d) follows from the facts that |X^t| = |X|^t and α(Γ^t) = α(Γ^{∨t}) = α(Γ)^t.

APPENDIX E
PROOF OF PROPOSITION

Proof:
We write out L^t_g(P_{Y|X^t}) defined in (17) as

    L^t_g(P_{Y|X^t}) = log max_{P_X} [Σ_{y ∈ Y} max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t,Y}(x^t, y)] / [max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t}(x^t)].

We consider the fraction in the above equality. The numerator can be bounded as

    Σ_{y ∈ Y} max_{x^t ∈ X^t} P_{X^t,Y}(x^t, y) ≤ Σ_{y ∈ Y} max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t,Y}(x^t, y) ≤ g(t) · Σ_{y ∈ Y} max_{x^t ∈ X^t} P_{X^t,Y}(x^t, y),

and the denominator can be bounded as

    max_{x^t ∈ X^t} P_{X^t}(x^t) ≤ max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ K} P_{X^t}(x^t) ≤ g(t) · max_{x^t ∈ X^t} P_{X^t}(x^t).

Therefore, we have

    L_g = lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y) ∈ I_max(Γ^t), ∀ y ∈ Y} L^t_g(P_{Y|X^t})
        ≥ lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y) ∈ I_max(Γ^t), ∀ y ∈ Y} log max_{P_X} [Σ_{y ∈ Y} max_{x^t ∈ X^t} P_{X^t,Y}(x^t, y)] / [g(t) · max_{x^t ∈ X^t} P_{X^t}(x^t)]
        = L − lim_{t→∞} (1/t) log g(t) = L,

and

    L_g = lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y) ∈ I_max(Γ^t), ∀ y ∈ Y} L^t_g(P_{Y|X^t})
        ≤ lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y) ∈ I_max(Γ^t), ∀ y ∈ Y} log max_{P_X} [g(t) · Σ_{y ∈ Y} max_{x^t ∈ X^t} P_{X^t,Y}(x^t, y)] / [max_{x^t ∈ X^t} P_{X^t}(x^t)]
        = L + lim_{t→∞} (1/t) log g(t) = L.

Combining the above results completes the proof.

APPENDIX F
PROOF OF THEOREM 3

Lemma 7 ([19]):
Consider any maximal independent set T ∈ I_max(Γ^t). We have T = T_1 × T_2 × ··· × T_t for some T_j ∈ I_max(Γ), ∀ j ∈ [t] (i.e., for every j, T_j is a maximal independent set in Γ).

Lemma 8:
For any x^t = (x_1, x_2, ..., x_t) ∈ V(Θ^t) = X^t, we have N(Θ^t, x^t) = N(Θ, x_1) × N(Θ, x_2) × ··· × N(Θ, x_t). Subsequently, we have P_{X^t}(N(Θ^t, x^t)) = Π_{j ∈ [t]} P_X(N(Θ, x_j)).

Proof:
Consider any v^t = (v_1, v_2, ..., v_t) ∈ N(Θ^t, x^t). According to the definition of Θ^t, we know that for every j ∈ [t], v_j = x_j or {v_j, x_j} ∈ E(Θ). Hence, for every j ∈ [t], v_j ∈ N(Θ, x_j), and thus v^t = (v_1, ..., v_t) ∈ N(Θ, x_1) × ··· × N(Θ, x_t). Therefore, we know that N(Θ^t, x^t) ⊆ N(Θ, x_1) × ··· × N(Θ, x_t).

Now we show the opposite direction. Consider any v^t = (v_1, v_2, ..., v_t) ∈ N(Θ, x_1) × ··· × N(Θ, x_t). We have v_j ∈ N(Θ, x_j) for every j ∈ [t]. That is, v_j = x_j or {v_j, x_j} ∈ E(Θ) for every j ∈ [t]. Then, by the definition of Θ^t, v^t = x^t or {v^t, x^t} ∈ E(Θ^t), and thus v^t ∈ N(Θ^t, x^t). Therefore, N(Θ, x_1) × ··· × N(Θ, x_t) ⊆ N(Θ^t, x^t). In conclusion, we have N(Θ^t, x^t) = N(Θ, x_1) × ··· × N(Θ, x_t).

It remains to show that P_{X^t}(N(Θ^t, x^t)) = Π_{j ∈ [t]} P_X(N(Θ, x_j)). Towards that end, for every j ∈ [t], denote N(Θ, x_j) as {z_{j,1}, z_{j,2}, ..., z_{j,|N(Θ,x_j)|}}. We have

    P_{X^t}(N(Θ^t, x^t))
    = P_{X^t}(N(Θ, x_1) × ··· × N(Θ, x_t))
    = P_{X^{t−1}}(N(Θ, x_1) × ··· × N(Θ, x_{t−1})) · P_X(z_{t,1})
      + P_{X^{t−1}}(N(Θ, x_1) × ··· × N(Θ, x_{t−1})) · P_X(z_{t,2}) + ···
      + P_{X^{t−1}}(N(Θ, x_1) × ··· × N(Θ, x_{t−1})) · P_X(z_{t,|N(Θ,x_t)|})
    = P_{X^{t−1}}(N(Θ, x_1) × ··· × N(Θ, x_{t−1})) · P_X(N(Θ, x_t))
    = P_{X^{t−2}}(N(Θ, x_1) × ··· × N(Θ, x_{t−2})) · P_X(N(Θ, x_{t−1})) · P_X(N(Θ, x_t))
    = ··· = Π_{j ∈ [t]} P_X(N(Θ, x_j)),

which completes the proof.

Lemma 9:
Consider any maximal independent set T = T_1 × T_2 × ··· × T_t ∈ I_max(Γ^t) and any x^t = (x_1, x_2, ..., x_t) ∈ X^t. We have

    T ∩ N(Θ^t, x^t) = Π_{j ∈ [t]} (T_j ∩ N(Θ, x_j)).

Proof:
Consider any v^t = (v_1, ..., v_t) ∈ T ∩ N(Θ^t, x^t). We know v^t ∈ T and v^t ∈ N(Θ^t, x^t), which, together with Lemmas 7 and 8, indicate that for every j ∈ [t], v_j ∈ T_j and v_j ∈ N(Θ, x_j). Hence, v_j ∈ T_j ∩ N(Θ, x_j) for every j ∈ [t], and thus v^t ∈ (T_1 ∩ N(Θ, x_1)) × ··· × (T_t ∩ N(Θ, x_t)). Therefore, we know that T ∩ N(Θ^t, x^t) ⊆ Π_{j ∈ [t]} (T_j ∩ N(Θ, x_j)).

Now we show the opposite direction. Consider any v^t = (v_1, ..., v_t) ∈ (T_1 ∩ N(Θ, x_1)) × ··· × (T_t ∩ N(Θ, x_t)). We know that for every j ∈ [t], v_j ∈ T_j and v_j ∈ N(Θ, x_j). Hence, by Lemmas 7 and 8, v^t ∈ T and v^t ∈ N(Θ^t, x^t), and thus v^t ∈ T ∩ N(Θ^t, x^t). Therefore, we know that Π_{j ∈ [t]} (T_j ∩ N(Θ, x_j)) ⊆ T ∩ N(Θ^t, x^t).

In conclusion, we have T ∩ N(Θ^t, x^t) = Π_{j ∈ [t]} (T_j ∩ N(Θ, x_j)).

Lemma 10:
Consider any maximal independent set T = T_1 × T_2 × ··· × T_t ∈ I_max(Γ^t). For every j ∈ [t], let P_j = {E_{j,1}, E_{j,2}, ..., E_{j,m_j}} be an arbitrary b_j-fold covering of the hypergraph H(T_j), where for every i ∈ [m_j], set E_{j,i} denotes the intersection of T_j and the closed neighborhood of some vertex x_{j,i}. That is, E_{j,i} = T_j ∩ N(Θ, x_{j,i}). Then we know that

    P = Π_{j ∈ [t]} P_j = {E_{1,1}, ..., E_{1,m_1}} × ··· × {E_{t,1}, ..., E_{t,m_t}}

is a valid (Π_{j ∈ [t]} b_j)-fold covering of the hypergraph H^t(T) with cardinality |P| = Π_{j ∈ [t]} m_j.

Proof:
The proof can be decomposed into two parts: (I) we show that every element of P is a hyperedge of the hypergraph H^t(T); (II) we show that every vertex x^t in V(H^t(T)) = T appears in at least Π_{j ∈ [t]} b_j sets in P.

To show part (I), without loss of generality, consider the set E = E_{1,1} × E_{2,1} × ··· × E_{t,1}, which is an element of P. Recall that for every j ∈ [t], E_{j,1} = T_j ∩ N(Θ, x_{j,1}) for some x_{j,1} ∈ X. By Lemma 9 we have

    E = (T_1 ∩ N(Θ, x_{1,1})) × ··· × (T_t ∩ N(Θ, x_{t,1})) = T ∩ N(Θ^t, (x_{1,1}, ..., x_{t,1})).

Hence, one can see that set E is indeed a hyperedge of H^t(T) (cf. Definition 1).

To show part (II), consider any x^t = (x_1, ..., x_t) ∈ T. Since for every j ∈ [t], P_j is a b_j-fold covering of H(T_j), we know that vertex x_j ∈ T_j appears in at least b_j sets within P_j. Therefore, x^t appears in at least Π_{j ∈ [t]} b_j sets in P = Π_{j ∈ [t]} P_j.

We are ready to show the lower bound in Theorem 3.

Proof of Theorem 3:
Consider any sequence length t, valid mapping P_{Y|X^t}, and source distribution P_X. Consider any codeword y ∈ Y and any maximal independent set T = T_1 × ··· × T_t ∈ I_max(Γ^t) such that X^t(y) ⊆ T. For every j ∈ [t], let P_j = {E_{j,1}, ..., E_{j,m_j}} be the k_f(H(T_j))-achieving b_j-fold covering of the hypergraph H(T_j). Note that the existence of such P_j for some finite integer b_j is guaranteed by the fact that H(T_j) has no exposed vertices (i.e., every vertex of H(T_j) is in at least one hyperedge of H(T_j)) [19, Corollary 1.3.2].

Construct P = Π_{j ∈ [t]} P_j. By Lemma 10, we know that P is a (Π_{j ∈ [t]} b_j)-fold covering of H^t(T) and that |P| = Π_{j ∈ [t]} m_j. Note that we have

    (Π_{j ∈ [t]} m_j) / (Π_{j ∈ [t]} b_j) = Π_{j ∈ [t]} k_f(H(T_j)).   (35)

Recall that for any integer b ≥ 1, any b-fold covering of a hypergraph is a multiset. Recall also that every element of P is a hyperedge of H^t(T), and that by Definition 1 every hyperedge E of H^t(T) equals T ∩ N(Θ^t, x^t) for some x^t ∈ X^t.
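The neighborhood factorization from Lemma 8, on which the product covering above rests, can be checked mechanically on a toy graph. In the sketch below, Θ is taken to be a 3-vertex path (an illustrative assumption), and the closed neighborhood in Θ² is computed directly from the coordinatewise rule used in the proofs above.

```python
from itertools import product

# Toy graph Theta: path 0-1-2.
V = [0, 1, 2]
E = {frozenset((0, 1)), frozenset((1, 2))}

def N(x):
    # Closed neighborhood of x in Theta: x itself plus its neighbors.
    return {v for v in V if v == x or frozenset((v, x)) in E}

def N2(x2):
    # Closed neighborhood in Theta^2, per the rule used in the proof:
    # v^2 is in N(Theta^2, x^2) iff in every coordinate v_j = x_j
    # or {v_j, x_j} is an edge of Theta.
    return {v2 for v2 in product(V, V)
            if all(v == x or frozenset((v, x)) in E for v, x in zip(v2, x2))}

# Lemma 8 (for t = 2): N(Theta^2, x^2) = N(Theta, x_1) x N(Theta, x_2).
for x2 in product(V, V):
    assert N2(x2) == set(product(N(x2[0]), N(x2[1])))
print("factorization verified")
```

The same loop generalizes to any t by replacing `product(V, V)` with a t-fold product.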
Then we have

    max_{x^t ∈ X^t} Σ_{x̃^t ∈ N(Θ^t,x^t)} P_{X^t,Y}(x̃^t, y)
    (a)= max_{x^t ∈ X^t} Σ_{x̃^t ∈ T ∩ N(Θ^t,x^t)} P_{X^t,Y}(x̃^t, y)
    = max_{E ∈ E(H^t(T))} Σ_{x̃^t ∈ E} P_{X^t,Y}(x̃^t, y)
    ≥ (1/|P|) Σ_{E ∈ P} Σ_{x̃^t ∈ E} P_{X^t,Y}(x̃^t, y)
    (b)≥ [1/(Π_{j ∈ [t]} m_j)] (Π_{j ∈ [t]} b_j) Σ_{x^t ∈ T} P_{X^t,Y}(x^t, y)
    (c)= [1/(Π_{j ∈ [t]} k_f(H(T_j)))] P_Y(y)
    ≥ P_Y(y) / (max_{T ∈ I_max(Γ)} k_f(H(T)))^t,   (36)

where (a) follows from the fact that P_{X^t,Y}(x^t, y) = 0 for any x^t ∈ X^t \ T ⊆ X^t \ X^t(y) according to (1), (b) follows from the facts that |P| = Π_{j ∈ [t]} m_j and that every x^t ∈ V(H^t(T)) = T appears in at least Π_{j ∈ [t]} b_j hyperedges within P, and (c) follows from (35).

Therefore, we have

    L_Θ = lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y) ∈ I(Γ^t), ∀ y ∈ Y} log max_{P_X} [Σ_{y ∈ Y} max_{x^t ∈ X^t} Σ_{x̃^t ∈ N(Θ^t,x^t)} P_{X^t,Y}(x̃^t, y)] / [max_{x^t ∈ X^t} Σ_{x̃^t ∈ N(Θ^t,x^t)} P_{X^t}(x̃^t)]
    (d)≥ lim_{t→∞} (1/t) log max_{P_X} [Σ_{y ∈ Y} P_Y(y) / (max_{T ∈ I_max(Γ)} k_f(H(T)))^t] / [max_{x^t ∈ X^t} P_{X^t}(N(Θ^t, x^t))]
    (e)= lim_{t→∞} (1/t) log max_{P_X} [Σ_{y ∈ Y} P_Y(y) / (max_{T ∈ I_max(Γ)} k_f(H(T)))^t] / [(max_{x ∈ X} P_X(N(Θ, x)))^t]
    = log [1 / max_{T ∈ I_max(Γ)} k_f(H(T))] + log max_{P_X} [1 / max_{x ∈ X} P_X(N(Θ, x))]
    = log [1 / max_{T ∈ I_max(Γ)} k_f(H(T))] + log [1 / min_{P_X} max_{x ∈ X} P_X(N(Θ, x))],

where (d) follows from (36) and (e) is due to Lemma 8. It remains to show that

    p_f(Θ) = 1 / [min_{P_X} max_{x ∈ X} P_X(N(Θ, x))],

or equivalently, p_f(Θ) = 1/τ, where τ is the solution to the following optimization problem:

    minimize    max_{x ∈ X} Σ_{x̃ ∈ N(Θ,x)} P_X(x̃),   (37a)
    subject to  Σ_{x ∈ X} P_X(x) = 1,   (37b)
                P_X(x) ∈ [0, 1], ∀ x ∈ X.   (37c)

Recall that p_f(Θ) denotes the fractional closed neighborhood packing number of Θ, which is the solution to the following linear program [19, Section 7.4]:

    maximize    Σ_{x ∈ X} λ(x),   (38a)
    subject to  Σ_{x̃ ∈ N(Θ,x)} λ(x̃) ≤ 1, ∀ x ∈ X,   (38b)
                λ(x) ∈ [0, 1], ∀ x ∈ X.   (38c)

Using similar techniques to the proof of Theorem 1, we can show that the solutions to (37) and (38) are reciprocals of each other. That is, p_f(Θ) = 1/τ, which completes the proof of Theorem 3.

APPENDIX G
PROOF OF THEOREM 4

Proof:
Throughout the proof, we use the shorthand notation

    k_f = max_{T ∈ I_max(Γ)} k_f(H(T)).   (39)

The upper bound in Theorem 4 immediately follows from Theorem 1 and Lemma 6. It remains to show the lower bound. Consider any sequence length t, any source distribution P_X, and any valid mapping P_{Y|X^t}. Consider any codeword y ∈ Y. There exists some T ∈ I_max(Γ^t) such that X^t(y) ⊆ T. By Lemma 7, we have T = T_1 × T_2 × ··· × T_t where T_j ∈ I_max(Γ), ∀ j ∈ [t]. For every j ∈ [t], let P_j = {E_{j,1}, ..., E_{j,m_j}} be the k_f(H(T_j))-achieving b_j-fold covering of the hypergraph H(T_j). Construct P = Π_{j ∈ [t]} P_j. Then |P| = Π_{j ∈ [t]} m_j. By Lemma 10, we know that P is a (Π_{j ∈ [t]} b_j)-fold covering of H^t(T). Note that P is a multiset (cf. Appendix A). Set m = Π_{j ∈ [t]} m_j and b = Π_{j ∈ [t]} b_j. Then P is a b-fold covering of H^t(T) of cardinality m, and we have

    m/b = (Π_{j ∈ [t]} m_j) / (Π_{j ∈ [t]} b_j) = Π_{j ∈ [t]} k_f(H(T_j)).   (40)

We assume that b ≥ g(t) without loss of generality. For every hyperedge of H^t(T) in the covering P, denoted by E, there is a corresponding x^t ∈ X^t such that E = T ∩ N(Θ^t, x^t). Let X^t(P) denote the collection of the corresponding x^t of those hyperedges in P. More precisely, define X^t(P) as the multiset of x^t whose corresponding hyperedge E = T ∩ N(Θ^t, x^t) appears in the covering P, where the multiplicity of any x^t ∈ X^t(P) is the same as that of its corresponding hyperedge in P. Thus |X^t(P)| = |P| = m.

Let K = {K ⊆ X^t(P) : |K| = g(t)} denote the collection of subsets of X^t(P) of cardinality g(t). Hence |K| = C(m, g(t)). Note that any K ∈ K is also a multiset. If b < g(t), we can simply construct a cb-fold covering of H^t(T), denoted P_c, from P by repeating it c times, for some sufficiently large integer c such that cb ≥ g(t).
Then the remaining proof will be based on P_c. For any K ∈ K and any v^t ∈ T, let m(K, v^t) denote the number of elements x^t in K whose neighborhood in Θ^t contains v^t. That is,

    m(K, v^t) = |{x^t ∈ K : v^t ∈ N(Θ^t, x^t)}|.

Then for any K ∈ K and v^t ∈ T, we have 0 ≤ m(K, v^t) ≤ |K| = g(t). Any v^t ∈ T appears in at least b hyperedges in P. Assume v^t appears in b(v^t) hyperedges in P. Thus

    b(v^t) ≥ b ≥ g(t).   (41)

Also, define the shorthand notation N(Θ^t, K) = ∪_{x^t ∈ K} N(Θ^t, x^t). We have

    max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ N(Θ^t,K)} P_{X^t,Y}(x^t, y)
    (a)= max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ ∪_{x̃^t ∈ K}(N(Θ^t,x̃^t) ∩ T)} P_{X^t,Y}(x^t, y)
    ≥ (1/|K|) Σ_{K ∈ K} Σ_{x^t ∈ ∪_{x̃^t ∈ K}(N(Θ^t,x̃^t) ∩ T)} P_{X^t,Y}(x^t, y)
    = (1/|K|) Σ_{K ∈ K} Σ_{v^t ∈ T: m(K,v^t) ≥ 1} P_{X^t,Y}(v^t, y)
    = (1/|K|) Σ_{v^t ∈ T} P_{X^t,Y}(v^t, y) |{K ∈ K : m(K, v^t) ≥ 1}|
    = (1/|K|) Σ_{v^t ∈ T} P_{X^t,Y}(v^t, y) Σ_{ℓ ∈ [g(t)]} |{K ∈ K : m(K, v^t) = ℓ}|
    = (1/|K|) Σ_{v^t ∈ T} P_{X^t,Y}(v^t, y) Σ_{ℓ ∈ [g(t)]} C(b(v^t), ℓ) C(m − b(v^t), g(t) − ℓ)
    (b)= (1/|K|) Σ_{v^t ∈ T} P_{X^t,Y}(v^t, y) [C(m, g(t)) − C(m − b(v^t), g(t))]
    (c)≥ (1 − ((m − b)/m)^{g(t)}) Σ_{v^t ∈ T} P_{X^t,Y}(v^t, y)
    = (1 − ((m − b)/m)^{g(t)}) P_Y(y)
    (d)≥ (1 − (1 − (1/k_f)^t)^{g(t)}) P_Y(y),   (42)

where
• (a) follows from the fact that P_{X^t,Y}(x^t, y) = 0 for any x^t ∈ X^t \ T ⊆ X^t \ X^t(y) according to (1);
• (b) can be shown by considering a specific way of choosing g(t) elements from a set, denoted by M, of cardinality |M| = m, described as follows. We arbitrarily pick b(v^t) elements from the set M, the collection of which is denoted by B. Recall that |B| = b(v^t) ≥ g(t) as specified in (41). We observe that to choose g(t) elements from the set M, the number of chosen elements from the subset B must be some integer ℓ from 0 to g(t). For each possible ℓ, the number of ways in which the g(t) elements can be chosen from M is C(b(v^t), ℓ) C(m − b(v^t), g(t) − ℓ). Therefore, the total number of ways to choose g(t) elements from M is Σ_{ℓ ∈ [0:g(t)]} C(b(v^t), ℓ) C(m − b(v^t), g(t) − ℓ), which must equal C(m, g(t)). Therefore, we have

    Σ_{ℓ ∈ [g(t)]} C(b(v^t), ℓ) C(m − b(v^t), g(t) − ℓ) = C(m, g(t)) − C(b(v^t), 0) C(m − b(v^t), g(t)) = C(m, g(t)) − C(m − b(v^t), g(t));

• (c) follows from the following derivation:

    (1/|K|) [C(m, g(t)) − C(m − b(v^t), g(t))]
    = [1/C(m, g(t))] [C(m, g(t)) − C(m − b(v^t), g(t))]
    = 1 − Π_{i ∈ [0:g(t)−1]} (m − b(v^t) − i)/(m − i)
    ≥ 1 − Π_{i ∈ [0:g(t)−1]} (m − b(v^t))/m
    = 1 − ((m − b(v^t))/m)^{g(t)}
    ≥ 1 − ((m − b)/m)^{g(t)},

where the last inequality follows from (41);
• (d) follows from (39) and (40).

Given (42), it remains to further lower-bound the term lim_{t→∞} (1/t) log(1 − (1 − (1/k_f)^t)^{g(t)}). Define σ = lim_{t→∞} (1/t) log g(t). Due to Proposition 2, it suffices to consider only the case where σ > 0, and subsequently 2^σ > 1. Consider any positive real number m with 1 < m < 2^σ. We have

    0 < σ − log m = lim_{t→∞} (1/t) log (g(t)/m^t),

which indicates that

    lim_{t→∞} g(t) > lim_{t→∞} m^t.   (43)

Towards bounding lim_{t→∞} (1/t) log(1 − (1 − (1/k_f)^t)^{g(t)}), we first show that

    m < k_f,   (44)

which is equivalent to showing that σ ≤ log k_f.
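As an aside, the counting identity behind step (b) of (42), Σ_{ℓ=1}^{g} C(b,ℓ)C(m−b,g−ℓ) = C(m,g) − C(m−b,g), is a direct consequence of the Vandermonde convolution and can be verified by brute force; the helper `lhs` and the parameter ranges below are illustrative choices only.

```python
from math import comb

def lhs(m, b, g):
    # Left-hand side of step (b): count the ways to choose g elements
    # from an m-set so that at least one comes from a fixed b-subset.
    # math.comb(n, k) returns 0 when k > n, matching the convention
    # C(n, k) = 0 used in the derivation.
    return sum(comb(b, l) * comb(m - b, g - l) for l in range(1, g + 1))

# Exhaustive check over small parameter ranges.
for m in range(1, 12):
    for b in range(0, m + 1):
        for g in range(0, m + 1):
            assert lhs(m, b, g) == comb(m, g) - comb(m - b, g)
print("identity verified")
```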
Towards that end, we have

    σ = lim_{t→∞} (1/t) log g(t)
    (e)≤ lim_{t→∞} (1/t) log max_{T ∈ I_max(Γ^t)} k(H^t(T))
    (f)≤ lim_{t→∞} (1/t) log [(1 + t ln e_1) max_{T ∈ I_max(Γ^t)} k_f(H^t(T))]
    (g)= lim_{t→∞} (1/t) log [(1 + t ln e_1)(k_f)^t]
    (h)= lim_{t→∞} [(1 + t ln e_1)(k_f)^t ln k_f + (k_f)^t ln e_1] / [ln 2 · (1 + t ln e_1)(k_f)^t]
    (i)= ln k_f / ln 2 = log k_f,   (45)

where
• (e) follows from the assumption that g(t) ≤ max_{T ∈ I_max(Γ^t)} k(H^t(T));
• (f) follows from [19, Lemma 1.6.4] with e_1 = max_{T ∈ I_max(Γ), x ∈ X} |T ∩ N(Θ, x)|;
• (g) follows from the fact that for any T = T_1 × ··· × T_t ∈ I_max(Γ^t), we have k_f(H^t(T)) = Π_{j ∈ [t]} k_f(H(T_j)), which can be shown using Lemma 9 and [19, Theorem 1.6.1];
• (h) follows from L'Hôpital's rule;
• (i) follows from the fact that lim_{t→∞} (k_f)^t ln e_1 / [(1 + t ln e_1)(k_f)^t] = 0.

Next, we have

    lim_{t→∞} (1/t) log(1 − (1 − (1/k_f)^t)^{g(t)})
    (j)> lim_{t→∞} (1/t) log(1 − (1 − (1/k_f)^t)^{m^t})
    = lim_{t→∞} (1/t) log [(1 − (1 − (1/k_f)^t)) Σ_{j ∈ [0:m^t−1]} (1 − (1/k_f)^t)^j]
    = log(1/k_f) + lim_{t→∞} (1/t) log Σ_{j ∈ [0:m^t−1]} (1 − (1/k_f)^t)^j
    ≥ log(1/k_f) + lim_{t→∞} (1/t) log [m^t · (1 − (1/k_f)^t)^{m^t−1}]
    = log(1/k_f) + log m + lim_{t→∞} (m^t − 1) log(1 − (1/k_f)^t)/t
    (k)= log(1/k_f) + log m,   (46)

where (j) follows from (43), and (k) follows from the fact that lim_{t→∞} (m^t − 1) log(1 − (1/k_f)^t)/t = 0, which in turn follows from (44). Finally, as (46) holds for any m with 1 < m < 2^σ, we have

    lim_{t→∞} (1/t) log(1 − (1 − (1/k_f)^t)^{g(t)}) ≥ log(1/k_f) + σ.   (47)

Combining (42) and (47), we have

    L_{Θ,g} = lim_{t→∞} (1/t) inf_{P_{Y|X^t}: X^t(y) ∈ I(Γ^t), ∀ y ∈ Y} log max_{P_X} [Σ_{y ∈ Y} max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ N(Θ^t,K)} P_{X^t,Y}(x^t, y)] / [max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ N(Θ^t,K)} P_{X^t}(x^t)]
    ≥ lim_{t→∞} (1/t) log max_{P_X} [Σ_{y ∈ Y} (1 − (1 − (1/k_f)^t)^{g(t)}) P_Y(y)] / [max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ N(Θ^t,K)} P_{X^t}(x^t)]
    = lim_{t→∞} (1/t) log max_{P_X} [1 − (1 − (1/k_f)^t)^{g(t)}] / [max_{K ⊆ X^t: |K| ≤ g(t)} Σ_{x^t ∈ N(Θ^t,K)} P_{X^t}(x^t)]
    ≥ lim_{t→∞} (1/t) log max_{P_X} [1 − (1 − (1/k_f)^t)^{g(t)}] / [g(t) · (max_{x ∈ X} P_X(N(Θ, x)))^t]
    = log [1 / min_{P_X} max_{x ∈ X} P_X(N(Θ, x))] + lim_{t→∞} (1/t) log(1/g(t)) + lim_{t→∞} (1/t) log(1 − (1 − (1/k_f)^t)^{g(t)})
    ≥ log [1 / min_{P_X} max_{x ∈ X} P_X(N(Θ, x))] − σ + log(1/k_f) + σ
    = log [1 / min_{P_X} max_{x ∈ X} P_X(N(Θ, x))] + log(1/k_f)
    = log (p_f(Θ)/k_f),   (48)

where the last equality follows from the fact that p_f(Θ) = 1/[min_{P_X} max_{x ∈ X} P_X(N(Θ, x))], which has been proved towards the end of Appendix F.

REFERENCES

[1] C. E. Shannon, “A mathematical theory of communication,” The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.
[2] J. Körner, “Coding of an information source having ambiguous alphabet and the entropy of graphs,” in , 1973, pp. 411–425.
[3] L. Wang and O. Shayevitz, “Graph information ratio,”
SIAM Journal on Discrete Mathematics, vol. 31, no. 4, pp. 2703–2734, 2017.
[4] G. Smith, “On the foundations of quantitative information flow,” in International Conference on Foundations of Software Science and Computational Structures. Springer, 2009, pp. 288–302.
[5] C. Braun, K. Chatzikokolakis, and C. Palamidessi, “Quantitative notions of leakage for one-try attacks,” 2009.
[6] I. Issa, S. Kamath, and A. B. Wagner, “An operational measure of information leakage,” in Proc. Annu. Conf. Inf. Sci. Syst. (CISS), 2016, pp. 234–239.
[7] I. Issa, A. B. Wagner, and S. Kamath, “An operational approach to information leakage,” IEEE Trans. Inf. Theory, 2019.
[8] J. Liao, L. Sankar, F. P. Calmon, and V. Y. Tan, “Hypothesis testing under maximal leakage privacy constraints,” in Proc. IEEE Int. Symp. on Information Theory (ISIT), 2017, pp. 779–783.
[9] M. Karmoose, L. Song, M. Cardone, and C. Fragouli, “Privacy in index coding: k-limited-access schemes,” IEEE Trans. Inf. Theory, vol. 66, no. 5, pp. 2625–2641, 2019.
[10] A. R. Esposito, M. Gastpar, and I. Issa, “Learning and adaptive data analysis via maximal leakage,” in Proc. IEEE Information Theory Workshop (ITW), 2019, pp. 1–5.
[11] Y. Liu, N. Ding, P. Sadeghi, and T. Rakotoarivelo, “Privacy-utility tradeoff in a guessing framework inspired by index coding,” in Proc. IEEE Int. Symp. on Information Theory (ISIT), 2020, pp. 926–931.
[12] R. Zhou, T. Guo, and C. Tian, “Weakly private information retrieval under the maximal leakage metric,” in Proc. IEEE Int. Symp. on Information Theory (ISIT), 2020, pp. 1089–1094.
[13] B. Wu, A. B. Wagner, and G. E. Suh, “Optimal mechanisms under maximal leakage,” in Proc. IEEE Conf. on Comm. and Netw. Secur. (CNS), 2020, pp. 1–6.
[14] M. S. Alvim, K. Chatzikokolakis, C. Palamidessi, and G. Smith, “Measuring information leakage using generalized gain functions,” in , 2012, pp. 265–279.
[15] B. Espinoza and G. Smith, “Min-entropy as a resource,” Information and Computation, vol. 226, pp. 57–75, 2013.
[16] M. S. Alvim, K. Chatzikokolakis, A. McIver, C. Morgan, C. Palamidessi, and G. Smith, “Additive and multiplicative notions of leakage, and their capacities,” in , 2014, pp. 308–322.
[17] G. Smith, “Recent developments in quantitative information flow (invited tutorial),” in , 2015, pp. 23–31.
[18] Y. Y. Shkel and H. V. Poor, “A compression perspective on secrecy measures,” in Proc. IEEE Int. Symp. on Information Theory (ISIT), 2020, pp. 995–1000.
[19] E. R. Scheinerman and D. H. Ullman, Fractional Graph Theory: A Rational Approach to the Theory of Graphs. Courier Corporation, 2011.
[20] J. Liao, O. Kosut, L. Sankar, and F. du Pin Calmon, “Tunable measures for information leakage and applications to privacy-utility tradeoffs,” IEEE Trans. Inf. Theory, vol. 65, no. 12, pp. 8043–8066, 2019.
[21] F. Arbabjolfaei and Y.-H. Kim, “Fundamentals of index coding,” Foundations and Trends® in Communications and Information Theory, vol. 14, no. 3-4, pp. 163–346, 2018.
[22] R. Hammack, W. Imrich, and S. Klavžar, Handbook of Product Graphs. CRC Press, 2011.
[23] W. D. Blizard et al., “Multiset theory,” Notre Dame Journal of Formal Logic, vol. 30, no. 1, pp. 36–66, 1988.
[24] M. Fekete, “Über die Verteilung der Wurzeln bei gewissen algebraischen Gleichungen mit ganzzahligen Koeffizienten,” Mathematische Zeitschrift, vol. 17, pp. 228–249, 1923.