Common Randomness Generation over Slow Fading Channels
Rami Ezzine, Moritz Wiese and Holger Boche
Chair of Theoretical Information Technology
Technical University of Munich
D-80333 Munich, Germany
Email: {rami.ezzine, wiese, boche}@tum.de
Christian Deppe
Institute for Communications Engineering
Technical University of Munich
D-80333 Munich, Germany
Email: [email protected]
Abstract—This paper analyzes the problem of common randomness (CR) generation from correlated discrete sources aided by unidirectional communication over Single-Input Single-Output (SISO) slow fading channels with additive white Gaussian noise (AWGN) and arbitrary state distribution. Slow fading channels are practically relevant for wireless communications. We completely solve the SISO slow fading case by establishing its outage CR capacity, using our characterization of the channel's outage capacity. The generated CR can be exploited to improve the performance gain in the identification scheme, which is known to be more efficient than the classical transmission scheme in many new applications demanding ultra-reliable low-latency communication.
Index Terms—Common randomness, slow fading, outage capacity.
I. INTRODUCTION
Common randomness (CR) of two terminals refers to a random variable observed by both, with low error probability. In many models, one terminal corresponds to the sender and the other to the receiver. The availability of CR makes it possible to implement correlated random protocols, leading to potentially faster and more efficient algorithms [1]. CR generation plays a major role in sequential secret key generation [2]; in the context of secret key generation, CR generation is usually referred to as information reconciliation. CR is also highly relevant in the identification scheme, an approach in communications developed by Ahlswede and Dueck [3] in 1989. The identification scheme is better suited than the classical transmission scheme for many new applications with high requirements on reliability and latency. These applications include several machine-to-machine and human-to-machine systems [4], the tactile internet [5], digital watermarking [6]–[8], Industry 4.0 [9], molecular communications [10], [11], etc. Interestingly, Ahlswede and Dueck established that the resource CR can increase the identification capacity of channels. Thus, by taking advantage of the resource CR, an enormous performance gain can be achieved in the identification task.

The problem of CR generation was initially introduced in [12], where, unlike in the fundamental papers [13], [14], no secrecy requirements are imposed. In particular, the CR capacity of a model involving two correlated sources with one-way communication over a noiseless channel with limited capacity was established.
The CR capacity is defined there to be the maximum rate of CR generated by the two terminals using the resources available in the model [12]. Recently, the results on CR capacity have been extended in [15] to SISO and point-to-point Multiple-Input Multiple-Output (MIMO) Gaussian channels, which are practically relevant in many communication situations, including satellite and deep-space communication links [16] as well as wired and wireless communications. The results on CR capacity over Gaussian channels have been used to establish a lower bound on the corresponding correlation-assisted secure identification capacity in the log-log scale [15]. This lower bound can already exceed the secure identification capacity over Gaussian channels with randomized encoding elaborated in [17].

However, to the best of our knowledge, there are no results on the CR generation problem over fading channels. The generated CR can be exploited in the problem of correlation-assisted identification over fading channels, which is, as far as we know, an open problem. The phenomenon of fading is one of the fundamental aspects of wireless communication. Fading refers to the variation of the attenuation of a signal during wireless propagation with variables such as time, rainfall and radio frequency. A common model for wireless communication is the fading channel model with additive white Gaussian noise (AWGN) [18]–[22]. In our work, the focus is on SISO slow fading channels with AWGN and with arbitrary state distribution. In the slow fading scenario, the channel state is random but remains constant over the time-scale of transmission. The event of major interest here is the outage. It arises when the channel state is so poor that no coding scheme is able to establish reliable communication at a certain target rate.
We consider, as a capacity measure of the slow fading channel, the $\eta$-outage capacity, defined to be the largest rate at which one can reliably communicate over the channel with probability greater than or equal to $1 - \eta$ [18], [19]. The main contribution of this paper consists in first establishing the $\eta$-outage capacity of SISO slow fading channels with AWGN and with arbitrary state distribution. Then, using our characterization of the $\eta$-outage capacity of the slow fading channel, we derive a single-letter characterization of its corresponding $\eta$-outage CR capacity.

Paper outline:
The paper is organized as follows. In Section II, we present our system model, provide the key definitions and present the main results. The $\eta$-outage capacity is established in Section III. A rigorous proof of the $\eta$-outage CR capacity is provided in Section IV. Section V contains concluding remarks.

Notation: $\mathbb{C}$ denotes the set of complex numbers and $\mathbb{R}$ the set of real numbers; $H(\cdot)$ and $h(\cdot)$ denote the entropy of discrete and the differential entropy of continuous random variables, respectively; $I(\cdot\,;\cdot)$ denotes the mutual information between two random variables; $|\mathcal{K}|$ stands for the cardinality of the set $\mathcal{K}$ and $T_U^n$ denotes the set of typical sequences of length $n$ and of type $P_U$. Throughout the paper, $\log$ and $\exp$ are to the base 2.

II. SYSTEM MODEL, DEFINITIONS AND MAIN RESULTS
A. System Model
Let a discrete memoryless multiple source $P_{XY}$ with two components, with generic variables $X$ and $Y$ on alphabets $\mathcal{X}$ and $\mathcal{Y}$, respectively, be given. The outputs of $X$ are observed by Terminal $A$ and those of $Y$ by Terminal $B$. Both output blocks have length $n$. Terminal $A$ can send information to Terminal $B$ over the following slow fading channel $W_G$:
$$z_i = G t_i + \xi_i, \quad i = 1, \ldots, n,$$
where $t^n = (t_1, \ldots, t_n) \in \mathbb{C}^n$ and $z^n = (z_1, \ldots, z_n) \in \mathbb{C}^n$ are the channel input and output blocks, respectively. $G$ models the complex gain with support $\mathrm{supp}(G)$, where we assume that both Terminals $A$ and $B$ know the distribution of the gain $G$. $\xi^n = (\xi_1, \ldots, \xi_n) \in \mathbb{C}^n$ models the noise sequence. We assume that the $\xi_i$ are i.i.d. with $\xi_i \sim \mathcal{N}_{\mathbb{C}}(0, \sigma^2)$, $i = 1, \ldots, n$. We further assume that $G$ and $\xi^n$ are mutually independent and that $(G, \xi^n)$ is independent of $(X^n, Y^n)$. There are no other resources available to the two terminals, as shown in Fig. 1.

Fig. 1: Two-correlated-source model with one-way communication over a SISO slow fading channel: Terminal $A$ observes $X^n$ and computes $K = \Phi(X^n)$; Terminal $B$ observes $Y^n$ and the channel output $Z^n$ and computes $L = \Psi(Y^n, Z^n)$.

A CR-generation protocol of block-length $n$ consists of:
1) A function $\Phi$ that maps $X^n$ into a random variable $K$ with alphabet $\mathcal{K}$, generated by Terminal $A$.
2) A function $f$ that maps $X^n$ into some message $\ell = f(X^n)$.
3) A channel code $\Gamma$ of length $n$ for the channel $W_G$ as defined in Definition 3, where each codeword $t_\ell = (t_{\ell,1}, \ldots, t_{\ell,n})$ satisfies the following power constraint:
$$\frac{1}{n} \sum_{i=1}^{n} |t_{\ell,i}|^2 \le P. \qquad (1)$$
The random channel input sequence is denoted by $T^n$.
4) A function $\Lambda$ that maps $Y^n$ and the decoded message into a random variable $L$ with alphabet $\mathcal{K}$, generated by Terminal $B$.

Such a protocol induces a pair of random variables $(K, L)$ that is called permissible, where it holds for some function $\Psi$, for $D$ being the channel decoder and for $Z^n$ being the random channel output sequence, that
$$K = \Phi(X^n), \quad L = \Psi(Y^n, Z^n) = \Lambda(Y^n, D(Z^n)). \qquad (2)$$

B. Rates and Capacities
We first define an achievable $\eta$-outage CR rate and the $\eta$-outage CR capacity. This extends the definitions of an achievable CR rate and of the CR capacity introduced in [12].

Definition 1.
Fix a non-negative constant $\eta < 1$. A number $H$ is called an achievable $\eta$-outage common randomness rate if there exists a non-negative constant $c$ such that for every $\alpha > 0$ and $\delta > 0$ and for sufficiently large $n$ there exists a permissible pair of random variables $(K, L)$ such that
$$\mathbb{P}\big[\,\mathbb{P}[K \ne L \mid G] \le \alpha\,\big] \ge 1 - \eta, \qquad (3)$$
$$|\mathcal{K}| \le \exp(cn), \qquad (4)$$
$$\frac{1}{n} H(K) > H - \delta. \qquad (5)$$

Remark 1.
Together with (3), the technical condition (4) ensures that for every $\epsilon > 0$ and sufficiently large blocklength $n$ the set
$$\mathcal{A} = \left\{ g \in \mathbb{C} : \left| \frac{H(K \mid G = g)}{n} - \frac{H(L \mid G = g)}{n} \right| < \epsilon \right\}$$
satisfies $\mathbb{P}[\mathcal{A}] \ge 1 - \eta$. This follows from the analogous statement in [12].
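As an illustration of condition (3), the following sketch draws many realizations of the channel gain and checks how often the conditional error probability of the pair $(K, L)$ exceeds the threshold $\alpha$. Both the Rayleigh law for $|G|$ and the step-shaped error profile are purely illustrative assumptions, not part of the model.

```python
# Monte Carlo illustration of the outage-style condition (3): the agreement
# of (K, L) is measured conditionally on the realized gain G, and (3) only
# requires P[K != L | G] <= alpha outside an outage set of probability <= eta.
# The fading law for |G| and the error profile err(g) are both assumptions.
import numpy as np

rng = np.random.default_rng(2)
alpha, trials = 0.05, 100_000

g = rng.rayleigh(scale=1.0, size=trials)   # assumed fading law for |G|
err = np.where(g < 0.3, 0.5, 0.01)         # assumed P[K != L | G = g]

outage_prob = np.mean(err > alpha)         # estimates P[ P[K != L | G] > alpha ]
print(outage_prob)
```

Condition (3) holds for a given $\eta$ exactly when the printed estimate is at most $\eta$; under the assumed profile it is roughly $\mathbb{P}[|G| < 0.3] \approx 0.044$.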
Definition 2.
The $\eta$-outage common randomness capacity $C_{\eta,\mathrm{CR}}(P)$ is the maximum achievable $\eta$-outage common randomness rate.

Next, we define an achievable $\eta$-outage rate for the slow fading channel $W_G$ and the corresponding $\eta$-outage capacity. For this purpose, we begin by providing the definition of a transmission code for $W_G$.

Definition 3.
A transmission code $\Gamma$ of length $n$ and size $\|\Gamma\|$ for the channel $W_G$ is a family of pairs $(t_\ell, \mathcal{D}_\ell)$, $\ell = 1, \ldots, \|\Gamma\|$, such that for all $\ell, j \in \{1, \ldots, \|\Gamma\|\}$ we have:
$$t_\ell = (t_{\ell,1}, \ldots, t_{\ell,n}) \in \mathbb{C}^n, \quad \mathcal{D}_\ell \subset \mathbb{C}^n,$$
$$\frac{1}{n} \sum_{i=1}^{n} |t_{\ell,i}|^2 \le P,$$
$$\mathcal{D}_\ell \cap \mathcal{D}_j = \emptyset \quad \text{for } \ell \ne j.$$
The maximum error probability, as a random variable depending on $G$, is expressed as follows:
$$e(\Gamma, G) = \max_{\ell \in \{1, \ldots, \|\Gamma\|\}} W_G(\mathcal{D}_\ell^c \mid t_\ell).$$

Remark 2.
Throughout the paper, we consider the maximum error probability criterion.
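To make Definition 3 concrete, the toy sketch below builds a length-$n$ code with two antipodal codewords (an assumed construction, not taken from the paper) and estimates the maximum error probability $e(\Gamma, g)$ for a fixed realized gain $g$ by Monte Carlo, using nearest-neighbor decoding sets with the gain known to the decoder.

```python
# Toy transmission code for W_G (Definition 3): two antipodal codewords
# meeting the power constraint, with nearest-neighbor decoding sets D_ell.
# The construction and the coherent decoder are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, P, sigma2, trials = 8, 1.0, 1.0, 20_000

codebook = np.array([np.full(n, np.sqrt(P)), -np.full(n, np.sqrt(P))])

def max_error(g):
    """Monte Carlo estimate of e(Gamma, g) for a fixed realized gain g."""
    errs = []
    for ell, t in enumerate(codebook):
        xi = np.sqrt(sigma2 / 2) * (rng.normal(size=(trials, n))
                                    + 1j * rng.normal(size=(trials, n)))
        z = g * t + xi                                  # received blocks
        # decode to the closer faded codeword g * t_j (decoder knows g)
        d0 = np.sum(np.abs(z - g * codebook[0]) ** 2, axis=1)
        d1 = np.sum(np.abs(z - g * codebook[1]) ** 2, axis=1)
        decoded = (d1 < d0).astype(int)
        errs.append(np.mean(decoded != ell))
    return max(errs)                                    # max over codewords

print(max_error(1.0) < max_error(0.1))
```

A small realized gain drives $e(\Gamma, g)$ up, which is exactly the outage phenomenon quantified in Definitions 4 and 5.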
Definition 4.
Let $0 \le \eta < 1$. A real number $R$ is called an achievable $\eta$-outage rate of the channel $W_G$ if for every $\theta, \delta > 0$ there exists a code sequence $(\Gamma_n)_{n=1}^{\infty}$ such that
$$\frac{\log \|\Gamma_n\|}{n} \ge R - \delta$$
and
$$\mathbb{P}[e(\Gamma_n, G) \le \theta] \ge 1 - \eta$$
for sufficiently large $n$.

Definition 5.
The supremum of all achievable $\eta$-outage rates is called the $\eta$-outage capacity of the channel $W_G$ and is denoted by $C_\eta$.

C. Main Results

In this section, we propose a single-letter characterization of the $\eta$-outage channel capacity in Theorem 1 and of the $\eta$-outage CR capacity in Theorem 2. Theorem 1 and Theorem 2 are proved in Section III and Section IV, respectively.

Theorem 1.
Let $\gamma_0 = \sup\{\gamma : \mathbb{P}[|G| < \gamma] \le \eta\}$. The $\eta$-outage capacity of the channel $W_G$ is equal to
$$C_\eta(P) = \log\left(1 + \frac{\gamma_0^2 P}{\sigma^2}\right).$$
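For a concrete feel for Theorem 1, the sketch below evaluates $C_\eta(P)$ under the assumption $G \sim \mathcal{N}_{\mathbb{C}}(0, 1)$, so that $|G|$ is Rayleigh-distributed with $\mathbb{P}[|G| < \gamma] = 1 - \exp(-\gamma^2)$ and $\gamma_0$ is the $\eta$-quantile of this law; the Rayleigh choice is only one admissible example of a state distribution.

```python
# Numerical evaluation of Theorem 1 for an assumed Rayleigh fading law:
# gamma_0 = sup{gamma : P[|G| < gamma] <= eta} solves 1 - exp(-gamma_0^2) = eta,
# i.e. gamma_0 = sqrt(-ln(1 - eta)). Only gamma_0 depends on the state law.
import math

def outage_capacity_rayleigh(eta, P, sigma2):
    gamma0 = math.sqrt(-math.log(1.0 - eta))  # eta-quantile of the Rayleigh CDF
    return math.log2(1.0 + gamma0 ** 2 * P / sigma2)

print(outage_capacity_rayleigh(eta=0.1, P=10.0, sigma2=1.0))
```

Increasing the tolerated outage probability $\eta$ enlarges $\gamma_0$ and hence the $\eta$-outage capacity.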
Theorem 2. For the model described in Section II-A, the $\eta$-outage CR capacity is equal to
$$C_{\eta,\mathrm{CR}}(P) = \max_{\substack{U:\ U \,◦\!-\, X \,◦\!-\, Y \\ I(U;X) - I(U;Y) \le C_\eta(P)}} I(U;X).$$

III. PROOF OF THEOREM 1

Let $\gamma_0 = \sup\{\gamma : \mathbb{P}[|G| < \gamma] \le \eta\}$. We will first prove the following lemma:
Lemma 1. $\mathbb{P}[|G| < \gamma_0] \le \eta$, so the supremum actually is a maximum.

Proof. Let $\gamma_n \nearrow \gamma_0$ be a sequence converging to $\gamma_0$ from the left. Then
$$\{\gamma \in \mathbb{R} : \gamma < \gamma_0\} = \bigcup_{n=1}^{\infty} \{\gamma \in \mathbb{R} : \gamma < \gamma_n\}.$$
From the sigma-continuity of probability measures, it follows that
$$\mathbb{P}[|G| < \gamma_0] = \lim_n \mathbb{P}[|G| < \gamma_n] \le \eta.$$

Now, we begin with the direct part of Theorem 1. We will show that
$$C_\eta(P) \ge \log\left(1 + \frac{\gamma_0^2 P}{\sigma^2}\right). \qquad (6)$$
Let $\theta, \delta > 0$. It is well known that there exist a code sequence $(\Gamma_n)_{n=1}^{\infty}$ and a blocklength $n_0$ such that
$$\frac{\log \|\Gamma_n\|}{n} \ge \log\left(1 + \frac{\gamma_0^2 P}{\sigma^2}\right) - \delta$$
and $e(\Gamma_n, \gamma_0) \le \theta$ for $n \ge n_0$. The degradedness of the Gaussian channels implies that also $e(\Gamma_n, \gamma) \le \theta$ for $n \ge n_0$, provided that $\gamma \ge \gamma_0$. Therefore, for $n \ge n_0$, Lemma 1 implies
$$\mathbb{P}[e(\Gamma_n, G) \le \theta] \ge \mathbb{P}[|G| \ge \gamma_0] = 1 - \mathbb{P}[|G| < \gamma_0] \ge 1 - \eta.$$
This implies (6).

Next, we prove the converse of Theorem 1. We will show that
$$C_\eta(P) \le \log\left(1 + \frac{\gamma_0^2 P}{\sigma^2}\right). \qquad (7)$$
Suppose this were not true. Then there exists an $\varepsilon > 0$ such that for all $\theta, \delta > 0$ there exists a code sequence $(\Gamma_n)_{n=1}^{\infty}$ satisfying
$$\frac{\log \|\Gamma_n\|}{n} \ge \log\left(1 + \frac{(\gamma_0 + \varepsilon)^2 P}{\sigma^2}\right) - \delta \qquad (8)$$
and
$$\mathbb{P}[e(\Gamma_n, G) \le \theta] \ge 1 - \eta \qquad (9)$$
for sufficiently large $n$. The degradedness implies $e(\Gamma_n, \gamma) \ge e(\Gamma_n, \gamma_0 + \varepsilon)$ for all $\gamma \le \gamma_0 + \varepsilon$. Since $\delta$ may be arbitrary, we may choose it in such a way that the right-hand side of (8) is strictly larger than $\log(1 + \gamma_0^2 P / \sigma^2)$. We define $\gamma_1$ to be the solution of the equation
$$\log\left(1 + \frac{\gamma_1^2 P}{\sigma^2}\right) = \log\left(1 + \frac{(\gamma_0 + \varepsilon)^2 P}{\sigma^2}\right) - \delta.$$
It holds for large $n$ that the error probability is greater than $\theta$ when $|G| < \gamma_1$. It follows that
$$\mathbb{P}[e(\Gamma_n, G) > \theta] \ge \mathbb{P}[|G| < \gamma_1] > \eta$$
by the definition of $\gamma_0$, where we used that $\gamma_1 > \gamma_0$ from the choice of $\delta$. This is a contradiction to (9), and so (7) must be true. This completes the proof of Theorem 1.

IV. PROOF OF THEOREM 2

A. Converse Proof
Let $(K, L)$ be a permissible pair according to the CR-generation protocol introduced in Section II-A, with power constraint as in (1). Let $T^n$ and $Z^n$ be the random channel input and output sequences, respectively. We further assume that $(K, L)$ satisfies (3), (4) and (5). We are going to show that for $\alpha' > 0$
$$\frac{H(K)}{n} \le \max_{\substack{U:\ U \,◦\!-\, X \,◦\!-\, Y \\ I(U;X) - I(U;Y) \le C_\eta(P) + \alpha'}} I(U;X).$$
In our proof, we will use the following lemma:
Lemma 2 (Lemma 17.12 in [23]). For arbitrary random variables $S$ and $R$ and sequences of random variables $X^n$ and $Y^n$, it holds that
$$I(S; X^n \mid R) - I(S; Y^n \mid R) = \sum_{i=1}^{n} I(S; X_i \mid X_1 \ldots X_{i-1} Y_{i+1} \ldots Y_n R) - \sum_{i=1}^{n} I(S; Y_i \mid X_1 \ldots X_{i-1} Y_{i+1} \ldots Y_n R) = n\,[I(S; X_J \mid V) - I(S; Y_J \mid V)],$$
where $V = X_1 \ldots X_{J-1} Y_{J+1} \ldots Y_n R J$, with $J$ being a random variable independent of $R$, $S$, $X^n$ and $Y^n$ and uniformly distributed on $\{1, \ldots, n\}$.

Let $J$ be a random variable uniformly distributed on $\{1, \ldots, n\}$ and independent of $K$, $X^n$ and $Y^n$. We further define
$$U = K X_1 \ldots X_{J-1} Y_{J+1} \ldots Y_n J.$$
Notice that
$$H(K) = I(K; X^n) \overset{(i)}{=} \sum_{i=1}^{n} I(K; X_i \mid X_1 \ldots X_{i-1}) = n\, I(K; X_J \mid X_1 \ldots X_{J-1}, J) \overset{(ii)}{\le} n\, I(U; X_J),$$
where $(i)$ and $(ii)$ follow from the chain rule for mutual information.

We will show next that for $\alpha' > 0$
$$I(U; X_J) - I(U; Y_J) \le C_\eta(P) + \alpha'.$$
Applying Lemma 2 for $S = K$, $R = \emptyset$ with $V = X_1 \ldots X_{J-1} Y_{J+1} \ldots Y_n J$ yields
$$I(K; X^n) - I(K; Y^n) = n\,[I(K; X_J \mid V) - I(K; Y_J \mid V)] \overset{(a)}{=} n\,[I(KV; X_J) - I(K; V) - I(KV; Y_J) + I(K; V)] \overset{(b)}{=} n\,[I(U; X_J) - I(U; Y_J)], \qquad (10)$$
where $(a)$ follows from the chain rule for mutual information and $(b)$ follows from $U = KV$.

It follows, using (10), that
$$H(K \mid Y^n) = H(K) - I(K; Y^n) \overset{(c)}{=} H(K) - H(K \mid X^n) - I(K; Y^n) = I(K; X^n) - I(K; Y^n) = n\,[I(U; X_J) - I(U; Y_J)], \qquad (11)$$
where $(c)$ holds because $H(K \mid X^n) = 0$, since $K = \Phi(X^n)$ by (2).

Let $\gamma_0 = \sup\{\gamma : \mathbb{P}[|G| < \gamma] \le \eta\}$. We consider, for $\epsilon > 0$ arbitrarily small, the set
$$\Omega = \big\{ g \in \mathbb{C} : \mathbb{P}[K \ne L \mid G = g] \le \alpha \text{ and } |g| \le \gamma_0 + \epsilon \big\}.$$

Lemma 3. $\mathbb{P}[\Omega] > 0$.

Proof.
We start by proving the following claim:
Claim: $\mathbb{P}[|G| \le \gamma_0 + \epsilon] \ge \eta$.

Suppose this were not true, i.e., $\mathbb{P}[|G| \le \gamma_0 + \epsilon] < \eta$. Let $\kappa_n = \gamma_0 + \epsilon + \frac{1}{n}$. Since the cumulative distribution function is right-continuous,
$$\mathbb{P}[|G| \le \gamma_0 + \epsilon] = \lim_n \mathbb{P}[|G| \le \kappa_n] < \eta.$$
Thus, for sufficiently large $n$ it has to hold that
$$\mathbb{P}[|G| < \kappa_n] \le \mathbb{P}[|G| \le \kappa_n] < \eta.$$
This contradicts the definition of $\gamma_0$, and so the claim must be true.

We now distinguish three cases:

If $0 < \eta < 1$ and $\mathbb{P}[|G| \le \gamma_0 + \epsilon] = \eta$: In this case we have, using (3):
$$1 - \eta \le \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha\big]$$
$$= \mathbb{P}[|G| \le \gamma_0 + \epsilon]\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + \mathbb{P}[|G| > \gamma_0 + \epsilon]\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| > \gamma_0 + \epsilon\big]$$
$$= \eta\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + (1 - \eta)\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| > \gamma_0 + \epsilon\big]$$
$$\le \eta\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + (1 - \eta)$$
$$< \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + (1 - \eta),$$
where we used that $\eta < 1$. This means that
$$\mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] > 0.$$
In addition, since $\eta > 0$, we have
$$\mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha,\ |G| \le \gamma_0 + \epsilon\big] > 0.$$
Thus, if $0 < \eta < 1$ and $\mathbb{P}[|G| \le \gamma_0 + \epsilon] = \eta$, it holds that $\mathbb{P}[\Omega] > 0$.

If $0 < \eta < 1$ and $\mathbb{P}[|G| \le \gamma_0 + \epsilon] > \eta$: In this case, it holds that $\mathbb{P}[|G| \le \gamma_0 + \epsilon] = \eta_1$, where $0 < \eta < \eta_1 \le 1$. We have, using (3):
$$1 - \eta \le \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha\big]$$
$$= \mathbb{P}[|G| \le \gamma_0 + \epsilon]\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + \mathbb{P}[|G| > \gamma_0 + \epsilon]\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| > \gamma_0 + \epsilon\big]$$
$$= \eta_1\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + (1 - \eta_1)\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| > \gamma_0 + \epsilon\big]$$
$$\le \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + (1 - \eta_1)$$
$$< \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + (1 - \eta),$$
where we used that $1 - \eta_1 < 1 - \eta$. This means that
$$\mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] > 0.$$
In addition, since $\eta_1 > 0$, it follows that
$$\mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha,\ |G| \le \gamma_0 + \epsilon\big] > 0.$$
Thus, if $0 < \eta < 1$ and $\mathbb{P}[|G| \le \gamma_0 + \epsilon] > \eta$, it holds that $\mathbb{P}[\Omega] > 0$.

If $\eta = 0$: In this case, we have $\gamma_0 = \inf\{|g| : g \in \mathrm{supp}(G)\}$ and it holds that $\mathbb{P}[|G| \le \gamma_0 + \epsilon] > 0$. We have, using (3):
$$1 = \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha\big]$$
$$= \mathbb{P}[|G| \le \gamma_0 + \epsilon]\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + \mathbb{P}[|G| > \gamma_0 + \epsilon]\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| > \gamma_0 + \epsilon\big]$$
$$< \mathbb{P}[|G| \le \gamma_0 + \epsilon]\, \mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] + 1,$$
where we used that $\mathbb{P}[|G| > \gamma_0 + \epsilon] < 1$. This yields
$$\mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha \,\big|\, |G| \le \gamma_0 + \epsilon\big] > 0.$$
In addition, since $\mathbb{P}[|G| \le \gamma_0 + \epsilon] > 0$, it follows that
$$\mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha,\ |G| \le \gamma_0 + \epsilon\big] > 0.$$
Thus, for $\eta = 0$ it also holds that $\mathbb{P}[\Omega] > 0$.
Next, we define $\tilde{G}$ to be a random variable, independent of $X^n$, $Y^n$ and $\xi^n$, with alphabet $\Omega$, such that for every Borel set $\mathcal{A} \subseteq \mathbb{C}$ it holds that
$$\mathbb{P}\big[\tilde{G} \in \mathcal{A}\big] = \mathbb{P}[G \in \mathcal{A} \mid G \in \Omega].$$
We fix the CR-generation protocol and change the state distribution of the slow fading channel. We obtain the following new channel:
$$\tilde{Z}_i = \tilde{G} T_i + \xi_i, \quad i = 1, \ldots, n,$$
where $\tilde{Z}^n$ is the new output sequence. We further define $\tilde{L}$ such that $\tilde{L} = \Psi(Y^n, \tilde{Z}^n)$. Clearly, it holds that
$$\mathbb{P}\big[K \ne \tilde{L} \mid \tilde{G} = g\big] \le \alpha \quad \forall g \in \Omega, \qquad (12)$$
and that
$$\log\left(1 + \frac{|g|^2 P}{\sigma^2}\right) \le \log\left(1 + \frac{(\gamma_0 + \epsilon)^2 P}{\sigma^2}\right) \quad \forall g \in \Omega. \qquad (13)$$
Furthermore, using (1) and that $\xi_i \sim \mathcal{N}_{\mathbb{C}}(0, \sigma^2)$, $i = 1, \ldots, n$, it holds for $i = 1, \ldots, n$ that
$$I(T_i; \tilde{Z}_i \mid \tilde{G} = g) \le \log\left(1 + \frac{|g|^2 P}{\sigma^2}\right) \quad \forall g \in \Omega. \qquad (14)$$
We have:
$$H(K \mid Y^n) = H(K \mid \tilde{G}, Y^n) = H(K \mid \tilde{G}, Y^n, \tilde{Z}^n) + I(K; \tilde{Z}^n \mid \tilde{G}, Y^n),$$
where we used that $\tilde{G}$ is independent of $(K, Y^n)$. On the one hand, we have:
$$H\big(K \mid \tilde{Z}^n, \tilde{G}, Y^n\big) \overset{(a)}{\le} H\big(K \mid \tilde{L}, \tilde{G}\big) \overset{(b)}{\le} 1 + \log|\mathcal{K}|\; \mathbb{E}\big[\mathbb{P}[K \ne \tilde{L} \mid \tilde{G}]\big] \overset{(c)}{\le} 1 + \alpha \log|\mathcal{K}| \overset{(d)}{\le} 1 + \alpha c n,$$
where $(a)$ follows from $\tilde{L} = \Psi(Y^n, \tilde{Z}^n)$, $(b)$ follows from Fano's inequality, $(c)$ follows from (12) and from $\mathbb{P}[\tilde{G} \in \Omega] = 1$, and $(d)$ follows from $\log|\mathcal{K}| \le cn$ in (4).

On the other hand, we have:
$$I(K; \tilde{Z}^n \mid \tilde{G}, Y^n) \le I(X^n K; \tilde{Z}^n \mid \tilde{G}, Y^n) \overset{(a)}{\le} I(T^n; \tilde{Z}^n \mid \tilde{G}, Y^n) = h(\tilde{Z}^n \mid \tilde{G}, Y^n) - h(\tilde{Z}^n \mid T^n, \tilde{G}, Y^n)$$
$$\overset{(b)}{=} h(\tilde{Z}^n \mid \tilde{G}, Y^n) - h(\tilde{Z}^n \mid \tilde{G}, T^n) \overset{(c)}{\le} h(\tilde{Z}^n \mid \tilde{G}) - h(\tilde{Z}^n \mid \tilde{G}, T^n) = I(T^n; \tilde{Z}^n \mid \tilde{G}) \overset{(d)}{=} \sum_{i=1}^{n} I(\tilde{Z}_i; T^n \mid \tilde{G}, \tilde{Z}^{i-1})$$
$$= \sum_{i=1}^{n} \big[h(\tilde{Z}_i \mid \tilde{G}, \tilde{Z}^{i-1}) - h(\tilde{Z}_i \mid \tilde{G}, T^n, \tilde{Z}^{i-1})\big] \overset{(e)}{=} \sum_{i=1}^{n} \big[h(\tilde{Z}_i \mid \tilde{G}, \tilde{Z}^{i-1}) - h(\tilde{Z}_i \mid \tilde{G}, T_i)\big] \overset{(f)}{\le} \sum_{i=1}^{n} \big[h(\tilde{Z}_i \mid \tilde{G}) - h(\tilde{Z}_i \mid \tilde{G}, T_i)\big]$$
$$= \sum_{i=1}^{n} I(T_i; \tilde{Z}_i \mid \tilde{G}) \overset{(g)}{\le} n\, \mathbb{E}\left[\log\left(1 + \frac{|\tilde{G}|^2 P}{\sigma^2}\right)\right] \overset{(h)}{\le} n \log\left(1 + \frac{(\gamma_0 + \epsilon)^2 P}{\sigma^2}\right) \overset{(i)}{=} n\,(C_\eta(P) + \epsilon'),$$
with $\epsilon'$
being arbitrarily small, where $(a)$ follows from the Data Processing Inequality because $Y^n ◦\!-\, X^n K ◦\!-\, \tilde{G} T^n ◦\!-\, \tilde{Z}^n$ forms a Markov chain, $(b)$ follows because $Y^n ◦\!-\, X^n K ◦\!-\, \tilde{G} T^n ◦\!-\, \tilde{Z}^n$ forms a Markov chain, $(c)$ and $(f)$ follow because conditioning does not increase entropy, $(d)$ follows from the chain rule for mutual information, $(e)$ follows because $T_1 \ldots T_{i-1} T_{i+1} \ldots T_n \tilde{Z}^{i-1} ◦\!-\, \tilde{G} T_i ◦\!-\, \tilde{Z}_i$ forms a Markov chain, $(g)$ follows from (14) and from $\mathbb{P}[\tilde{G} \in \Omega] = 1$, $(h)$ follows from (13) and from $\mathbb{P}[\tilde{G} \in \Omega] = 1$, and $(i)$ follows from Theorem 1, using that $\epsilon$ is arbitrarily small.

This proves that for $0 \le \eta < 1$
$$\frac{H(K \mid Y^n)}{n} \le C_\eta(P) + \alpha', \qquad (15)$$
where $\alpha' = \frac{1}{n} + \alpha c + \epsilon' > 0$. From (11) and (15), we deduce that for $0 \le \eta < 1$
$$I(U; X_J) - I(U; Y_J) \le C_\eta(P) + \alpha',$$
where $U ◦\!-\, X_J ◦\!-\, Y_J$. Since the joint distribution of $(X_J, Y_J)$ is equal to $P_{XY}$, $\frac{H(K)}{n}$ is upper-bounded by $I(U; X)$ subject to $I(U; X) - I(U; Y) \le C_\eta(P) + \alpha'$ with $U$ satisfying $U ◦\!-\, X ◦\!-\, Y$. As a result, for $\alpha' > 0$, it holds that
$$\frac{H(K)}{n} \le \max_{\substack{U:\ U \,◦\!-\, X \,◦\!-\, Y \\ I(U;X) - I(U;Y) \le C_\eta(P) + \alpha'}} I(U; X).$$
Here, $\alpha'$ is arbitrarily small for sufficiently large $n$. This completes the converse proof.

B. Direct Proof
We extend the coding scheme provided in [12] to slow fading channels. By continuity, it suffices to show that
$$C'_{\eta,\mathrm{CR}}(P) = \max_{\substack{U:\ U \,◦\!-\, X \,◦\!-\, Y \\ I(U;X) - I(U;Y) \le C'}} I(U; X)$$
is an achievable $\eta$-outage CR rate for every $C' < C_\eta(P)$. Let $U$ be a random variable satisfying $I(U; X) - I(U; Y) \le C'$. We are going to show that $H = I(U; X)$ is an achievable $\eta$-outage CR rate. Without loss of generality, assume that the distribution of $U$ is a possible type for block length $n$. Let
$$N_1 = \exp\big(n [I(U; X) - I(U; Y) + 3\delta]\big),$$
$$N_2 = \exp\big(n [I(U; Y) - \delta]\big).$$
For each pair $(i, j)$ with $1 \le i \le N_1$ and $1 \le j \le N_2$, we define a random sequence $U_{i,j} \in \mathcal{U}^n$ of type $P_U$. Each realization $u_{i,j}$ of $U_{i,j}$ is known to both terminals. This means that $N_1$ codebooks $\mathcal{C}_i$, $1 \le i \le N_1$, are known to both terminals, where each codebook contains $N_2$ sequences $u_{i,j}$, $j = 1, \ldots, N_2$. It holds for every $X$-typical $x$ that
$$\mathbb{P}\big[\exists (i,j) \text{ s.t. } U_{i,j} \in T^n_{U|X}(x) \mid X^n = x\big] \ge 1 - \exp(-\exp(nc')),$$
for a suitable $c' > 0$ [12]. For $K(x)$, we choose a sequence $u_{i,j}$ jointly typical with $x$ (either one if there are several). Let $f(x) = i$ if $K(x) \in \mathcal{C}_i$. If no such $u_{i,j}$ exists, then $f(x) = N_1 + 1$ and $K(x)$ is set to a constant sequence $u_0$ different from all the $u_{i,j}$'s and known to both terminals. Using that $C' < C_\eta(P)$, we have
$$\frac{\log \|f\|}{n} = \frac{\log(N_1 + 1)}{n} \le C_\eta(P) - \delta', \qquad (16)$$
where $\delta'$ is sufficiently small for $n$ sufficiently large. The message $i^\star = f(x)$, with $i^\star \in \{1, \ldots, N_1 + 1\}$, is encoded to a sequence $t$ using a code sequence $(\Gamma^\star_n)_{n=1}^{\infty}$ with rate
$$\frac{\log \|\Gamma^\star_n\|}{n} = \frac{\log \|f\|}{n}$$
satisfying (16) and with error probability $e(\Gamma^\star_n, G)$ satisfying
$$\mathbb{P}[e(\Gamma^\star_n, G) \le \theta] \ge 1 - \eta,$$
where $\theta$ is sufficiently small for sufficiently large $n$. From the definition of the $\eta$-outage capacity, we know that such a code sequence exists. The sequence $t$ is sent over the slow fading channel. Let $z$ be the channel output sequence.
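As a numerical sanity check of the codebook construction above, the sketch below computes the exponents of $N_1$ and $N_2$ and the rate $\log(N_1 + 1)/n$ that the channel code $\Gamma^\star_n$ must support; the mutual-information values are assumed example numbers, not derived from a particular source.

```python
# Sizes of the double-indexed codebooks in the direct part: N1 bins of N2
# sequences each, so N1 * N2 = exp(n [I(U;X) + 2 delta]) sequences in total,
# while only log(N1 + 1) bits must cross the fading channel.
# IUX and IUY below are assumed example values of I(U;X) and I(U;Y) in bits.
import math

n, delta = 200, 0.01
IUX, IUY = 0.9, 0.6

log_N1 = n * (IUX - IUY + 3 * delta)   # log N1 = n [I(U;X) - I(U;Y) + 3 delta]
log_N2 = n * (IUY - delta)             # log N2 = n [I(U;Y) - delta]

# Rate the slow fading channel must support with outage probability <= eta:
channel_rate = math.log2(2 ** log_N1 + 1) / n
print(channel_rate)
```

The printed channel rate sits just above $I(U;X) - I(U;Y)$, while the full CR rate $\approx I(U;X)$ is carried by the shared codebooks rather than by the channel.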
Terminal $B$ decodes the message $\tilde{i}^\star$ from the knowledge of $z$. Let $L(y, \tilde{i}^\star) = u_{\tilde{i}^\star, j}$ if $u_{\tilde{i}^\star, j}$ and $y$ are $UY$-typical. If there is no such $u_{\tilde{i}^\star, j}$, or there are several, $L$ is set equal to $u_0$ (since $K$ and $L$ must have the same alphabet). Now, we are going to show that the requirements (3), (4) and (5) are satisfied. Clearly, (4) is satisfied for $c = 2(H(X) + 1)$ and $n$ sufficiently large:
$$|\mathcal{K}| = N_1 N_2 + 1 = \exp\big(n [I(U; X) + 2\delta]\big) + 1 \le \exp\big(2n [H(X) + 1]\big).$$
We next define, for a fixed $u_{i,j}$, the set
$$\Omega = \{ x \in \mathcal{X}^n : (x, u_{i,j}) \text{ jointly typical} \}.$$
As shown in [12], it holds that
$$\mathbb{P}[K = u_{i,j}] = \sum_{x \in \Omega} \mathbb{P}[K = u_{i,j} \mid X^n = x]\, P^n_X(x) + \sum_{x \in \Omega^c} \mathbb{P}[K = u_{i,j} \mid X^n = x]\, P^n_X(x)$$
$$\overset{(i)}{=} \sum_{x \in \Omega} \mathbb{P}[K = u_{i,j} \mid X^n = x]\, P^n_X(x) \le \sum_{x \in \Omega} P^n_X(x) = P^n_X\big(\{x : (x, u_{i,j}) \text{ jointly typical}\}\big) = \exp\big(-nI(U; X) + o(n)\big),$$
where $(i)$ follows because for $(x, u_{i,j})$ not jointly typical, we have $\mathbb{P}[K = u_{i,j} \mid X^n = x] = 0$. This yields
$$H(K) \ge nI(U; X) + o(n) = nH + o(n).$$
Thus, (5) is satisfied. It remains to prove that (3) is satisfied. Let $M = U_{1,1} \ldots U_{N_1, N_2}$. We define the following two sets, which depend on $M$:
$$S_1(M) = \big\{ (x, y) : (K(x), x, y) \in T^n_{UXY} \big\}$$
and
$$S_2(M) = \big\{ (x, y) \in S_1(M) : \exists\, U_{i,\ell} \ne U_{i,j} = K(x) \text{ jointly typical with } y \text{ (with the same first index } i) \big\}.$$
It is proved in [12] that
$$\mathbb{E}_M\big[P^n_{XY}(S_1^c(M)) + P^n_{XY}(S_2(M))\big] \le \beta, \qquad (17)$$
where $\beta$ is exponentially small for sufficiently large $n$.

Remark 3. $P^n_{XY}(S_1^c(M))$ and $P^n_{XY}(S_2(M))$ are here random variables depending on $M$.

We choose a realization $m = u_{1,1} \ldots u_{N_1, N_2}$ satisfying
$$P^n_{XY}(S_1^c(m)) + P^n_{XY}(S_2(m)) \le \beta.$$
From (17), we know that such a realization exists. Now, we define the following event:
$$D_m = \text{“} K(X^n) \text{ is equal to none of the } u_{i,j}\text{'s”}.$$
We further define $I^\star = f(X^n)$ to be the random variable modeling the message encoded by Terminal $A$ and $\tilde{I}^\star$ to be the random variable modeling the message decoded by Terminal $B$. We have:
$$\mathbb{P}[K \ne L \mid G] = \mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star]\, \mathbb{P}[I^\star = \tilde{I}^\star \mid G] + \mathbb{P}[K \ne L \mid G, I^\star \ne \tilde{I}^\star]\, \mathbb{P}[I^\star \ne \tilde{I}^\star \mid G]$$
$$\le \mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star] + \mathbb{P}[I^\star \ne \tilde{I}^\star \mid G].$$
Here:
$$\mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star] = \mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star, D_m]\, \mathbb{P}[D_m \mid G, I^\star = \tilde{I}^\star] + \mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star, D_m^c]\, \mathbb{P}[D_m^c \mid G, I^\star = \tilde{I}^\star]$$
$$\overset{(i)}{=} \mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star, D_m^c]\, \mathbb{P}[D_m^c \mid G, I^\star = \tilde{I}^\star] \le \mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star, D_m^c],$$
where $(i)$ follows from $\mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star, D_m] = 0$, since conditioned on $G$, $I^\star = \tilde{I}^\star$ and $D_m$, we know that $K$ and $L$ are both equal to $u_0$. It follows that
$$\mathbb{P}[K \ne L \mid G] \le \mathbb{P}[K \ne L \mid G, I^\star = \tilde{I}^\star, D_m^c] + \mathbb{P}[I^\star \ne \tilde{I}^\star \mid G] \le P^n_{XY}\big(S_1^c(m) \cup S_2(m)\big) + \mathbb{P}[I^\star \ne \tilde{I}^\star \mid G]$$
$$\overset{(a)}{=} P^n_{XY}(S_1^c(m)) + P^n_{XY}(S_2(m)) + \mathbb{P}[I^\star \ne \tilde{I}^\star \mid G] \le \beta + \mathbb{P}[I^\star \ne \tilde{I}^\star \mid G],$$
where $(a)$ follows because $S_1^c(m)$ and $S_2(m)$ are disjoint. As a result, we have:
$$\mathbb{P}\big[I^\star \ne \tilde{I}^\star \mid G\big] \le \theta \implies \mathbb{P}[K \ne L \mid G] \le \beta + \theta.$$
By choosing $\alpha = \beta + \theta$, we have:
$$\mathbb{P}\big[I^\star \ne \tilde{I}^\star \mid G\big] \le \theta \implies \mathbb{P}[K \ne L \mid G] \le \alpha.$$
Thus:
$$\mathbb{P}\big[\mathbb{P}[K \ne L \mid G] \le \alpha\big] \ge \mathbb{P}\big[\mathbb{P}[I^\star \ne \tilde{I}^\star \mid G] \le \theta\big] \ge 1 - \eta.$$
Here, $\alpha$ is arbitrarily small for sufficiently large $n$. This completes the direct proof.
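The formula of Theorem 2, whose proof is now complete, can be evaluated numerically. The sketch below does so by brute force for an assumed doubly symmetric binary source with $Y = X \oplus \mathrm{Bern}(p)$ and restricts the auxiliary variable to the binary choice $X = U \oplus \mathrm{Bern}(q)$ with $U$ uniform; this restriction is an assumption made for tractability, not a derived cardinality bound.

```python
# Brute-force evaluation of the Theorem 2 optimization for an assumed doubly
# symmetric binary source: with X = U xor Bern(q) and Y = X xor Bern(p),
# I(U;X) = 1 - h(q) and I(U;Y) = 1 - h(q*p), where q*p = q(1-p) + (1-q)p.
# We maximize I(U;X) subject to I(U;X) - I(U;Y) <= C (the channel constraint).
import math

def h(x):                     # binary entropy in bits
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def cr_capacity_dsbs(p, C, grid=5000):
    best = 0.0
    for k in range(grid + 1):
        q = 0.5 * k / grid                 # crossover of the U -> X channel
        qp = q * (1 - p) + (1 - q) * p
        iux, iuy = 1 - h(q), 1 - h(qp)
        if iux - iuy <= C:                 # constraint I(U;X)-I(U;Y) <= C
            best = max(best, iux)
    return best

print(cr_capacity_dsbs(p=0.1, C=0.5), cr_capacity_dsbs(p=0.1, C=0.2))
```

When the constraint is loose enough that $q = 0$ is feasible, i.e. $C \ge h(p)$, the full source entropy $H(X) = 1$ bit is extractable as CR; tighter channel constraints reduce the rate.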
V. CONCLUSION

In this paper, we have examined the problem of common randomness generation over slow fading channels, given their practical relevance in many wireless communication situations. The generated CR can be exploited in the identification scheme to improve the performance gain. We established a single-letter characterization of the outage CR capacity over slow fading channels with AWGN and with arbitrary state distribution, using our characterization of the corresponding channel outage capacity. As future work, it would be interesting to study the problem of CR generation over single-user MIMO slow fading channels since, compared to SISO systems, point-to-point MIMO communication systems offer higher rates, more reliability and better resistance to interference. Future research might also focus on the problem of CR generation over fast fading channels.
ACKNOWLEDGMENTS
We thank the German Research Foundation (DFG) within the Gottfried Wilhelm Leibniz Prize under Grant BO 1734/20-1 for their support of H. Boche and M. Wiese. Thanks also go to the German Federal Ministry of Education and Research (BMBF) within the national initiative for "Post Shannon Communication (NewCom)", with the project "Basics, simulation and demonstration for new communication models" under Grant 16KIS1003K for their support of H. Boche and R. Ezzine, and with the project "Coding theory and coding methods for new communication models" under Grant 16KIS1005 for their support of C. Deppe. Further, we thank the German Research Foundation (DFG) within Germany's Excellence Strategy EXC-2111-390814868 and EXC-2092 CASA-390781972 for their support of H. Boche and M. Wiese.

REFERENCES

[1] M. Sudan, H. Tyagi, and S. Watanabe, "Communication for generating correlation: A unifying survey," IEEE Transactions on Information Theory, vol. 66, no. 1, pp. 5–37, 2020.
[2] M. Bloch and J. Barros, Physical-Layer Security: From Information Theory to Security Engineering. Cambridge University Press, 2011.
[3] R. Ahlswede and G. Dueck, "Identification via channels," IEEE Transactions on Information Theory, vol. 35, no. 1, pp. 15–29, 1989.
[4] H. Boche and C. Deppe, "Secure identification for wiretap channels; robustness, super-additivity and continuity," IEEE Transactions on Information Forensics and Security, vol. 13, no. 7, pp. 1641–1655, 2018.
[5] G. P. Fettweis, "The tactile internet: Applications and challenges," IEEE Vehicular Technology Magazine, vol. 9, no. 1, pp. 64–70, 2014.
[6] P. Moulin, "The role of information theory in watermarking and its application to image watermarking," Signal Processing.
[7] Watermarking Identification Codes with Related Topics on Common Randomness. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 107–153.
[8] Y. Steinberg and N. Merhav, "Identification in the presence of side information with application to watermarking," IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1410–1422, 2001.
[9] Y. Lu, "Industry 4.0: A survey on technologies, applications and open research issues," Journal of Industrial Information Integration, vol. 6, pp. 1–10, 2017.
[10] S. F. Bush, J. L. Paluh, G. Piro, V. Rao, R. V. Prasad, and A. Eckford, "Defining communication at the bottom," IEEE Transactions on Molecular, Biological and Multi-Scale Communications, vol. 1, no. 1, pp. 90–96, 2015.
[11] W. Haselmayr, A. Springer, G. Fischer, C. Alexiou, H. Boche, P. Hoeher, F. Dressler, and R. Schober, "Integration of molecular communications into future generation wireless networks," 2019.
[12] R. Ahlswede and I. Csiszár, "Common randomness in information theory and cryptography. II. CR capacity," IEEE Transactions on Information Theory, vol. 44, no. 1, pp. 225–240, 1998.
[13] ——, "Common randomness in information theory and cryptography. I. Secret sharing," IEEE Transactions on Information Theory, vol. 39, no. 4, pp. 1121–1132, 1993.
[14] U. M. Maurer, "Secret key agreement by public discussion from common information," IEEE Transactions on Information Theory, vol. 39, no. 3, pp. 733–742, 1993.
[15] R. Ezzine, W. Labidi, H. Boche, and C. Deppe, "Common randomness generation and identification over Gaussian channels," in GLOBECOM 2020 - 2020 IEEE Global Communications Conference, 2020, pp. 1–6.
[16] D. D. N. Bevan, V. T. Ermolayev, A. G. Flaksman, I. M. Averin, and P. M. Grant, "Gaussian channel model for macrocellular mobile propagation," 2005, pp. 1–4.
[17] W. Labidi, C. Deppe, and H. Boche, "Secure identification for Gaussian channels," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 2872–2876.
[18] A. Goldsmith, Wireless Communications. Cambridge University Press, 2005.
[19] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. New York, NY, USA: Cambridge University Press, 2005.
[20] E. Biglieri, J. Proakis, and S. Shamai, "Fading channels: information-theoretic and communications aspects," IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2619–2692, 1998.
[21] L. H. Ozarow, S. Shamai, and A. D. Wyner, "Information theoretic considerations for cellular mobile radio," IEEE Transactions on Vehicular Technology, vol. 43, no. 2, pp. 359–378, 1994.
[22] X. Yang, "Capacity of fading channels without channel side information," CoRR, vol. abs/1903.12360, 2019. [Online]. Available: http://arxiv.org/abs/1903.12360
[23] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed. Cambridge University Press, 2011.