Identification over the Gaussian Channel in the Presence of Feedback
Wafa Labidi, Holger Boche and Moritz Wiese
Chair of Theoretical Information Technology, Technical University of Munich, D-80333 Munich, Germany
Email: {wafa.labidi, boche, wiese}@tum.de

Christian Deppe
Institute for Communications Engineering, Technical University of Munich, D-80333 Munich, Germany
Email: [email protected]
Abstract—We analyze message identification via Gaussian channels with noiseless feedback, which is part of Post-Shannon theory. Considering communication systems beyond Shannon's approach is useful for increasing the efficiency of information transmission in certain applications. We consider the Gaussian channel with feedback. If the noise variance is positive, we propose a coding scheme that generates infinite common randomness between the sender and the receiver and show that any rate for identification via the Gaussian channel with noiseless feedback can be achieved. Remarkably, this applies to both rate definitions, (1/n) log M (as Shannon defined it for transmission) and (1/n) log log M (as defined by Ahlswede and Dueck for identification). We can even show that our result holds regardless of the selected scaling for the rate.

Index Terms—identification theory, feedback, common randomness
I. INTRODUCTION
New applications in modern communications demand robust and ultra-reliable low-latency information exchange, such as machine-to-machine and human-to-machine communications [1], the tactile internet [2], digital watermarking [3], [4], [5], molecular communication [6], online sales, health care, Industry 4.0, etc. For many of these applications, the identification approach suggested by Ahlswede and Dueck [7] in 1989 is much more efficient than the classical transmission scheme proposed by Shannon [8]. In the classical message transmission scheme, the encoder transmits a message over a channel, and at the receiver side the decoder aims to estimate this message based on the channel observation. This is not the case for identification. In the identification scheme, the encoder sends an identification message (also called an identity) over the channel, and the decoder is not interested in what the received message is; he wants to know whether a specific message has been sent or not. Naturally, the sender has no knowledge of which message the receiver is interested in. The identification problem can be regarded as many hypothesis testing problems occurring simultaneously. One of the main results in the theory of identification [7] for discrete memoryless channels (DMCs) is that the size of identification codes grows doubly exponentially fast with the blocklength if randomized encoding is allowed. This is different from classical transmission, where the number of messages that can be reliably communicated over the channel is exponential in the blocklength. Randomized encoding is essential for identification to achieve the double-exponential growth. In the deterministic setup, the number of messages that we can identify over a DMC scales only exponentially with the blocklength. However, in the case of deterministic encoding, the rate is still larger than the transmission rate in the exponential scale [9], [10], [11].
Apart from this gain, other communication scenarios such as correlation-assisted identification [12] and secure correlation-assisted identification [13], as well as identification in the presence of feedback [14], [15], show that the identification capacity behaves completely differently from the Shannon capacity. It has been shown in [16] that the capacity of a DMC is not increased by the availability of a feedback channel, even if the latter is noiseless and has unlimited capacity. However, feedback can help greatly in reducing the complexity of encoding or decoding. A simple code construction for a DMC with feedback was investigated in [17]. Furthermore, it has been proved in [18], [19], [20] that feedback increases the capacities of discrete memoryless multiple-access channels as well as discrete memoryless broadcast channels. The authors in [21] pointed out that noiseless feedback can be used to generate a secret key shared only between the transmitter and the legitimate receiver. Turning again to identification, the authors of [14] studied the identification problem over a DMC in the presence of noiseless feedback and showed that feedback allows, even in the deterministic case, the number of identities to grow doubly exponentially in the blocklength. This result follows from the fact that the identification capacity of the simplest channels, namely DMCs, coincides with the common randomness capacity [21]. The feedback allows the parties to set up a common randomness experiment between the sender and the receiver; the amount of correlated randomness, i.e., the size of the random experiment, determines the growth of the identification capacity. Previous work on identification in the presence of feedback focused on channels with finite input and output alphabets. Identification via arbitrarily varying channels (AVCs) with noiseless feedback was investigated in [22]. Identification over discrete multi-way channels with complete feedback was presented in [23].
In [24], the authors established a unified theory of identification via channels with finite input and output alphabets in the presence of noisy feedback. In addition, the secure identification capacity of the discrete wiretap channel in the presence of secure feedback was studied in [21]. Only a few studies [25], [26], [10], [27], [13] have explored identification for continuous alphabets. Although many researchers are now addressing the problem of identification with feedback, no results have yet been established for continuous alphabets in the presence of feedback. The transition from the discrete case to the continuous one is not obvious. We are concerned with the Gaussian channel for its practical relevance in wired and wireless communications, satellite and deep-space communication links, etc. We completely solve the Gaussian case by determining the corresponding identification capacity in the presence of noiseless feedback for the case of deterministic encoding. We show that all positive identification rates are achievable, provided that the noise variance is positive. This raises the question whether an appropriate scale, not only the double-exponential scale, exists such that the identification capacity is finite, as shown in [10] for deterministic identification over fading channels. We prove that, surprisingly, for a positive error probability and starting from a certain blocklength, all positive rates are achievable regardless of the scale. Our proposed coding scheme allows the generation of infinite common randomness between the sender and the receiver in our model in one step. In Section II, we recall some basic definitions and known results about classical transmission and identification over DMCs in the presence of noiseless feedback. In Section III, we introduce our channel model, provide a coding scheme that generates infinite common randomness between the sender and the receiver in our model, and prove the main result of the paper.
Section IV contains concluding remarks and proposes potential future research in this field.

II. PRELIMINARIES
In this section, we introduce the notation that will be used throughout the paper. We recall some basic definitions and known results about classical transmission as well as identification over DMCs in the presence of a noiseless feedback channel.
A. Notation
Calligraphic letters X, Y, Z, ... are used for finite or infinite sets; lowercase letters x, y, z, ... stand for constants and realizations of random variables; uppercase letters X, Y, Z, ... stand for random variables; R denotes the set of real numbers; H(·) denotes the entropy; |X| denotes the cardinality of a finite set X; the set of probability distributions on the set X is denoted by P(X); all logarithms and information quantities are taken to base 2.

B. Definitions and Auxiliary Results
In this subsection, we recall auxiliary results about classical transmission and identification over a DMC in the presence of noiseless feedback. We define a DMC in the following.
Definition 1. A discrete memoryless channel (DMC) is a triple (X, Y, W), where X and Y are finite sets called the input and output alphabet, respectively, and W = {W(y|x) : x ∈ X, y ∈ Y} is a stochastic matrix. The probability for a sequence y^n = (y_1, ..., y_n) ∈ Y^n to be received if x^n = (x_1, ..., x_n) ∈ X^n was sent is defined by

W^n(y^n | x^n) = ∏_{t=1}^{n} W(y_t | x_t),

where n is the number of channel uses.

Now, we formally describe feedback strategies and transmission feedback codes for DMCs.
Definition 2. An (n, M, λ) transmission feedback code {(f_i, D_i), i = 1, ..., M} with λ ∈ (0, 1/2) is characterized as follows. The sender wants to transmit one message i ∈ M := {1, ..., M}. The message i ∈ M is encoded by the vector-valued function

f_i = [f_i^1, f_i^2, ..., f_i^n],   (1)

where

f_i^1 ∈ X,
f_i^2 : Y → X,
...
f_i^n : Y^{n−1} → X.

At t = 1 the sender transmits f_i^1. At t ∈ {2, ..., n}, the sender transmits f_i^t(Y_1, ..., Y_{t−1}) after Y_1, ..., Y_{t−1} have been made known to the sender via the noiseless feedback channel. The decoding sets D_i ⊂ Y^n, i ∈ {1, ..., M}, should be disjoint and satisfy the inequality

W^n(D_i | f_i) ≥ 1 − λ,  ∀ i ∈ {1, ..., M}.

The following result was proved by Shannon in [16]. A constructive proof was presented by Ahlswede in [17].
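To make the vector-valued feedback encoder in (1) concrete, the following sketch simulates such a strategy over a toy binary channel. The particular rule used by each component (message bit adjusted by the parity of the fed-back outputs) and all names are our own illustration, not part of the paper's construction:

```python
def make_strategy(message, n):
    """Vector-valued encoder [f_i^1, ..., f_i^n]: component t maps the
    fed-back outputs (Y_1, ..., Y_{t-1}) to the next input symbol X_t.
    Toy rule (hypothetical): message bit plus parity of past outputs, mod 2."""
    def component(past_outputs):
        return (message + sum(past_outputs)) % 2
    return [component for _ in range(n)]

def transmit(strategy, channel):
    """Run the strategy; each channel output is returned to the sender
    via the noiseless feedback link before the next symbol is chosen."""
    outputs = []
    for f_t in strategy:
        x_t = f_t(outputs)          # encoder sees Y_1, ..., Y_{t-1}
        outputs.append(channel(x_t))
    return outputs
```

Over an error-free channel (`channel = lambda x: x`), the strategy for message 1 first sends 1 and then compensates the feedback parity, illustrating how each f_i^t depends on all previously observed outputs.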
Theorem 3 ([16], [17]). Let W be a DMC, M_f(n, λ) the maximal number such that an (n, M, λ) feedback transmission code for W exists, and let C(W) be the Shannon capacity of W. Then, for λ ∈ (0, 1/2),

C_f(W) := lim_{n→∞} (1/n) log M_f(n, λ) = C(W),

where C_f(W) denotes the feedback capacity of the DMC W.

Now, let us recall the basic results for the problem of identification. Ahlswede and Dueck studied in [14] the problem of identification for DMCs in the presence of a noiseless feedback channel and determined the second-order capacity C_IDf for deterministic encoding strategies.
Definition 4. Let F_n be the set of all encoding functions f_i for i ∈ {1, ..., N} defined as in (1). A deterministic (n, N, λ_1, λ_2) identification feedback code with λ_1 + λ_2 < 1 for a DMC W is a family of pairs {(f_i, D_i), i = 1, ..., N} with

f_i = [f_i^1, f_i^2, ..., f_i^n] ∈ F_n,  D_i ⊂ Y^n,  ∀ i ∈ {1, ..., N},

such that

μ_1^{(i)} := W^n(D_i^c | f_i) ≤ λ_1  ∀ i,
μ_2^{(i,j)} := W^n(D_j | f_i) ≤ λ_2  ∀ i ≠ j.

Definition 5. 1) The rate R of a deterministic (n, N, λ_1, λ_2) identification feedback code for a DMC W is R = (1/n) log log N bits.

2) The identification rate R for W is said to be achievable if for λ ∈ (0, 1/2) there exists an n_0(λ) such that for all n ≥ n_0(λ) there exists an (n, 2^{2^{nR}}, λ, λ) deterministic identification feedback code for W.

3) The deterministic feedback identification capacity C_IDf(W) of a DMC W is the supremum of all achievable rates.

Feedback does not increase the transmission capacity of stationary memoryless channels [16], [17]. However, it does increase the identification capacity [14].
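To get a feeling for the scales in Definition 5, the sketch below (our own illustration, with a toy rate R = 1) compares the number of messages in the exponential scale 2^{nR} of classical transmission, the double-exponential scale 2^{2^{nR}} of identification, and the super-exponential scale n^{nR} obtained in [9] for deterministic identification over the Gaussian channel without feedback:

```python
def exp_scale(n, R=1.0):
    """Classical transmission: ~ 2^{nR} messages."""
    return 2 ** (n * R)

def double_exp_scale(n, R=1.0):
    """Identification with randomized encoding: ~ 2^{2^{nR}} identities."""
    return 2 ** (2 ** (n * R))

def super_exp_scale(n, R=1.0):
    """Deterministic identification over the Gaussian channel [9]: ~ n^{nR}."""
    return n ** (n * R)
```

Already at n = 4 the three scales separate clearly (16 vs. 256 vs. 65536 identities), which is why a capacity that is finite in one scale is zero or infinite in the others.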
Theorem 6. Let N(n, λ) be the maximal number such that an (n, N, λ_1, λ_2) deterministic feedback identification code for the DMC W exists with λ_1, λ_2 ≤ λ and λ ∈ (0, 1/2), and let C(W) be the Shannon capacity of W. Then

C_IDf(W) = max_{x∈X} H(W(·|x)),  if C(W) > 0,   (2)
C_IDf(W) = 0,  if W is noiseless or C(W) = 0.   (3)

Remark 7. Note that if the DMC W is noiseless, then max_{x∈X} H(W(·|x)) is also equal to zero; in this case, the capacity formula in (2) also holds true. However, even when the channel capacity C(W) is zero, max_{x∈X} H(W(·|x)) might be positive. Thus, in this case, the deterministic feedback identification capacity C_IDf(W) is determined by (3).

Note that feedback allows, even with deterministic encoding, a doubly exponential growth of the number of identities. If, in addition, randomized encoding is allowed, a further increase of the identification capacity can be achieved. It is worth noting that the deterministic feedback identification capacity C_IDf is determined in terms of the noise entropy: the capacity formula in Theorem 6 can be viewed as a measure of the noise caused by the input x ∈ X.

III. DETERMINISTIC IDENTIFICATION FOR GAUSSIAN CHANNELS WITH NOISELESS FEEDBACK
In this section, we present and prove the main result of the paper. We first define the channel model: the Gaussian channel with noiseless feedback depicted in Fig. 1. We then present an optimal coding scheme and determine the corresponding deterministic feedback identification capacity. A brief discussion of the main result follows at the end of this section.
A. System Model and Main Result
Definition 8. The discrete-time additive white Gaussian noise channel with a noiseless feedback link is denoted by W_f(g, P). The sender wants to send an identification message M over the forward additive white Gaussian noise channel

Y_t = X_t + Z_t,  t ∈ {1, ..., n},

where n is the number of channel uses and X^n = (X_1, ..., X_n) ∈ R^n and Y^n = (Y_1, ..., Y_n) ∈ R^n denote the channel input and the channel output, respectively. The noise samples Z_t, t = 1, ..., n, are independent and identically distributed (i.i.d.); each Z_t is drawn from a normal distribution, denoted by g, with zero mean and variance σ². Let further Ȳ^n = (Ȳ_1, ..., Ȳ_n) ∈ R^n denote the output of the noiseless backward (feedback) channel,

Ȳ_t = Y_t,  t ∈ {1, ..., n}.

We extend the definition of an identification feedback code in [14] to the Gaussian channel W_f(g, P).

Definition 9. Let F_{n,P} be the set of all encoding functions f_i for i ∈ {1, ..., N} defined as in (1) and satisfying the following power constraint:

∑_{t=1}^{n} (f_i^t)² ≤ n · P,  ∀ i ∈ {1, ..., N}.

A deterministic (n, N, λ_1, λ_2) identification feedback code with λ_1 + λ_2 < 1 for a Gaussian channel W_f(g, P) is a family of pairs {(f_i, D_i), i = 1, ..., N} with f_i = [f_i^1, f_i^2, ..., f_i^n] ∈ F_{n,P} and D_i ⊂ Y^n for all i ∈ {1, ..., N}, such that

μ_1^{(i)} := W^n(D_i^c | f_i) ≤ λ_1  ∀ i,   (4)
μ_2^{(i,j)} := W^n(D_j | f_i) ≤ λ_2  ∀ i ≠ j.   (5)

At time t, the sender sends f_i^t(Ȳ_1, ..., Ȳ_{t−1}). The quantity μ_1^{(i)} in (4) is called the error of the first kind: the probability that the decoder does not decide for the message i although i was actually sent. The error of the first kind is produced by the channel noise, as in a transmission code. The quantity μ_2^{(i,j)} in (5) is called the error of the second kind: the probability that the decoder decides for the message j although another message i was actually sent. The error of the second kind results from the identification-code construction, i.e., from the overlapping of the decoding sets, and from transmission errors. In other words, an error of the first kind can lead to an error of the second kind; yet even when there is no transmission error, the error of the second kind can be positive.

Definition 10. 1) The rate R of a deterministic (n, N, λ_1, λ_2) identification feedback code for a Gaussian channel W_f(g, P) is R = (1/n) log log N bits.

2) The identification rate R for W_f(g, P) is said to be achievable if for λ ∈ (0, 1/2) there exists an n_0(λ) such that for all n ≥ n_0(λ) there exists an (n, 2^{2^{nR}}, λ, λ) deterministic identification feedback code for W_f(g, P).

3) The deterministic feedback identification capacity C_IDf(g, P) of the Gaussian channel W_f(g, P) is the supremum of all achievable rates.

Fig. 1: Discrete-time Gaussian channel with noiseless feedback: the encoder sends X_t = f_M^t(Y^{t−1}), the channel adds the noise Z_t, and the decoder asks "Is M̂ sent? Yes or no?"

Now, we present the main result of this paper.
Theorem 11. Let λ ∈ (0, 1/2), R > 0, σ² > 0 and P > 0. Then there exists a blocklength n_0 such that for every n ≥ n_0 there exists a deterministic identification feedback code for W_f(g, P) of blocklength n with N = 2^{2^{nR}} identities and with λ_1, λ_2 ≤ λ, i.e.,

C_IDf(g, P) = +∞.

In the classical message transmission setup, the number of messages that can be reliably sent over a DMC scales exponentially with the blocklength n, i.e., ∼ 2^{nR}. For identification with randomized encoding over DMCs, the size of identification codes grows doubly exponentially fast in the blocklength n, i.e., ∼ 2^{2^{nR}}. However, it was shown in [11], [9] that the identification capacity of a DMC, in the case of deterministic encoding and under input constraints, is finite in the exponential scale and consequently zero in the double-exponential scale. Note that in general, if the capacity in the exponential scale is finite, then it is zero in the double-exponential scale; conversely, if the capacity in the double-exponential scale is positive, then it is infinite in the exponential scale. It was proved in [9] that the identification capacity of the Gaussian channel with deterministic encoding is infinite in the exponential scale and zero in the double-exponential scale. That means that the corresponding number of identities grows super-exponentially. This raises the question whether we can define another scale such that the identification capacity of the Gaussian channel is positive and finite. The authors in [9] demonstrated that, for the Gaussian channel with deterministic encoding, the number of identities scales as n^{nR} in the blocklength n. In Theorem 13, we extend the result of Theorem 11 to other scalings. In other words, we prove that the deterministic identification capacity of the Gaussian channel with noiseless feedback remains infinite regardless of the scaling.

Definition 12. Let ϕ : N → R be an arbitrary monotonically increasing function.

1) A rate R_ϕ is achievable if for every λ ∈ (0, 1/2) there exists an n_s(λ) such that for every n ≥ n_s(λ) there exists a deterministic (n, N, λ_1, λ_2) identification feedback code for W_f(g, P) with λ_1, λ_2 ≤ λ and R_ϕ = (1/n) ϕ⁻¹(N).

2) The deterministic feedback identification capacity C^ϕ_IDf(g, P) of the Gaussian channel W_f(g, P) is the supremum of all achievable rates.

Theorem 13. Let ϕ : N → R be an arbitrary monotonically increasing function, λ ∈ (0, 1/2), σ² > 0 and P > 0. Then there exists a blocklength n_s such that for every n ≥ n_s there exists a deterministic identification feedback code for W_f(g, P) of blocklength n with N = ϕ(2^{nR_ϕ}) identities and with λ_1, λ_2 ≤ λ, i.e.,

C^ϕ_IDf(g, P) = +∞.

B. Optimal Coding Scheme for Identification in the Presence of Feedback

We want to show that our coding scheme achieves an infinite deterministic feedback identification capacity over the Gaussian channel, provided that the noise variance and the maximum average power P are positive. This raises the question whether an appropriate scale exists such that the identification capacity is finite, as for deterministic identification over fading channels in [10]. Surprisingly, we can increase the scaling without paying any price in the error of the second kind. We start by generating infinite common randomness between the sender and the receiver. We then construct an optimal identification feedback code for our model depicted in Fig. 1. Furthermore, we extend our main result to other rate scalings: we prove that for positive noise variance, we can achieve an infinite identification feedback capacity regardless of the scaling.
1) Common Randomness Generation:
We denote by Y the channel output with distribution W(·|x* = 0). Y is normally distributed with zero mean and variance E(Y²) = σ². We denote by Φ the cumulative distribution function of the standard normal distribution X ∼ N(0, 1):

Φ : R → (0, 1),  x ↦ Pr{X ≤ x}.

Φ is continuous and strictly monotone over R (a one-to-one function). Thus, an inverse function Φ⁻¹ exists on Φ((−∞, ∞)) = (0, 1).

Lemma 14. For σ² > 0, let the RV Ỹ be defined as Ỹ = Φ(Y/√σ²). Then Ỹ is uniformly distributed on (0, 1).

Proof. Let F_Ỹ(ỹ) be the cumulative distribution function of Ỹ. Then

F_Ỹ(ỹ) := Pr{Ỹ ≤ ỹ} = Pr{Φ(Y/√σ²) ≤ ỹ} = Pr{Y/√σ² ≤ Φ⁻¹(ỹ)} = Pr{X ≤ Φ⁻¹(ỹ)} = Φ(Φ⁻¹(ỹ)) = ỹ.

As Ỹ takes values in Φ((−∞, ∞)) = (0, 1), it is uniform on (0, 1). ∎

We now discretize the interval (0, 1) as follows. Let L = {ỹ_0, ỹ_1, ..., ỹ_{L(m)−1}, ỹ_{L(m)}} be a finite set with

ỹ_l = l/L(m),  l ∈ {0, ..., L(m)},  m > 0,

where L(m) denotes the quantization depth. We can rewrite the output space R as

R = (Φ⁻¹(ỹ_0) = −∞, Φ⁻¹(ỹ_1)] ∪ (Φ⁻¹(ỹ_1), Φ⁻¹(ỹ_2)] ∪ ... ∪ (Φ⁻¹(ỹ_{L(m)−1}), Φ⁻¹(ỹ_{L(m)}) = ∞).

Let U be a RV with uniform distribution on D* = {1, ..., L(m)}, defined as follows:

U = l  iff  Φ⁻¹(ỹ_{l−1}) < Y ≤ Φ⁻¹(ỹ_l).

We have thus proved the existence of a function k that converts the RV Y into a RV k(Y) uniformly distributed on D*:

k : R → D*,  Y ↦ U.

It is worth noting that we do not pay any price for the uniformity: we can convert a random experiment with a Gaussian distribution into one with a uniform distribution with zero error probability.
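The map k above is easy to realize numerically: Φ can be evaluated via the error function, and the quantizer returns the index of the interval containing Ỹ. A minimal sketch (function names are ours, not from the paper):

```python
import math

def phi(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def to_uniform(y, sigma):
    """Lemma 14: if Y ~ N(0, sigma^2), then Phi(Y/sigma) is uniform on (0, 1)."""
    return phi(y / sigma)

def k(y, sigma, L):
    """Quantizer: U = l iff (l-1)/L < Phi(y/sigma) <= l/L, uniform on {1..L}."""
    u = to_uniform(y, sigma)
    return min(L, max(1, math.ceil(u * L)))
```

Since sender and receiver both observe Y via the noiseless feedback link, both can evaluate k(y, sigma, L) and obtain the same uniform index with zero error, which is exactly the common randomness used in the sequel.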
2) Proof of Theorem 11:
In this section, we provide a proof of Theorem 11. We choose the blocklength n = 1 + m. We send one symbol x* = 0 over the forward channel and use the feedback channel to generate from the noise infinite common randomness between the sender and the receiver, as presented in Section III-B1. We then concatenate the symbol x* and an (m, M, 2^{−mδ}) transmission code using a set of maps {F_i, i = 1, ..., N} to build an (n, N, λ_1, λ_2) identification feedback code for W_f(g, P). In order to achieve a small error of the second kind, we choose suitable maps {F_i, i = 1, ..., N} at random. Lemma 15 implies Lemma 16, which is used to derive an upper bound on the error of the second kind. We prove that the probability that our constructed code is an (n, N, λ_1, λ_2) identification feedback code is positive, and we then show that the capacity C_IDf(g, P) is infinite.

In the following, we assume that |D*| = L(m) = 2^{mR}, R > 0. This quantity is the size of the correlated random experiment (D*, k(Y)), which determines the growth of the identification rate. Let C = {(u_j, D_j), j = 1, ..., M} be an (m, M, 2^{−mδ}) code, where each codeword satisfies the average power constraint ∑_{t=1}^{m} (u_{j,t})² ≤ mP. The concatenation is done using the maps, also called coloring functions, {F_i, i = 1, ..., N}; an example of these coloring functions is depicted in Fig. 2. Every coloring function

F_i : D* → {1, ..., M}

corresponds to an identification message i and maps each element k(y) ∈ D* to an element F_i(k(y)) of the smaller set {1, ..., M}. After y has been received via the noiseless feedback channel, the sender sends u_{F_i(k(y))} if i ∈ {1, ..., N} is given to him. Note that each coloring function F_i defines an encoding strategy f_i as in (1). For a fixed family of maps {F_i, i = 1, ..., N} and for each i ∈ {1, ..., N}, we define the decoding sets

D'_i = D(F_i) = ⋃_{k(y) ∈ D*} {k(y)} × D_{F_i(k(y))}.

We now analyze the maximal error performance of the deterministic feedback identification code C' = {(f_i, D'_i), i = 1, ..., N}. For the analysis of the error of the first kind, we may fix the set {F_i, i = 1, ..., N}, because the error of the first kind turns out to be independent of the choice of the coloring functions. For each i ∈ {1, ..., N}, the error of the first kind can be upper-bounded as follows:

W_f^n(D'^c_i | f_i) ≤(a) ∫_{R} g(y) W_f^m((D_{F_i(k(y))})^c | u_{F_i(k(y))}) dy ≤(b) 2^{−mδ},   (6)

where the noise distribution g is Gaussian with zero mean and variance σ². Here (a) follows from the memorylessness of the channel and the union bound, and (b) follows from the definition of the code C and from the fact that we generate the correlated random experiment (D*, k(Y)) with zero error probability.

To analyze the error of the second kind, we choose the coloring functions at random. For i ∈ {1, ..., N}, y ∈ R and l = k(y) ∈ D*, let F̄_i(l) be independent RVs such that

Pr{F̄_i(l) = j} = 1/M,  j ∈ {1, ..., M}.   (7)

Let F be a realization of F̄. For each l ∈ D*, we define the RVs ψ_l = ψ_l(F̄) by

ψ_l(F̄) = 1 if F(l) = F̄(l), and 0 otherwise.

The ψ_l are independent for every l ∈ D*, and the expectation of ψ_l is

E[ψ_l] = Pr{F(l) = F̄(l)} = 1/M.   (8)

Since the ψ_l, l ∈ D*, are i.i.d., we can apply the following inequality.

Lemma 15 ([28]). Let {X_i} be i.i.d. RVs taking values in [0, 1] with mean μ. Then for all c > 0 with p = μ + c ≤ 1,

Pr{X̄_n − μ ≥ c} ≤ exp(−n D(p‖μ)) ≤ exp(−2nc²),

where X̄_n = (1/n) ∑_{i=1}^{n} X_i.

Lemma 15 implies the following lemma.
Lemma 16. For λ ∈ (0, 1/2) and 1/M < λ,

Pr{ ∑_{l∈D*} ψ_l > |D*| · λ } ≤ 2^{−|D*| λmǫ}.

As lim_{m→∞} 2^{−|D*| λmǫ} = 0, we have for λ ∈ (0, 1/2) and λ > 1/M

Pr{ ∑_{l∈D*} ψ_l ≤ |D*| · λ } > 0.

We now derive an upper bound on the error of the second kind W_f^n(D(F̄) | f) for those values of F̄ satisfying Lemma 16:

W_f^n(D(F̄) | f) ≤(a) ∫_{k(y)∈D*: F(k(y)) = F̄(k(y))} g(y) dy + ∫_{k(y)∈D*: F(k(y)) ≠ F̄(k(y))} g(y) W_f^m(D_{F̄(k(y))} | u_{F(k(y))}) dy ≤(b) 2^{−mδ} + λ.   (9)

Here (a) follows from the union bound and the memorylessness of the channel, and (b) follows from the definition of C, from (8), and from the fact that we generate the correlated random experiment (D*, k(Y)) with zero error probability.

We repeat the same analysis of the error μ_2^{(i_1,i_2)} (see (5)) for all pairs (i_1, i_2) ∈ {1, ..., N}², i_1 ≠ i_2. Then (9) implies that

Pr{ C' is not an (n, N, λ_1, λ_2) code } = Pr{ ⋃_{i_1 ≠ i_2} μ_2^{(i_1,i_2)} ≥ λ + 2^{−mδ} } ≤(a) ∑_{i_1} ∑_{i_2 ≠ i_1} Pr{ μ_2^{(i_1,i_2)} ≥ λ + 2^{−mδ} } ≤ N(N − 1) · 2^{−|D*| λmǫ} ≤ 2^{−|D*| λmǫ + 2 log N},

where (a) follows by the union bound. We know that the error of the first kind is upper-bounded by 2^{−mδ}; hence C' fails to be an (n, N, λ_1, λ_2) code only if the error of the second kind exceeds λ. The probability that C' is an (n, N, λ_1, λ_2) identification code for W_f(g, P) is positive if

2^{−|D*| λmǫ + 2 log N} < 1 ⟺ −|D*| λmǫ + 2 log N < 0,   (10)

which implies

(1/n) log log N < (m/n) R + (1/n) log(λmǫ).

This implies that

lim_{n→∞} (1/n) log log N(n, λ) ≥ lim_{m→∞} m R / (m + 1) = R.

Since R > 0 was arbitrary, C_IDf(g, P) = +∞. This completes the proof. ∎
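The role of the coloring functions in this proof can be checked in a toy Monte-Carlo sketch: both parties hold the shared uniform index l = k(Y); the sender of identity i transmits the codeword with color F_i(l), and the receiver accepts identity i' iff F_{i'}(l) matches. We idealize the inner (m, M, 2^{−mδ}) transmission code as error-free; all names here are our own illustration:

```python
import random

def build_colorings(num_ids, d_size, m_size, rng):
    """Random coloring functions F_i: {0,...,d_size-1} -> {0,...,m_size-1},
    drawn uniformly and independently as in (7)."""
    return [[rng.randrange(m_size) for _ in range(d_size)]
            for _ in range(num_ids)]

def accepts(colorings, i_sent, i_test, l):
    """Receiver accepts identity i_test iff its color at the shared index l
    equals the transmitted color F_{i_sent}(l) (noiseless inner code)."""
    return colorings[i_test][l] == colorings[i_sent][l]

def false_accept_rate(colorings, i_sent, i_test):
    """Empirical error of the second kind for one pair (i_sent, i_test):
    the fraction of shared indices on which the two colorings collide."""
    d_size = len(colorings[0])
    hits = sum(accepts(colorings, i_sent, i_test, l) for l in range(d_size))
    return hits / d_size
```

In this idealized sketch the error of the first kind vanishes (the correct identity always matches its own color), while the error of the second kind concentrates around 1/M, mirroring (8) and Lemma 16.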
Remark 17. If P = 0, it is obvious that we cannot define a code for the Gaussian channel. If σ² = 0, then the deterministic feedback identification capacity C_IDf(g, P) is zero in the double-exponential scale. In this case, the problem reduces to the deterministic identification problem without feedback [9], because the forward channel is noiseless and there is thus no source of randomness. In other words, there is no common randomness experiment to be generated between the sender and the receiver, as there is in the case σ² > 0; for σ² = 0, the feedback channel provides no advantage anymore. In [9], the authors showed that the deterministic identification capacity of the Gaussian channel in the double-exponential scale is zero.

3) Proof of Theorem 13: For the proof of Theorem 13, we use the same coding scheme as in the proof of Theorem 11. We convert the correlated random experiment (Y, W(·|x*)) into a uniform one (D*, k(W(·|x*))), where k(W(·|x*)) is uniformly distributed on the finite set D*, as in Section III-B1. We need to choose an appropriate cardinality for D*. Let ϕ : R+ → R+ be an arbitrary strictly monotonically increasing function, λ ∈ (0, 1/2), σ² > 0 and P > 0. In the following, we prove that the capacity C^ϕ_IDf(g, P) of Definition 12 is infinite.

For ǫ > 0 and λ ∈ (0, 1/2), we choose the cardinality of D* as

|D*| = 2 log(ϕ(2^{mR'})) / (λmǫ),  R' > 0.

This can be done by choosing the quantization depth of Ỹ in an appropriate way, as defined in Section III-B1. We proceed analogously to Section III-B2. We choose the blocklength n = m + 1. We define a suitable set {F_i, i = 1, ..., N} randomly such that (7) holds, where 1/M < λ. Similarly to (6), the error of the first kind does not depend on the choice of D* and is upper-bounded by 2^{−mδ}. For 1/M < λ with λ ∈ (0, 1/2) and for our choice of {F_i, i = 1, ..., N}, Lemma 16 is satisfied. Consequently, the error of the second kind is upper-bounded by 2^{−mδ} + λ as in (9). The probability that an (n, N, λ_1, λ_2) identification feedback code for W_f(g, P) exists is positive if (10) is satisfied, i.e., if 2 log N < |D*| λmǫ. This implies that

N(n, λ) ≥ 2^{|D*| λmǫ / 2},

and hence

lim_{n→∞} (1/n) ϕ⁻¹(N(n, λ)) ≥ lim_{n→∞} (1/n) ϕ⁻¹(2^{|D*| λmǫ / 2}) = lim_{m→∞} ϕ⁻¹(ϕ(2^{mR'})) / (m + 1) = lim_{m→∞} 2^{mR'} / (m + 1) = +∞.

We have shown that C^ϕ_IDf(g, P) = +∞. Asymptotically, we do not pay any price in the error of the second kind for increasing the cardinality of D*. This completes the proof. ∎

Fig. 2: Example of coloring functions.

C. Discussion
Noise increases capacity in the case of deterministic identification with feedback. Intuitively, noise is a source of randomness, i.e., a random experiment whose outcome is provided to the sender and the receiver via the feedback channel. Adding feedback allows one to set up a correlated random experiment between the sender and the receiver, and the size of this random experiment determines the growth of the identification rate. We prove in Theorem 11 that if the noise variance and the maximum average power P are positive, then the identification capacity is infinite. This result also holds regardless of the selected scaling, as stated in Theorem 13, because our coding scheme allows the generation of infinite common randomness in one step. This further emphasizes and extends Ahlswede's statement in [21] that the double-exponential identification capacity of the simplest channels, namely DMCs, coincides with the common randomness capacity. If the noise variance is zero, then the identification capacity is zero: the problem reduces to deterministic identification over the Gaussian channel without feedback [9], for which the capacity in the double-exponential scale was shown to be zero. As the forward channel is then noiseless, there is no source of randomness, which means we cannot take advantage of the feedback channel to set up a correlated random experiment.

For randomized identification in the presence of noiseless feedback, if σ² > 0 and P > 0, then the identification capacity is likewise infinite regardless of the scale. Here, randomized encoding has no further influence on the capacity: we cannot achieve greater rates through randomized encoding.

IV. CONCLUSIONS
In this paper, we considered message identification via the Gaussian channel with noiseless feedback and without local randomness and showed that arbitrary rates R ∈ R+ can be achieved. The result was unexpected for us, because the model without feedback can identify at most n^{nC} messages (see [9]) and it is known that feedback does not increase the capacity of the channel for transmission. If we consider the Gaussian channel with local randomness, then we can identify at most 2^{2^{nC}} messages. But here we have a completely new effect: by adding noiseless feedback to the channel, the capacity becomes infinitely large regardless of the scaling. This effect gives us new insights into identification theory. On closer inspection, it is due to the fact that we proposed a coding scheme that allows the generation of infinite common randomness between the sender and the receiver in one step and uses this common randomness for identification.

As future work, it would be interesting to investigate common randomness generation from continuous correlated sources. Further studies should also focus on identification in the presence of noisy feedback over Gaussian channels.

ACKNOWLEDGMENTS
We thank the German Research Foundation (DFG) within the Gottfried Wilhelm Leibniz Prize under Grant BO 1734/20-1 for their support of H. Boche and M. Wiese. Thanks also go to the German Federal Ministry of Education and Research (BMBF) within the national initiative for "Post Shannon Communication (NewCom)" with the project "Basics, simulation and demonstration for new communication models" under Grant 16KIS1003K for their support of H. Boche and W. Labidi, and with the project "Coding theory and coding methods for new communication models" under Grant 16KIS1005 for their support of C. Deppe. Further, we thank the German Research Foundation (DFG) within Germany's Excellence Strategy EXC-2111 - 390814868 and EXC-2092 CASA - 390781972 for their support of H. Boche and M. Wiese.

REFERENCES

[1] H. Boche and C. Deppe, "Secure identification for wiretap channels; robustness, super-additivity and continuity," IEEE Transactions on Information Forensics and Security, vol. 13, no. 7, pp. 1641–1655, July 2018.
[2] G. Fettweis et al., "The tactile internet," ITU-T Technology Watch Report, 2014, pp. 1–24.
[3] P. Moulin, "The role of information theory in watermarking and its application to image watermarking," Signal Processing, vol. 81, no. 6, pp. 1121–1139, 2001, special section on Information theoretic aspects of digital watermarking.
[4] R. Ahlswede and N. Cai, Watermarking Identification Codes with Related Topics on Common Randomness. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 107–153.
[5] Y. Steinberg and N. Merhav, "Identification in the presence of side information with application to watermarking," IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1410–1422, 2001.
[6] W. Haselmayr, A. Springer, G. Fischer, C. Alexiou, H. Boche, P. Hoeher, F. Dressler, and R. Schober, "Integration of molecular communications into future generation wireless networks," 2019.
[7] R. Ahlswede and G. Dueck, "Identification via channels," IEEE Transactions on Information Theory, vol. 35, no. 1, pp. 15–29, Jan 1989.
[8] C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, July, October 1948.
[9] M. J. Salariseddigh, U. Pereg, H. Boche, and C. Deppe, "Deterministic identification over channels with power constraints," 2020.
[10] ——, "Deterministic identification over fading channels," in IEEE Information Theory Workshop (ITW), 2020, accepted.
[11] R. Ahlswede and N. Cai, "Identification without randomization," IEEE Transactions on Information Theory, vol. 45, pp. 2636–2642, 1999.
[12] H. Boche, R. F. Schaefer, and H. V. Poor, "On the computability of the secret key capacity under rate constraints," in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2019, pp. 2427–2431.
[13] R. Ezzine, W. Labidi, C. Deppe, and H. Boche, "Common randomness generation and identification over Gaussian channels," in GLOBECOM 2020 - IEEE Global Communications Conference (GLOBECOM), 2020, pp. 1–6.
[14] R. Ahlswede and G. Dueck, "Identification in the presence of feedback - a discovery of new capacity formulas," IEEE Transactions on Information Theory, vol. 35, no. 1, pp. 30–36, Jan 1989.
[15] H. Boche, R. F. Schaefer, and H. V. Poor, "Identification capacity of channels with feedback: Discontinuity behavior, super-activation, and Turing computability," IEEE Transactions on Information Theory, vol. 66, no. 10, pp. 6184–6199, 2020.
[16] C. Shannon, "The zero error capacity of a noisy channel," IRE Transactions on Information Theory, vol. 2, no. 3, pp. 8–19, 1956.
[17] R. Ahlswede, "A constructive proof of the coding theorem for discrete memoryless channels with feedback," Transactions of the Sixth Prague Conference on Information Theory, pp. 39–50, 1973.
[18] N. Gaarder and J. Wolf, "The capacity region of a multiple-access discrete memoryless channel can increase with feedback (corresp.)," IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 100–102, 1975.
[19] G. Dueck, "Partial feedback for two-way and broadcast channels," Information and Control, vol. 46, no. 1, pp. 1–15, 1980.
[20] G. Kramer, "Capacity results for the discrete memoryless network," in Proceedings of the 1999 IEEE Information Theory and Communications Workshop (Cat. No. 99EX253), 1999, pp. 102–.
[21] R. Ahlswede and N. Cai, Transmission, Identification and Common Randomness Capacities for Wire-Tape Channels with Secure Feedback from the Decoder. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 258–275.
[22] ——, The AVC with Noiseless Feedback and Maximal Error Probability: A Capacity Formula with a Trichotomy. Boston, MA: Springer US, 2000, pp. 151–176.
[23] R. Ahlswede and B. Verboven, "On identification via multiway channels with feedback," IEEE Transactions on Information Theory, vol. 37, no. 6, pp. 1519–1526, 1991.
[24] R. Ahlswede, "General theory of information transfer: Updated," Discrete Applied Mathematics, vol. 156, no. 9, pp. 1348–1388, 2008, General Theory of Information Transfer and Combinatorics.
[25] T. S. Han, Information-Spectrum Methods in Information Theory, ser. Stochastic Modelling and Applied Probability. Springer-Verlag Berlin Heidelberg, 2014.
[26] M. V. Burnashev, "On method of "types", approximation of output measures and id-capacity for channels with continuous alphabets," in Proceedings of the 1999 IEEE Information Theory and Communications Workshop (Cat. No. 99EX253), June 1999, pp. 80–81.
[27] W. Labidi, C. Deppe, and H. Boche, "Secure identification for Gaussian channels," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 2872–2876.
[28] W. Hoeffding,