An Information Theoretic Analysis of Single Transceiver Passive RFID Networks
Yücel Altuğ, S. Serdar Kozat, Member, and M. Kıvanç Mıhçak, Member
Abstract
In this paper, we study single transceiver passive RFID networks by modeling the underlying physical system as a special cascade of a certain broadcast channel (BCC) and a multiple access channel (MAC), using a "nested codebook" structure in between. The particular application differentiates this communication setup from an ordinary cascade of a BCC and a MAC, and requires certain structures such as "nested codebooks", impurity channels, or additional power constraints. We investigate this problem both for discrete alphabets, where we characterize an achievable rate region, as well as for continuous alphabets with additive Gaussian noise, where we provide the capacity region. Hence, we establish the maximal achievable error-free communication rates for this particular problem, which constitute the fundamental limit achievable by any TDMA-based RFID protocol and the achievable rate region for any RFID protocol in the case of continuous alphabets under additive Gaussian noise.
I. INTRODUCTION
In this paper, we deal with a multiuser communication setup which consists of a "cascade" of a broadcast channel (BCC) and a multiple access channel (MAC). The encoder of the BCC part and the decoder of the MAC part are the same transceiver, and the decoders of the BCC part and the encoders of the MAC part are the mobile units of the system. The ultimate goal of the communication system considered in the paper is the following: the transceiver wants to "find out" some specific information possessed by the mobile units, and for this purpose it first broadcasts the "type" of the information it seeks to receive from each mobile unit. Then every mobile unit "sends" the corresponding information of the received type to the transceiver. This specific-type-of-information phenomenon differentiates the system at hand from an ordinary cascade of a BCC and a MAC, because in order to model this situation we employ a nested codebook structure at the MAC encoders, i.e. at the mobile units, which will be explained in detail in Section II-B.

Beyond its promising structure for modeling wireless communication networks, the problem at hand gives the fundamental limits of RFID protocols in two different ways, supposing the transceiver is the RFID reader, the mobile units are RFID tags, and the RFID reader knows the set of IDs of the RFID tags in the environment:

(i) The above-mentioned communication problem gives the fundamental limits achievable by TDMA-based RFID protocols, since the transceiver sends the TDMA time slots, which are designated to allow communication in a collision-free manner, using the BCC part, and then the mobile units use their corresponding time slot information in order to transmit their data to the RFID reader. Supposing an equal information rate, say $R_{ID}$, at each BCC branch, the maximum number of RFID tags that can be handled is $2^{R_{ID}}$, and the maximum data rate from tags to reader is the maximum rate that can be achieved using TDMA at the MAC part of the communication system.

(ii) The above-mentioned communication problem gives the fundamental limits of any RFID protocol, since the RFID reader transmits an "on-off" message over the BCC to the tags, and then the tags communicate their data back through the MAC simultaneously to the reader. The achievable rate region of the MAC part is the fundamental limit of any RFID protocol under the assumption that the receiver knows the set of IDs of the RFID tags in the environment.

(Author footnotes: Y. Altuğ is with the School of Electrical and Computer Engineering of Cornell University, Ithaca, NY, 14853, USA (e-mail: [email protected]). M. K. Mıhçak is with the Electrical and Electronic Engineering Department of Boğaziçi University, Istanbul, 34342, Turkey (e-mail: [email protected]). S. S. Kozat is with the Electrical and Electronic Engineering Department of Koç University, Rumeli Feneri Yolu, Sarıyer, Istanbul, 34450, Turkey (e-mail: [email protected]). Y. Altuğ is partially supported by TÜBİTAK Career Award no. 106E117; M. K. Mıhçak is partially supported by TÜBİTAK Career Award no. 106E117 and the TÜBA-GEBIP Award. In practical RFID systems, the problem of reader collision is also considered, which amounts to having multiple transceivers in our setup; here we concentrate on the "single reader (transceiver)" setup as a first step.)

The nested codebook structure used in the MAC part of this paper is similar to the "pseudo users" concept introduced in [4], where the authors investigate a special notion of capacity for time-slotted ALOHA systems by combining multiple access rate splitting and broadcast codes. However, in [4], the authors explicitly investigate the ALOHA protocol over a degraded additive Gaussian noise channel, where users communicate over a common channel using data packets with a predefined collision probability. Unlike [4], our codes achieve capacity in the usual sense, where the codewords are sent with arbitrarily small error probability.
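To make the TDMA interpretation in item (i) concrete, here is a minimal sketch (the function name and the 10-bit slot index are ours, invented for illustration, not from the paper): with a common per-branch BCC rate of $R_{ID}$ bits, the reader can address at most $2^{R_{ID}}$ distinct time slots, hence at most that many tags.

```python
def max_tags(r_id_bits: int) -> int:
    """With r_id_bits bits of slot-assignment information per BCC branch,
    the reader can distinguish at most 2**r_id_bits time slots, i.e. serve
    at most that many tags collision-free under TDMA."""
    return 2 ** r_id_bits

# Example: a 10-bit slot index supports up to 1024 tags.
print(max_tags(10))
```

This is only the addressing limit; the tag-to-reader throughput is separately limited by the MAC part, as the text notes.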
We also investigate a cascade structure including a BCC in the front and a different MAC in the end. We study this setup both for discrete alphabets, using imperfection channels to model the impurities of the actual physical system, as well as for continuous alphabets over an additive Gaussian noise channel by including appropriate power constraints.

We note that the nested codebook structure used in this paper differs from the nested codes defined in [5], [6]. In [5], nested codebooks, especially nested lattice codes, are explicitly defined from a multi-resolution point of view, where the nesting of codes provides progressively coarser to finer descriptions of the intended information. Here, our nested codebooks are independent from each other and convey different information.

The organization of the paper is as follows. In Section II we state the notation followed throughout the paper and formulate the communication problem considered. Section III is devoted to deriving an achievable rate region of the problem for the case of discrete alphabets, by also including "imperfection channels" in order to better model the practical phenomenon. In Section IV, we state the capacity region of the problem for the case of a Gaussian BCC and a Gaussian MAC, by also incorporating suitable power constraints. The paper ends with the conclusions given in Section V.

II. NOTATION AND PROBLEM STATEMENT
A. Notation
Boldface letters denote vectors; regular letters with subscripts denote individual elements of vectors. Furthermore, capital letters represent random variables and lowercase letters denote individual realizations of the corresponding random variable. The sequence $\{a_1, a_2, \ldots, a_N\}$ is compactly represented by $a^N$. The abbreviations "i.i.d.", "p.m.f." and "w.l.o.g." are shorthands for the terms "independent identically distributed", "probability mass function" and "without loss of generality", respectively.

(Footnote from Section I: the on-off message is also meaningful in practice as far as passive RFID tags are concerned, since they need to harvest external energy in order to operate.)
B. Problem Statement
In this paper, our major concern is finding the maximum achievable error-free rates for the following multiuser communication problem (for the sake of simplicity, we define the problem for the case of two mobile units; however, all of the results can easily be generalized to $M$ users using the same arguments employed in the paper): A transceiver first acts as a transmitter and broadcasts a pair of messages, $(W_1, W_2) \in \mathcal{W}_1 \times \mathcal{W}_2$, to the mobile units through the first memoryless communication channel. The mobile units decode the messages intended for them, i.e. the first (resp. second) mobile unit decides $\hat{W}_1$ (resp. $\hat{W}_2$), and then choose their messages accordingly, i.e. the first (resp. second) mobile unit chooses $M_1 \in \mathcal{M}_1^{\hat{W}_1}$ (resp. $M_2 \in \mathcal{M}_2^{\hat{W}_2}$), and simultaneously sends it to the transceiver, which this time acts as a receiver, through the second memoryless communication channel. Next, we give the quantitative definition of the communication system considered.

Definition 2.1:
The above-mentioned communication system consists of the following components:

(i) Eight discrete finite sets $\mathcal{X}, \mathcal{Y}_1, \mathcal{Y}_2, \mathcal{Q}_1, \mathcal{Q}_2, \hat{\mathcal{Q}}_1, \hat{\mathcal{Q}}_2, \mathcal{S}$.

(ii) A one-input two-output, discrete memoryless communication channel, termed the "broadcast channel part" or shortly the BCC part from now on, modeled by a conditional p.m.f. $p(y_1, y_2 | x)$ on $\mathcal{Y}_1 \times \mathcal{Y}_2 \times \mathcal{X}$. Using the memoryless property, we have the following expression for the $n$-th extension of the BCC part:

$$p(y_1^n, y_2^n | x^n) = \prod_{k=1}^{n} p(y_{1,k}, y_{2,k} | x_k). \quad (1)$$

(iii) The memoryless "imperfection channels", which model the impurities and the instantaneous erroneous behavior at the mobile units (especially useful in the modeling of RFID tags), given by conditional p.m.f.s $p(\hat{q}_i | q_i)$ on $\hat{\mathcal{Q}}_i \times \mathcal{Q}_i$. Using the memoryless property, we have the following expression for the $n$-th extension of the $i$-th imperfection channel:

$$p(\hat{q}_i^n | q_i^n) = \prod_{k=1}^{n} p(\hat{q}_{i,k} | q_{i,k}), \quad (2)$$

for $i \in \{1, 2\}$.

(iv) A two-input one-output, discrete memoryless communication channel, termed the "multiple access channel part" or shortly the MAC part from now on, given by a conditional p.m.f. $p(s | \hat{q}_1, \hat{q}_2)$ on $\mathcal{S} \times \hat{\mathcal{Q}}_1 \times \hat{\mathcal{Q}}_2$. Using the memoryless property, we have the following expression for the $n$-th extension of the MAC part:

$$p(s^n | \hat{q}_1^n, \hat{q}_2^n) = \prod_{k=1}^{n} p(s_k | \hat{q}_{1,k}, \hat{q}_{2,k}). \quad (3)$$

Next, we state the code definition.

Definition 2.2: A $\left(2^{nR_{ID,1}}, 2^{nR_{ID,2}}, 2^{nR_{Data,1}}, 2^{nR_{Data,2}}, n\right)$ code for the communication system given above consists of the following parts:

(i) A pair of transmitter messages, termed "broadcast channel messages" or shortly BCC messages from now on, to the mobile units, given as $(W_1, W_2) \in \mathcal{W}_1 \times \mathcal{W}_2$, where $\mathcal{W}_i \triangleq \{1, \ldots, 2^{nR_{ID,i}}\}$ for $i \in \{1, 2\}$.

(ii) The transceiver's encoding function, termed the "broadcast channel encoder" or shortly the BCC encoder from now on, given as $X^{BCC}: \mathcal{W}_1 \times \mathcal{W}_2 \to \mathcal{X}^n$, such that

$$X^{BCC}(W_1, W_2) = x^n(W_1, W_2). \quad (4)$$
(iii) The mobile units' decoding functions, termed "broadcast channel decoders" or shortly BCC decoders from now on, given by $g_i^{BCC}: \mathcal{Y}_i^n \to \mathcal{W}_i \cup \{0\}$, such that $g_i^{BCC}(Y_i^n) = \hat{W}_i$, for $i \in \{1, 2\}$, where $\{0\}$ corresponds to the "miss-type" error event.

(iv) The mobile units' messages corresponding to the decoded BCC messages $\hat{W}_i$, termed "multiple access channel messages" or shortly MAC messages from now on, $M_i \in \mathcal{M}_i^{\hat{W}_i}$, where $\mathcal{M}_i^{\hat{W}_i} \triangleq \{1, \ldots, 2^{nR_{Data,i}}\}$, for $i \in \{1, 2\}$. Note that this is the message part of a "nested codebook structure" corresponding to the decoded message $\hat{W}_i$ at each mobile unit.

(v) The mobile units' encoding functions, termed "multiple access channel encoders" or shortly MAC encoders from now on, given by $Q_i^{MAC}: \mathcal{M}_i^{\hat{W}_i} \to \mathcal{Q}_i^n$, for $i \in \{1, 2\}$, such that $Q_i^{MAC}(M_i) = q^n_{\hat{W}_i}(M_i)$. Note that the $q^n_{\hat{W}_i}(M_i)$'s are the codewords of the "nested codebook structure" corresponding to the decoded message $\hat{W}_i$ at each mobile unit.

(vi) The transceiver's decoding function, termed the "multiple access channel decoder" or shortly the MAC decoder from now on, given by $g^{MAC}: \mathcal{S}^n \to \mathcal{M}_1^{W_1} \times \mathcal{M}_2^{W_2}$.

(vii) Decoded messages at the transceiver: $(\hat{M}_1, \hat{M}_2) \in \mathcal{M}_1^{W_1} \times \mathcal{M}_2^{W_2}$. Note that since the transceiver knows the $(W_1, W_2)$ pair and tries to "learn" the corresponding $(M_1, M_2)$ pair simultaneously, it chooses the messages from the set $\mathcal{M}_1^{W_1} \times \mathcal{M}_2^{W_2}$.

Obviously, the communication system may be intuitively considered as a cascade of a two-user "broadcast channel" [1] and a two-user "multiple access channel" [1] with the following modifications: the employment of the nested codebook structure at the MAC encoders, and the inclusion of the imperfection channels. The aforementioned modified cascade, including the encoders, codewords and decoders at both the BCC and MAC parts, is shown in Figure 1 below. Now, we state the following "probability of error" related definitions, which will be used throughout the paper.
Definition 2.3:

(i) The conditional probability of error for the communication system is defined by

$$\lambda_{w_1,w_2,m_1,m_2} \triangleq 1 - \Pr\Big( \big[(\hat{W}_1,\hat{W}_2)=(w_1,w_2) \,\big|\, (W_1,W_2)=(w_1,w_2)\big] \wedge \big[(\hat{M}_1,\hat{M}_2)=(m_1,m_2) \,\big|\, (M_1,M_2)=(m_1,m_2)\big] \Big), \quad (5)$$

[Fig. 1. Block diagram representation of the multiuser communication system considered in the paper.]

and the maximal probability of error, $\lambda^{(n)}$, for the communication system is defined by

$$\lambda^{(n)} \triangleq \max_{w_1,w_2,m_1,m_2} \lambda_{w_1,w_2,m_1,m_2}. \quad (6)$$

(ii) The conditional probability of error for the BCC part, $\lambda^{w_1,w_2}_{BCC}$, is defined by

$$\lambda^{w_1,w_2}_{BCC} \triangleq \Pr\big( (\hat{W}_1, \hat{W}_2) \neq (w_1, w_2) \,\big|\, (W_1, W_2) = (w_1, w_2) \big), \quad (7)$$

and the average probability of error for the BCC part, $P^{(n)}_{e,BCC}$, is defined by

$$P^{(n)}_{e,BCC} \triangleq \Pr\big( (\hat{W}_1, \hat{W}_2) \neq (W_1, W_2) \big). \quad (8)$$

(iii) The conditional probability of error for the MAC part, $\lambda^{m_1,m_2}_{MAC}$, is defined by

$$\lambda^{m_1,m_2}_{MAC} \triangleq \Pr\big( (\hat{M}_1, \hat{M}_2) \neq (m_1, m_2) \,\big|\, (M_1, M_2) = (m_1, m_2), (\hat{W}_1, \hat{W}_2) = (w_1, w_2) \big), \quad (9)$$

and the average probability of error for the MAC part, $P^{(n)}_{e,MAC}$, is defined by

$$P^{(n)}_{e,MAC} \triangleq \Pr\big( (\hat{M}_1, \hat{M}_2) \neq (M_1, M_2) \,\big|\, (\hat{W}_1, \hat{W}_2) = (w_1, w_2) \big). \quad (10)$$

Note that, using (5), (7) and (9), we conclude that

$$\lambda_{w_1,w_2,m_1,m_2} = 1 - (1 - \lambda^{w_1,w_2}_{BCC})(1 - \lambda^{m_1,m_2}_{MAC}). \quad (11)$$

Next, achievability is defined as

Definition 2.4:
A rate quadruple $(R_{ID,1}, R_{ID,2}, R_{Data,1}, R_{Data,2})$ is said to be achievable if there exists a sequence of $\left(2^{nR_{ID,1}}, 2^{nR_{ID,2}}, 2^{nR_{Data,1}}, 2^{nR_{Data,2}}, n\right)$ codes such that $\lambda^{(n)} \to 0$ as $n \to \infty$.

III. DISCRETE CASE
In this section, we deal with the problem stated in Section II-B under the discrete random variables assumption.
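As a toy illustration of the nested codebook structure of Definition 2.2, the sketch below builds one independent MAC codebook per possible decoded BCC message and lets the MAC encoder select a codeword from the codebook indexed by the decoded $\hat{W}$. The alphabet, block length, and codebook sizes are invented for illustration; in the paper the codewords are drawn i.i.d. from $p(q_i)$.

```python
import random

random.seed(0)

n = 6             # block length (invented)
num_bcc_msgs = 4  # |W_i|, i.e. 2^{n R_ID,i} (invented)
num_mac_msgs = 8  # |M_i^w|, i.e. 2^{n R_Data,i} (invented)
alphabet = (0, 1)

# nested[w][m] is the codeword q^n_w(m): codebooks for different w are
# generated independently, so they carry independent information,
# unlike multi-resolution nested codes.
nested = {
    w: [tuple(random.choice(alphabet) for _ in range(n))
        for _ in range(num_mac_msgs)]
    for w in range(num_bcc_msgs)
}

def mac_encode(w_hat: int, m: int) -> tuple:
    """MAC encoder Q_i: pick codeword m from the codebook selected by w_hat."""
    return nested[w_hat][m]

cw = mac_encode(2, 5)
assert len(cw) == n and all(s in alphabet for s in cw)
```

The key point the sketch makes concrete: the decoded BCC message only selects *which* codebook is active; the MAC message then indexes a codeword inside it.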
A. Achievable Region for The General Case
The main result of this section is the following theorem:
Theorem 3.1: (Achievability-Discrete Case)
Any quadruple $(R_{ID,1}, R_{ID,2}, R_{Data,1}, R_{Data,2}) \in \mathcal{R}$ is achievable, where

$$\mathcal{R} \triangleq \big\{ (R_{ID,1}, R_{ID,2}, R_{Data,1}, R_{Data,2}) : R_{ID,1}, R_{ID,2}, R_{Data,1}, R_{Data,2} \geq 0,$$
$$R_{ID,1} < I(U; Y_1), \quad R_{ID,2} < I(V; Y_2), \quad R_{ID,1} + R_{ID,2} < I(U; Y_1) + I(V; Y_2) - I(U; V),$$
$$R_{Data,1} < I(Q_1; S \,|\, Q_2), \quad R_{Data,2} < I(Q_2; S \,|\, Q_1), \quad R_{Data,1} + R_{Data,2} < I(Q_1, Q_2; S) \big\},$$

for some $p(u, v, x)$ on $\mathcal{U} \times \mathcal{V} \times \mathcal{X}$ and $p(q_1, q_2, s)$ on $\mathcal{Q}_1 \times \mathcal{Q}_2 \times \mathcal{S}$, where

$$p(q_1, q_2, s) \triangleq \sum_{\hat{q}_1, \hat{q}_2} p(s \,|\, \hat{q}_1, \hat{q}_2)\, p(\hat{q}_1 \,|\, q_1)\, p(\hat{q}_2 \,|\, q_2)\, p(q_1)\, p(q_2), \quad (12)$$

for some $p(q_1)$, $p(q_2)$ on $\mathcal{Q}_1$, $\mathcal{Q}_2$, respectively.

Proof:
The proof follows by combining arguments from [2] and [1] for the BCC and MAC parts, respectively, while also taking the imperfection channels and the nested codebook structure into account.

W.l.o.g. we suppose $\epsilon \in (0, 1)$. (Since we want to show that $\lambda^{(n)} \to 0$ as $n \to \infty$, this will suffice: in the proof of the theorem, we show that for any sufficiently large $n$ and for any $\epsilon \in (0, 1)$, $\lambda^{(n)} \leq \epsilon$, which directly implies $\lambda^{(n)} \leq \epsilon'$ for any $\epsilon' \geq 1$.)

First, define $A_\epsilon^{(n)}(U)$ (resp. $A_\epsilon^{(n)}(V)$) as the set of $\epsilon$-typical sequences [1] $u^n \in \mathcal{U}^n$ (resp. $v^n \in \mathcal{V}^n$) for any given $p(u)$ (resp. $p(v)$) on $\mathcal{U}$ (resp. $\mathcal{V}$).

Next, for $w_1 \in \{1, \ldots, 2^{nR_{ID,1}}\}$, we define the following cells:

$$B_{w_1} \triangleq \big[ (w_1 - 1)\, 2^{n(I(U;Y_1) - R_{ID,1} - \epsilon)} + 1, \; w_1\, 2^{n(I(U;Y_1) - R_{ID,1} - \epsilon)} \big].$$

Similarly, for $w_2 \in \{1, \ldots, 2^{nR_{ID,2}}\}$, we define

$$C_{w_2} \triangleq \big[ (w_2 - 1)\, 2^{n(I(V;Y_2) - R_{ID,2} - \epsilon)} + 1, \; w_2\, 2^{n(I(V;Y_2) - R_{ID,2} - \epsilon)} \big],$$

w.l.o.g. supposing that $2^{n(I(U;Y_1) - R_{ID,1} - \epsilon)}, 2^{n(I(V;Y_2) - R_{ID,2} - \epsilon)} \in \mathbb{Z}^+$.

Encoding at the BCC part:

i) Generation of the codebook: Generate the codebook $C^{BCC}$ of size $2^{nR_{ID,1}} \times 2^{nR_{ID,2}} \times n$ such that the $(i, j, m)$-th element is $x_m(i, j)$, where the $x_m(i, j)$'s are i.i.d. realizations of $X$ with distribution $p(x) = \sum_{u,v} p(u, v, x)$, for all $i, j, m$, and reveal the codebook to both mobile units and the transceiver.

ii) Choose a pair $(W_1, W_2) \in \mathcal{W}_1 \times \mathcal{W}_2$ uniformly over $\mathcal{W}_1 \times \mathcal{W}_2$, i.e. $\Pr(W_1 = w_1, W_2 = w_2) = 1/\big(2^{nR_{ID,1}}\, 2^{nR_{ID,2}}\big)$, for all $(w_1, w_2) \in \mathcal{W}_1 \times \mathcal{W}_2$.

iii) Next, generate $2^{n(I(U;Y_1) - \epsilon)}$ i.i.d. $u^n$, such that

$$p(u^n) = \begin{cases} 1/|A_\epsilon^{(n)}(U)|, & \text{if } u^n \in A_\epsilon^{(n)}(U), \\ 0, & \text{otherwise.} \end{cases}$$

Similarly, generate $2^{n(I(V;Y_2) - \epsilon)}$ i.i.d. $v^n$, such that

$$p(v^n) = \begin{cases} 1/|A_\epsilon^{(n)}(V)|, & \text{if } v^n \in A_\epsilon^{(n)}(V), \\ 0, & \text{otherwise.} \end{cases}$$

Label these $u^n(k)$ (resp. $v^n(l)$), $k \in \big[1, 2^{n(I(U;Y_1) - \epsilon)}\big]$ (resp.
$l \in \big[1, 2^{n(I(V;Y_2) - \epsilon)}\big]$).

iv) If a message pair $(w_1, w_2)$ is to be transmitted, pick one pair $(u^n(k), v^n(l)) \in A_\epsilon^{(n)}(U, V) \cap (B_{w_1} \times C_{w_2})$. Then, find an $x^n(w_1, w_2)$ which is jointly $\epsilon$-typical with the $(u^n(k), v^n(l))$ pair and designate it as the corresponding codeword of $(w_1, w_2)$. Send it over the BCC part, $p(y_1, y_2 | x)$.

Decoding at the BCC part:

i) Find the indexes $\hat{k}$ (resp. $\hat{l}$) such that $(u^n(\hat{k}), y_1^n) \in A_\epsilon^{(n)}(U, Y_1)$ (resp. $(v^n(\hat{l}), y_2^n) \in A_\epsilon^{(n)}(V, Y_2)$). If $\hat{k}$, $\hat{l}$ are not unique or do not exist, declare an error, i.e. $\hat{W}_1 = 0$ and/or $\hat{W}_2 = 0$. Else, decide $\hat{W}_1 \in \mathcal{W}_1$ (resp. $\hat{W}_2 \in \mathcal{W}_2$) at mobile unit one (resp. two), such that $\hat{k} \in B_{\hat{W}_1}$ (resp. $\hat{l} \in C_{\hat{W}_2}$).

Encoding at the MAC part:

i) Generation of the codebook (nested codebook structure): Fix $p(q_1)$, $p(q_2)$. Let $p(q_1, q_2) = p(q_1)\, p(q_2)$. Generate the $w_i$-th codebook $C^{w_i}_{MAC}$ of size $2^{nR_{Data,i}} \times n$ such that the $(j, k)$-th element is $q_{w_i,k}(j)$, where the $q_{w_i,k}(j)$'s are i.i.d. realizations of $Q_i$ with distribution $p(q_i)$, for all $j \in \{1, \ldots, 2^{nR_{Data,i}}\}$, $k \in \{1, \ldots, n\}$ and $i \in \{1, 2\}$.

ii) Choose a message $M_i \in \mathcal{M}_i^{\hat{W}_i}$ uniformly for the $\hat{W}_i$ decided at the BCC part, i.e. $\Pr(M_i = m_i) = 2^{-nR_{Data,i}}$, for all $m_i \in \mathcal{M}_i^{\hat{W}_i}$ and for $i \in \{1, 2\}$. In order to send the message $m_i$, pick the corresponding codeword $q^n_{\hat{W}_i}(m_i)$ of $C^{\hat{W}_i}_{MAC}$ and send it over the imperfection channel $p(\hat{q}_i | q_i)$, resulting in $\hat{q}_i^n$, for $i \in \{1, 2\}$. The pair $(\hat{q}_1^n, \hat{q}_2^n)$ is the input to the MAC part, $p(s | \hat{q}_1, \hat{q}_2)$.
Decoding at the MAC part:

i) Find the pair of indexes $(\hat{M}_1, \hat{M}_2) \in \mathcal{M}_1^{w_1} \times \mathcal{M}_2^{w_2}$ such that $(q^n_{w_1}(\hat{M}_1), q^n_{w_2}(\hat{M}_2), s^n) \in A_\epsilon^{(n)}(Q_1, Q_2, S)$, where $A_\epsilon^{(n)}(Q_1, Q_2, S)$ is the $\epsilon$-typical set with respect to the distribution

$$p(q_1, q_2, s) = \sum_{\hat{q}_1, \hat{q}_2} p(s \,|\, \hat{q}_1, \hat{q}_2, q_1, q_2)\, p(\hat{q}_1, \hat{q}_2 \,|\, q_1, q_2)\, p(q_1)\, p(q_2) \quad (13)$$
$$= \sum_{\hat{q}_1, \hat{q}_2} p(s \,|\, \hat{q}_1, \hat{q}_2)\, p(\hat{q}_1, \hat{q}_2 \,|\, q_1, q_2)\, p(q_1)\, p(q_2) \quad (14)$$
$$= \sum_{\hat{q}_1, \hat{q}_2} p(s \,|\, \hat{q}_1, \hat{q}_2)\, p(\hat{q}_1 \,|\, q_1)\, p(\hat{q}_2 \,|\, q_2)\, p(q_1)\, p(q_2), \quad (15)$$

where (13) follows since $p(q_1, q_2) = p(q_1)\, p(q_2)$ (cf. the codebook generation of the MAC part), (14) follows since the MAC channel depends only on $(\hat{q}_1, \hat{q}_2)$, and (15) follows since the imperfection channels are independent and depend only on $q_1$ and $q_2$, respectively.

If such a $(\hat{M}_1, \hat{M}_2)$ pair does not exist or is not unique, then declare an error, i.e. $\hat{M}_1 = 0$ and/or $\hat{M}_2 = 0$; otherwise decide $(\hat{M}_1, \hat{M}_2)$.

Analysis of the Probability of Error:
We begin with the BCC part. By defining the error event as $E^{BCC} \triangleq \{ (\hat{W}_1(Y_1^n), \hat{W}_2(Y_2^n)) \neq (W_1, W_2) \}$, we have the following expression for the average probability of error, averaged over all messages $(w_1, w_2)$ and codebooks $C^{BCC}$:

$$P^{(n)}_{e,BCC} = \Pr\big( E^{BCC} \big) = \Pr\big( E^{BCC} \,|\, (W_1, W_2) = (1, 1) \big), \quad (16)$$

where (16) follows by noting the equality of the arithmetic average probability of error and the average probability of error given in (8), and the symmetry of the codebook construction at the BCC part.

Next, we define the following types of error events:

$$E^{BCC}_1 \triangleq \big\{ \nexists\, (u^n(k), v^n(l)) \in (B_1 \times C_1) \cap A_\epsilon^{(n)}(U, V) \big\}, \quad (17)$$
$$E^{BCC}_2 \triangleq \big\{ (u^n(k), v^n(l), x^n(w_1, w_2), y_1^n, y_2^n) \notin A_\epsilon^{(n)}(U, V, X, Y_1, Y_2) \big\}, \quad (18)$$
$$E^{BCC}_3 \triangleq \big\{ \exists\, \hat{k} \neq k \text{ s.t. } (u^n(\hat{k}), y_1^n) \in A_\epsilon^{(n)}(U, Y_1) \big\}, \quad (19)$$
$$E^{BCC}_4 \triangleq \big\{ \exists\, \hat{l} \neq l \text{ s.t. } (v^n(\hat{l}), y_2^n) \in A_\epsilon^{(n)}(V, Y_2) \big\}, \quad (20)$$

where (17) corresponds to the failure of the encoding, and (19) (resp. (20)) corresponds to the failure of the decoding at mobile unit one (resp. mobile unit two).

Using typicality arguments, it can be shown that $\Pr(E^{BCC}_i) \leq \epsilon/4$ for $i \in \{2, 3, 4\}$, and Lemma 1 of [2] also guarantees that $\Pr(E^{BCC}_1) \leq \epsilon/4$. Using these facts and the union bound, we conclude that

$$P^{(n)}_{e,BCC} = \Pr(E^{BCC}) = \Pr\big( E^{BCC} \,|\, (W_1, W_2) = (1, 1) \big) \leq \epsilon, \quad (21)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that $I(U; Y_1) > R_{ID,1} + \epsilon$, $I(V; Y_2) > R_{ID,2} + \epsilon$, and $I(U; Y_1) + I(V; Y_2) - I(U; V) > R_{ID,1} + R_{ID,2} + 2\epsilon + \delta(\epsilon)$, where $\delta(\epsilon) \to 0$ as $\epsilon \to 0$.

Further, using standard arguments for finding a code with negligible maximal probability of error (cf. [1] pp.
203-204) from the one with $P^{(n)}_{e,BCC} \leq \epsilon$, we conclude that

$$\lambda^{(n)}_{BCC} \triangleq \max_{w_1,w_2} \lambda^{w_1,w_2}_{BCC} \leq \epsilon, \quad (22)$$

for any $\epsilon > 0$ and for sufficiently large $n$, which concludes the BCC part.

By defining the error event as $E^{MAC} \triangleq \{ (\hat{M}_1(S^n), \hat{M}_2(S^n)) \neq (M_1, M_2) \,|\, (\hat{W}_1, \hat{W}_2) = (w_1, w_2) \}$, we have the following expression for the average probability of error, averaged over all messages $(m_1, m_2)$ and the codebooks corresponding to the messages, $C^{w_1}_{MAC}$ and $C^{w_2}_{MAC}$:

$$P^{(n)}_{e,MAC} = \Pr\big( E^{MAC} \big) = \Pr\big( E^{MAC} \,|\, (M_1, M_2) = (1, 1) \big), \quad (23)$$

where (23) follows by noting the equality of the arithmetic average probability of error and the average probability of error given in (10), and the symmetry of the nested codebook construction at the MAC part.

Next, we define the following events:

$$E^{MAC}_{ij} \triangleq \big\{ (q^n_{w_1}(i), q^n_{w_2}(j), s^n) \in A_\epsilon^{(n)}(Q_1, Q_2, S) \big\}. \quad (24)$$

Using the union bound and appropriately bounding each error event by exploiting typicality arguments, one can show that

$$P^{(n)}_{e,MAC} = \Pr\big( E^{MAC} \big) = \Pr\big( E^{MAC} \,|\, (M_1, M_2) = (1, 1) \big) \leq \epsilon, \quad (25)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that $I(Q_1; S | Q_2) - R_{Data,1} > \epsilon$, $I(Q_2; S | Q_1) - R_{Data,2} > \epsilon$, and $I(Q_1, Q_2; S) - (R_{Data,1} + R_{Data,2}) > \epsilon$.

Further, using standard arguments for finding a code with negligible maximal probability of error (cf. [1] pp. 203-204) from the one with $P^{(n)}_{e,MAC} \leq \epsilon$, we conclude that

$$\lambda^{(n)}_{MAC} \triangleq \max_{m_1,m_2} \lambda^{m_1,m_2}_{MAC} \leq \epsilon, \quad (26)$$

for any $\epsilon > 0$ and for sufficiently large $n$, which concludes the MAC part.

Next, we sum things up and conclude the proof in the following manner. First, by plugging (11) into (6), we have

$$\lambda^{(n)} = \max_{w_1,w_2,m_1,m_2} \big( \lambda^{w_1,w_2}_{BCC} + \lambda^{m_1,m_2}_{MAC} - \lambda^{w_1,w_2}_{BCC}\, \lambda^{m_1,m_2}_{MAC} \big). \quad (27)$$
Further, using the fact that the cost function in (27) is monotonically increasing in both $\lambda^{w_1,w_2}_{BCC}$ and $\lambda^{m_1,m_2}_{MAC}$, we conclude (cf. (22) and (26)) that

$$\lambda^{(n)} \leq 2\epsilon - \epsilon^2, \quad (28)$$

for any $0 < \epsilon < 1$ and sufficiently large $n$. Since $\epsilon$ may be arbitrarily small, (28) concludes the proof.

IV. POWER CONSTRAINED GAUSSIAN CASE
A. Problem Statement
In this section, we generalize the communication problem stated in Section II-B to continuous random variables under the assumption of Gaussian noise and power constraints on the codebooks. To be more precise, we have the problem depicted in Figure 2, with the power constraints

$$\mathbb{E}\big[ X^2 \big] \leq P, \quad (29)$$
$$\mathbb{E}\big[ Q_1^2(\hat{W}_1) \big] \leq \alpha_1 P_1, \quad (30)$$
$$\mathbb{E}\big[ Q_2^2(\hat{W}_2) \big] \leq \alpha_2 P_2, \quad (31)$$

such that $\alpha_1, \alpha_2 < 1$ and $P_1 + P_2 \leq P$, where $P_1$ (resp. $P_2$) is the power delivered to mobile unit one (resp. two), and w.l.o.g. we assume that $N_1 < N_2$.

[Fig. 2. Block diagram representation of the multiuser communication system under the Gaussian noise assumption, with noise terms $Z_1 \sim \mathcal{N}(0, N_1)$, $Z_2 \sim \mathcal{N}(0, N_2)$ at the BCC outputs and $Z_3 \sim \mathcal{N}(0, N_3)$ at the MAC output.]
Note that both Definition 2.1 (excluding the imperfection channels, which are irrelevant for this case) and Definition 2.2 remain valid for this case, with $\mathcal{X} = \mathcal{Q}_1 = \mathcal{Q}_2 = \mathcal{S} = \mathbb{R}$.

Remark 4.1:

(i) Observe that we model the "imperfection channel" of the discrete case as an additional power constraint in the Gaussian case.

(ii) The BCC part in the Gaussian case at hand is equivalent to a "degraded BCC", which enables us to state the capacity region instead of characterizing an achievable region only.
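The degradedness in Remark 4.1(ii) can be checked numerically: when $N_1 < N_2$, the weaker output $Y_2 = X + Z_2$ has the same distribution as $Y_1 + Z'$, where $Z' \sim \mathcal{N}(0, N_2 - N_1)$ is independent of $Y_1 = X + Z_1$, since independent Gaussian variances add. A Monte Carlo sketch with invented noise variances (these values are ours, chosen only for illustration):

```python
import random
import statistics

random.seed(1)

# Invented variances with N1 < N2, as assumed w.l.o.g. in the text.
N1, N2, trials = 1.0, 4.0, 200_000

# Build Z2-equivalent noise as Z1 + Z' with Var(Z') = N2 - N1.
z1 = [random.gauss(0.0, N1 ** 0.5) for _ in range(trials)]
z_prime = [random.gauss(0.0, (N2 - N1) ** 0.5) for _ in range(trials)]
z2_equiv = [a + b for a, b in zip(z1, z_prime)]  # distributed as N(0, N2)

var = statistics.pvariance(z2_equiv)
assert abs(var - N2) < 0.1  # empirical variance is close to N2
```

This additive decomposition is exactly what makes the Gaussian BCC stochastically degraded, so the second mobile unit's observation is a noisier version of the first's.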
B. Capacity Region for Gaussian Case
In this section, we state the capacity region of the communication system given in Section IV-A. Note that throughout the section all logarithms are base $e$; in other words, the unit of information is "nats".

Theorem 4.1:
The capacity region, $\mathcal{R} \subset \mathbb{R}^4$, of the system shown in Figure 2 is given by

$$\mathcal{R} \triangleq \Big\{ (R_{ID,1}, R_{ID,2}, R_{Data,1}, R_{Data,2}) : R_{ID,1}, R_{ID,2}, R_{Data,1}, R_{Data,2} \geq 0,$$
$$R_{ID,1} < \tfrac{1}{2} \log\Big(1 + \tfrac{\alpha P}{N_1}\Big), \quad R_{ID,2} < \tfrac{1}{2} \log\Big(1 + \tfrac{(1-\alpha) P}{N_2 + \alpha P}\Big),$$
$$R_{Data,1} < \tfrac{1}{2} \log\Big(1 + \tfrac{\alpha_1 \alpha P}{N_3}\Big), \quad R_{Data,2} < \tfrac{1}{2} \log\Big(1 + \tfrac{\alpha_2 (1-\alpha) P}{N_3}\Big),$$
$$R_{Data,1} + R_{Data,2} < \tfrac{1}{2} \log\Big(1 + \tfrac{\alpha_1 \alpha P + \alpha_2 (1-\alpha) P}{N_3}\Big), \;\text{ s.t. } 0 \leq \alpha \leq 1,\; 0 \leq \alpha_1, \alpha_2 \leq 1 \Big\}, \quad (32)$$

where $\alpha$ may be chosen arbitrarily in the given range and $\alpha_1$ and $\alpha_2$ are system parameters.
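The bounds of Theorem 4.1 are straightforward to evaluate numerically. The sketch below computes the five right-hand sides in nats for a given power split; the parameter values and dictionary labels are ours, invented for illustration:

```python
from math import log

def gaussian_region_bounds(P, N1, N2, N3, alpha, a1, a2):
    """Evaluate the five rate bounds of Theorem 4.1 in nats (base-e logs,
    per the paper's convention). alpha splits the BCC power; a1, a2 are
    the system parameters alpha_1, alpha_2."""
    assert 0 <= alpha <= 1 and 0 <= a1 <= 1 and 0 <= a2 <= 1
    return {
        "R_ID1":     0.5 * log(1 + alpha * P / N1),
        "R_ID2":     0.5 * log(1 + (1 - alpha) * P / (N2 + alpha * P)),
        "R_Data1":   0.5 * log(1 + a1 * alpha * P / N3),
        "R_Data2":   0.5 * log(1 + a2 * (1 - alpha) * P / N3),
        "R_Data_sum": 0.5 * log(1 + (a1 * alpha * P + a2 * (1 - alpha) * P) / N3),
    }

# Invented operating point: P = 10, N1 < N2, equal power split.
b = gaussian_region_bounds(P=10.0, N1=1.0, N2=2.0, N3=1.0,
                           alpha=0.5, a1=0.8, a2=0.8)

# Sanity check: the sum-rate bound never exceeds the two single-user
# bounds added together, since log(1+x+y) <= log(1+x) + log(1+y).
assert b["R_Data_sum"] <= b["R_Data1"] + b["R_Data2"] + 1e-12
```

Sweeping `alpha` over $[0, 1]$ traces out the trade-off between the two BCC branches, exactly as the theorem's statement allows.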
1) Achievability:
In this section, we prove the forward part of Theorem 4.1; in other words, the following theorem:
Theorem 4.2:
For any rate quadruple $(R_{ID,1}, R_{ID,2}, R_{Data,1}, R_{Data,2}) \in \mathcal{R}$, there exists a sequence of $\left(2^{nR_{ID,1}}, 2^{nR_{ID,2}}, 2^{nR_{Data,1}}, 2^{nR_{Data,2}}, n\right)$ codes with arbitrarily small probability of error for sufficiently large $n$, provided that

$$\tfrac{1}{2} \log\Big(1 + \tfrac{\alpha P}{N_1}\Big) > R_{ID,1} + \epsilon, \quad (33)$$
$$\tfrac{1}{2} \log\Big(1 + \tfrac{(1-\alpha) P}{\alpha P + N_2}\Big) > R_{ID,2} + \epsilon, \quad (34)$$
$$\tfrac{1}{2} \log\Big(1 + \tfrac{\alpha_1 \alpha P}{N_3}\Big) > R_{Data,1} + 3\epsilon, \quad (35)$$
$$\tfrac{1}{2} \log\Big(1 + \tfrac{\alpha_2 (1-\alpha) P}{N_3}\Big) > R_{Data,2} + 3\epsilon, \quad (36)$$
$$\tfrac{1}{2} \log\Big(1 + \tfrac{\alpha_1 \alpha P + \alpha_2 (1-\alpha) P}{N_3}\Big) > R_{Data,1} + R_{Data,2} + 4\epsilon, \quad (37)$$

for any $\epsilon > 0$, $0 \leq \alpha \leq 1$ and $0 \leq \alpha_1, \alpha_2 \leq 1$.

Proof:
In order to prove the theorem, we use superposition coding [1] at the BCC part and standard random coding at the MAC part. W.l.o.g. suppose $\epsilon \in (0, 1/2)$. (Since we want to show that $\lambda^{(n)} \to 0$ as $n \to \infty$, this will suffice: in the proof of the theorem, we show that for any sufficiently large $n$ and for any $\epsilon \in (0, 1/2)$, $\lambda^{(n)} \leq \epsilon$, which directly implies $\lambda^{(n)} \leq \epsilon'$ for any $\epsilon' \geq 1/2$.)

Encoding at the BCC part:

i) Generation of the codebook (superposition coding): Generate codebooks $C^{BCC}_1$ (resp. $C^{BCC}_2$) with corresponding rate $R_{ID,1}$ (resp. $R_{ID,2}$) such that both $R_{ID,1}$ and $R_{ID,2}$ satisfy conditions (33) and (34), where

$$C^{BCC}_1 \triangleq [x_{1,i}(w_1)], \quad (38)$$

such that each $x_{1,i}(w_1)$ is an i.i.d. realization of $X_1 \sim \mathcal{N}(0, \alpha P - \epsilon/2)$, and

$$C^{BCC}_2 \triangleq [x_{2,i}(w_2)], \quad (39)$$

such that each $x_{2,i}(w_2)$ is an i.i.d. realization of $X_2 \sim \mathcal{N}(0, (1-\alpha) P - \epsilon/2)$. Reveal both $C^{BCC}_1$ and $C^{BCC}_2$ to each mobile unit.

ii) Choose a message pair $(w_1, w_2) \in \mathcal{W}_1 \times \mathcal{W}_2$ uniformly over $\mathcal{W}_1 \times \mathcal{W}_2$, i.e. $\Pr(W_1 = w_1, W_2 = w_2) = 1/2^{n(R_{ID,1} + R_{ID,2})}$, for all $(w_1, w_2) \in \mathcal{W}_1 \times \mathcal{W}_2$.

iii) In order to send the message pair $(w_1, w_2)$, take $x_1^n(w_1)$ from $C^{BCC}_1$ and $x_2^n(w_2)$ from $C^{BCC}_2$ and send $x^n(w_1, w_2) \triangleq x_1^n(w_1) + x_2^n(w_2)$ over the BCC to both sides, yielding $Y_1 \triangleq x^n(w_1, w_2) + Z_1$ at mobile unit one and $Y_2 \triangleq x^n(w_1, w_2) + Z_2$ at mobile unit two, where $Z_1$ and $Z_2$ are arbitrarily correlated with marginal distributions $Z_1 \sim \mathcal{N}(0, N_1)$, $Z_2 \sim \mathcal{N}(0, N_2)$. Note that the law of large numbers ensures that $x^n(w_1, w_2)$ satisfies the power constraint (29).

Decoding at the BCC part:

i) Upon receiving $y_2^n$, the second mobile unit performs jointly typical decoding, i.e. decides the unique $\hat{W}_2 \in \mathcal{W}_2$ such that $(y_2^n, x_2^n(\hat{W}_2)) \in A_\epsilon^{(n)}(X_2, Y_2)$. If such a $\hat{W}_2 \in \mathcal{W}_2$ does not exist or is not unique, it declares an error, i.e.
$\hat{W}_2 = 0$. Mobile unit one also performs the same jointly typical decoding, first with $y_1^n$, in order to decide the unique $\hat{W}_2 \in \mathcal{W}_2$ such that $(y_1^n, x_2^n(\hat{W}_2)) \in A_\epsilon^{(n)}(X_2, Y_1)$. If such a $\hat{W}_2 \in \mathcal{W}_2$ does not exist or is not unique, it declares an error, i.e. $\hat{W}_2 = 0$. After deciding on $\hat{W}_2$, mobile unit one calculates the corresponding $\tilde{y}_1^n \triangleq y_1^n - x_2^n(\hat{W}_2)$ and then performs jointly typical decoding, i.e. decides the unique $\hat{W}_1 \in \mathcal{W}_1$ such that $(\tilde{y}_1^n, x_1^n(\hat{W}_1)) \in A_\epsilon^{(n)}(X_1, Y_1)$. If such a $\hat{W}_1 \in \mathcal{W}_1$ does not exist or is not unique, it declares an error, i.e. $\hat{W}_1 = 0$.

Encoding at the MAC part:

i) Generation of the codebook (nested codebook structure): Fix $f(q_1)$, $f(q_2)$. Let $f(q_1, q_2) = f(q_1)\, f(q_2)$. Generate the $w_1$-th (resp. $w_2$-th) codebook as $C^{w_1}_{MAC} \triangleq [q_{w_1,j}(m_1)]$ (resp. $C^{w_2}_{MAC} \triangleq [q_{w_2,j}(m_2)]$), such that the $q_{w_1,j}(m_1)$'s (resp. $q_{w_2,j}(m_2)$'s) are i.i.d. realizations of $Q_1 \sim \mathcal{N}(0, \alpha_1 \alpha P - \epsilon)$ (resp. $Q_2 \sim \mathcal{N}(0, \alpha_2 (1-\alpha) P - \epsilon)$) for all $w_1 \in \{1, \ldots, 2^{nR_{ID,1}}\}$ (resp. $w_2 \in \{1, \ldots, 2^{nR_{ID,2}}\}$), $m_1 \in \{1, \ldots, 2^{nR_{Data,1}}\}$ (resp. $m_2 \in \{1, \ldots, 2^{nR_{Data,2}}\}$) and $j \in \{1, \ldots, n\}$.

ii) Choose a message $M_i \in \mathcal{M}_i^{\hat{W}_i}$ uniformly, i.e. $\Pr(M_i = m_i) = 1/2^{nR_{Data,i}}$, for all $m_i \in \mathcal{M}_i^{\hat{W}_i}$ and for $i \in \{1, 2\}$. In order to send a message $m_i$, take the corresponding codeword $q^n_{\hat{W}_i}(m_i)$ of $C^{\hat{W}_i}_{MAC}$ and send it over the MAC, for $i \in \{1, 2\}$, resulting in $S^n \triangleq q^n_{\hat{W}_1}(m_1) + q^n_{\hat{W}_2}(m_2) + Z_3^n$.

Decoding at the MAC part:

i) Find the pair of indexes $(\hat{M}_1, \hat{M}_2) \in \mathcal{M}_1^{w_1} \times \mathcal{M}_2^{w_2}$ such that $(q^n_{w_1}(\hat{M}_1), q^n_{w_2}(\hat{M}_2), s^n) \in A_\epsilon^{(n)}(Q_1, Q_2, S)$. If such a pair does not exist or is not unique, then declare an error, i.e. $\hat{M}_1 = 0$ and/or $\hat{M}_2 = 0$; otherwise decide $(\hat{M}_1, \hat{M}_2)$.

Analysis of the Probability of Error:
We begin with the BCC part. First, note that (16) is still valid, as is the error event definition. Next, we define the following types of error events:

$$E^{BCC}_1 \triangleq \Big\{ \tfrac{1}{n} \sum_{j=1}^{n} x_j^2(1, 1) > P \Big\}, \quad (40)$$
$$E^{BCC}_{2,i} \triangleq \big\{ (x_2^n(i), y_1^n) \in A_\epsilon^{(n)}(X_2, Y_1), \text{ s.t. } i \neq 1 \big\}, \quad (41)$$
$$E^{BCC}_{3,j} \triangleq \big\{ (x_1^n(j), \tilde{y}_1^n) \in A_\epsilon^{(n)}(X_1, Y_1), \text{ s.t. } j \neq 1 \big\}, \quad (42)$$
$$E^{BCC}_{4,k} \triangleq \big\{ (x_2^n(k), y_2^n) \in A_\epsilon^{(n)}(X_2, Y_2), \text{ s.t. } k \neq 1 \big\}, \quad (43)$$

where (40) corresponds to the violation of the power constraint, (41) corresponds to the failure of the first step of the decoding at mobile unit one, (42) corresponds to the failure of the second step of the decoding at mobile unit one, and (43) corresponds to the failure of the decoding at mobile unit two.

Using the union bound and appropriately bounding the probability of each error event by typicality arguments (except for the power constraint term, which follows from the law of large numbers), one can show that

$$P^{(n)}_{e,BCC} = \Pr\big( E^{BCC} \big) = \Pr\big( E^{BCC} \,|\, (W_1, W_2) = (1, 1) \big) \leq \epsilon, \quad (44)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that $\tfrac{1}{2}\log\big(1 + \tfrac{\alpha P}{N_1}\big) - R_{ID,1} > \epsilon$ (cf. (33)), $\tfrac{1}{2}\log\big(1 + \tfrac{(1-\alpha)P}{\alpha P + N_2}\big) - R_{ID,2} > \epsilon$ (cf. (34)), and $\tfrac{1}{2}\log\big(1 + \tfrac{(1-\alpha)P}{\alpha P + N_1}\big) - R_{ID,2} > \epsilon$ (which is guaranteed by recalling $N_1 < N_2$ and (34)).

Further, using standard arguments for finding a code with negligible maximal probability of error (cf. [1] pp. 203-204) from the one with $P^{(n)}_{e,BCC} \leq \epsilon$, we conclude that

$$\lambda^{(n)}_{BCC} \triangleq \max_{w_1,w_2} \lambda^{w_1,w_2}_{BCC} \leq \epsilon, \quad (45)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that (33) and (34) hold, which concludes the BCC part.

Now, we continue with the MAC part and note that (23) is still valid, as is the error event definition.
We additionally include the following type of error event, which deals with the power constraints:

$$E^{MAC}_{0,i} \triangleq \Big\{ \tfrac{1}{n} \sum_{j=1}^{n} q^2_{w_i,j}(1) > \alpha_i P_i \Big\}, \quad (46)$$

for $i \in \{1, 2\}$, such that $P_1 = \alpha P$ and $P_2 = (1-\alpha) P$, where $\alpha$ is the same as the one given in the BCC case.

Using the union bound and appropriately bounding the probability of each error event by typicality arguments (except for the power constraint related terms, which follow from the law of large numbers), one can show that

$$P^{(n)}_{e,MAC} = \Pr\big( E^{MAC} \big) = \Pr\big( E^{MAC} \,|\, (M_1, M_2) = (1, 1) \big) \leq \epsilon, \quad (47)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that $\tfrac{1}{2}\log\big(1 + \tfrac{\alpha_1 \alpha P}{N_3}\big) > R_{Data,1} + 3\epsilon$, $\tfrac{1}{2}\log\big(1 + \tfrac{\alpha_2 (1-\alpha) P}{N_3}\big) > R_{Data,2} + 3\epsilon$, and $\tfrac{1}{2}\log\big(1 + \tfrac{\alpha_1 \alpha P + \alpha_2 (1-\alpha) P}{N_3}\big) > R_{Data,1} + R_{Data,2} + 4\epsilon$.

Further, using standard arguments for finding a code with negligible maximal probability of error (cf. [1] pp. 203-204) from the one with $P^{(n)}_{e,MAC} \leq \epsilon$, we conclude that

$$\lambda^{(n)}_{MAC} \triangleq \max_{m_1,m_2} \lambda^{m_1,m_2}_{MAC} \leq \epsilon, \quad (48)$$

for any $\epsilon > 0$ and sufficiently large $n$, provided that (35), (36) and (37) hold, which concludes the MAC part.

Following similar arguments as in Section III-A and using (45) and (48), we conclude that

$$\lambda^{(n)} \leq \epsilon (2 - \epsilon), \quad (49)$$

for any $0 < \epsilon < 1/2$, where $\lambda^{(n)}$ is as defined in (6). Since $\epsilon$ may be arbitrarily small, (49) concludes the proof.
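The way the two stages combine can be sanity-checked numerically: if the BCC and MAC stages each fail with probability at most $\epsilon$, and overall success requires both stages to succeed as in (11), the overall error is $1 - (1-\epsilon)^2 = \epsilon(2-\epsilon) \leq 2\epsilon$. A minimal sketch (the function name is ours):

```python
def total_error(lam_bcc: float, lam_mac: float) -> float:
    """Overall error probability when the BCC and MAC stages fail
    independently, as in (11): success requires both stages to succeed."""
    return 1.0 - (1.0 - lam_bcc) * (1.0 - lam_mac)

eps = 0.01
lam = total_error(eps, eps)
assert abs(lam - eps * (2 - eps)) < 1e-12  # matches the eps(2 - eps) form
assert lam <= 2 * eps                      # and the cruder union-style bound
```

Since $\epsilon(2-\epsilon) \to 0$ as $\epsilon \to 0$, driving each stage's error down drives the end-to-end error down, which is the content of the concluding step.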
2) Converse:
In this section, we prove the converse part of Theorem 4.1; in other words, we establish the following theorem:
Theorem 4.3:
For any sequence of $(2^{nR_{ID_1}}, 2^{nR_{ID_2}}, 2^{nR_{Data_1}}, 2^{nR_{Data_2}}, n)$-RFID codes with $P_e^{(n)} < \epsilon$ for any $\epsilon > 0$, we have $(R_{ID_1}, R_{ID_2}, R_{Data_1}, R_{Data_2}) \in \mathcal{R}$.

Proof:
The proof relies on ideas from [3] for the BCC part and on [1] for the MAC part. First of all, we have

$$P_e^{(n)} = 1 - \Pr\left( \left[ (\hat{W}_1, \hat{W}_2) = (W_1, W_2) \right] \wedge \left[ (\hat{M}_1, \hat{M}_2) = (M_1, M_2) \right] \right) = 1 - \Pr\left( (\hat{W}_1, \hat{W}_2) = (W_1, W_2) \right) \Pr\left( (\hat{M}_1, \hat{M}_2) = (M_1, M_2) \,\middle|\, (\hat{W}_1, \hat{W}_2) = (W_1, W_2) \right). \qquad (50)$$

Using (50), noting that $P_e^{(n)} \leq \epsilon$, and observing that each of the two probabilities above is at most one, we obtain

$$P_{e,BCC}^{(n)} = 1 - \Pr\left( (\hat{W}_1, \hat{W}_2) = (W_1, W_2) \right) \leq \epsilon, \qquad (51)$$

and

$$P_{e,MAC}^{(n)} = 1 - \Pr\left( (\hat{M}_1, \hat{M}_2) = (M_1, M_2) \,\middle|\, (\hat{W}_1, \hat{W}_2) = (W_1, W_2) \right) \leq \epsilon. \qquad (52)$$

Next, (51) enables us to use the result of [3] for the BCC case; hence

$$R_{ID_1} \leq \frac{1}{2}\log\left(1 + \frac{\alpha P}{N_1}\right), \qquad (53)$$

$$R_{ID_2} \leq \frac{1}{2}\log\left(1 + \frac{(1-\alpha)P}{\alpha P + N_2}\right), \qquad (54)$$

for any $0 \leq \alpha \leq 1$. Further, (52) enables us to use the result of [1] for the MAC case; hence

$$R_{Data_1} \leq \frac{1}{2}\log\left(1 + \frac{\alpha_1 \alpha P}{N_3}\right), \qquad (55)$$

$$R_{Data_2} \leq \frac{1}{2}\log\left(1 + \frac{\alpha_2 (1-\alpha) P}{N_3}\right), \qquad (56)$$

$$R_{Data_1} + R_{Data_2} \leq \frac{1}{2}\log\left(1 + \frac{\alpha_1 \alpha P + \alpha_2 (1-\alpha) P}{N_3}\right). \qquad (57)$$

Combining (53), (54), (55), (56) and (57), we conclude that for any sequence of $(2^{nR_{ID_1}}, 2^{nR_{ID_2}}, 2^{nR_{Data_1}}, 2^{nR_{Data_2}}, n)$-RFID codes with $P_e^{(n)} \to 0$, we have $(R_{ID_1}, R_{ID_2}, R_{Data_1}, R_{Data_2}) \in \mathcal{R}$, which concludes the proof.

V. CONCLUSION
In this paper, we studied the RFID capacity problem by modeling the underlying structure as a specific multiuser communication system represented by a cascade of a BCC and a MAC. The BCC and MAC parts model the communication from the RFID reader to the mobile units and from the mobile units back to the RFID reader, respectively. To connect the BCC and MAC parts, we used a "nested codebook" structure. We further introduced impurity channels for the discrete alphabet case, as well as additional power limitations for the continuous alphabet additive Gaussian noise case, to accurately model the physical medium of the RFID system. We provided the achievable rate region for the discrete alphabet case and the capacity region for the continuous alphabet additive Gaussian noise case. Hence, overall, we characterized the maximal achievable error-free communication rates for any RFID protocol in the latter case.

REFERENCES

[1] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. New York: Wiley, 2006.
[2] A. El Gamal and E. C. van der Meulen, "A Proof of Marton's Coding Theorem for the Discrete Memoryless Broadcast Channel," IEEE Trans. Inf. Theory, vol. IT-27, no. 1, pp. 120–122, Jan. 1981.
[3] P. P. Bergmans, "A Simple Converse for Broadcast Channels with Additive White Gaussian Noise," IEEE Trans. Inf. Theory, vol. IT-20, no. 2, pp. 279–280, Mar. 1974.
[4] M. Medard, J. Huang, A. J. Goldsmith, S. P. Meyn, and T. P. Coleman, "Capacity of Time-Slotted ALOHA Packetized Multiple-Access Systems over the AWGN Channel," IEEE Trans. Wireless Commun., vol. 3, no. 2, pp. 486–499, Mar. 2004.
[5] R. Zamir, S. Shamai, and U. Erez, "Nested Linear/Lattice Codes for Structured Multiterminal Binning," IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1250–1276, June 2002.
[6] R. J. Barron, B. Chen, and G. W. Wornell, "The Duality Between Information Embedding and Source Coding With Side Information and Some Applications,"