Gaussian Sensor Networks with Adversarial Nodes
Emrah Akyol, Kenneth Rose
{eakyol, rose}@ece.ucsb.edu
University of California, Santa Barbara

Tamer Başar
[email protected]
University of Illinois at Urbana-Champaign

Abstract—This paper studies a particular sensor network model which involves a single Gaussian source observed by many sensors, subject to additive independent Gaussian observation noise. The sensors communicate with the receiver over an additive Gaussian multiple access channel. The aim of the receiver is to reconstruct the underlying source with minimum mean squared error. The scenario of interest here is one where some of the sensors act as adversaries (jammers): they strive to maximize distortion. We show that the ability of the transmitter sensors to secretly agree on a random event, that is, "coordination", plays a key role in the analysis. Depending on the coordination capability of the sensors and the receiver, we consider two problem settings. The first setting involves transmitters with "coordination" capabilities in the sense that all transmitters can use an identical realization of randomized encoding for each transmission. In this case, the optimal strategy for the adversarial sensors also requires coordination, whereby they all generate the same realization of independent and identically distributed Gaussian noise. In the second setting, the transmitter sensors are restricted to use fixed, deterministic encoders; this setting, which corresponds to a Stackelberg game, does not admit a saddle-point solution. We show that the optimal strategy for all sensors is then uncoded communication, where the encoding functions of the adversaries and the transmitters are in opposite directions. For both settings, digital compression and communication is strictly suboptimal.
Index Terms—Sensor networks, game theory, uncoded communication, analog mappings, coordinated transmission
I. INTRODUCTION
Distributed compression and communication over sensor networks has been an important problem; see e.g. [1] for an overview. Joint source-channel coding (JSCC) has certain advantages over separate source and channel coding, and several specific aspects of JSCC over sensor networks have been studied in previous work; see e.g. [2] and the references therein. In this paper, we extend the game theoretic analysis of the Gaussian test channel [3]–[6] to the Gaussian sensor networks introduced by [7]. A particular extension of [7] to asymmetric sensor networks was studied in [8], [9]. Communication games within general multiple input-multiple output settings were considered in [10], [11].

In this paper, we consider two settings for the sensor network model illustrated in Figure 1 and explained in detail in Section II. The first M sensors (i.e., the transmitters) and the receiver constitute Player 1 (the minimizer), and the remaining K sensors (i.e., the adversaries) constitute Player 2 (the maximizer). This zero-sum game does not admit a saddle-point in pure strategies (fixed encoding functions), but admits one in mixed strategies (randomized functions). In the first setting, the transmitter sensors can use randomized encoders, i.e., all transmitters and the receiver agree on some (pseudo)random sequence, denoted as {γ} in the paper. We coin the term "coordination" for this capability and show that it has a pivotal role in the analysis and the implementation of optimal strategies for both the transmitter and adversarial sensors. In the first setting, we consider the more general case of mixed strategies and present the saddle-point solution in Theorem 1. In the second setting, the encoding functions of the transmitters are limited to fixed mappings.

(This work was supported in part by the NSF under grants CCF-1016861, CCF-1118075 and CCF-1111342.)
This setting can be viewed as a Stackelberg game where Player 1 is the leader, restricted to pure strategies, and Player 2 is the follower, who observes Player 1's choice of pure strategies and plays accordingly. We present in Theorem 2 the optimal strategies for this Stackelberg game, whose cost is strictly higher than the cost associated with the first setting. The sharp contrast between the two settings underlines the importance of "coordination" in sensor networks with adversarial nodes.

II. PROBLEM DEFINITION
In general, lowercase letters (e.g., x) denote scalars, boldface lowercase (e.g., x) vectors, uppercase (e.g., U, X) matrices and random variables, and boldface uppercase (e.g., X) random vectors. E(·), P(·) and R denote the expectation and probability operators, and the set of real numbers, respectively. Bern(p) denotes the Bernoulli random variable taking value 1 with probability p and −1 with probability 1 − p. The Gaussian distribution with mean µ and covariance matrix R is denoted as N(µ, R).

The sensor network model is illustrated in Figure 1. The underlying source {S(i)} is a sequence of i.i.d. real valued Gaussian random variables with zero mean and variance σ_S². Sensor m ∈ [1 : M + K] observes a sequence {U_m(i)} defined as

U_m(i) = S(i) + W_m(i),    (1)

where {W_m(i)} is a sequence of i.i.d. Gaussian random variables with zero mean and variance σ_{W_m}², independent of {S(i)}. Sensor m ∈ [1 : M + K] can apply an arbitrary Borel measurable function g_m^N : R^N → R^N to its length-N observation sequence U_m so as to generate the sequence of channel inputs X_m(i) = g_m^N(U_m) under the power constraint

lim_{N→∞} (1/N) Σ_{i=1}^N E{X_m²(i)} ≤ P_m.    (2)
Fig. 1. The sensor network model.
The channel output is then given as

Y(i) = Z(i) + Σ_{j=1}^{M+K} X_j(i),    (3)

where {Z(i)} is a sequence of i.i.d. Gaussian random variables of zero mean and variance σ_Z², independent of {S(i)} and {W_m(i)}. The receiver applies a Borel measurable function h^N : R^N → R^N to the received sequence {Y(i)} to minimize the cost, which is measured as the mean squared error (MSE) between the underlying source S and the estimate Ŝ at the receiver:

J(g_m^N(·), h^N(·)) = lim_{N→∞} (1/N) Σ_{i=1}^N E{(S(i) − Ŝ(i))²}    (4)

for m = 1, 2, ..., M + K.

The transmitters g_m^N(·) for m ∈ [1 : M] and the receiver h^N(·) seek to minimize the cost (4), while the adversaries aim to maximize (4) by properly choosing g_k^N(·) for k ∈ [M + 1 : M + K]. We focus on symmetric sensors and a symmetric source, where P_m = P_T and σ_{W_m}² = σ_{W_T}², ∀m ∈ [1 : M], and σ_{W_k}² = σ_{W_A}² and P_k = P_A, ∀k ∈ [M + 1 : M + K].

A transmitter-receiver-adversary policy (g_m^{N*}, g_k^{N*}, h^{N*}) constitutes a saddle-point solution if it satisfies the pair of inequalities

J(g_m^{N*}, g_k^N, h^{N*}) ≤ J(g_m^{N*}, g_k^{N*}, h^{N*}) ≤ J(g_m^N, g_k^{N*}, h^N).    (5)
III. PROBLEM SETTING I

The first scenario is concerned with the setting where the transmitter sensors have the ability to coordinate, i.e., all transmitters and the receiver can agree on an i.i.d. sequence of random variables {γ(i)} generated, for example, by a side channel, the output of which is, however, not available to the adversarial sensors. (An alternative practical method to coordinate is to generate identical pseudo-random numbers at each sensor, based on a pre-determined seed.) The ability to coordinate allows the transmitters and the receiver to agree on randomized encoding mappings. Surprisingly, in this setting, the adversarial sensors also need to coordinate, i.e., agree on an i.i.d. random sequence, denoted as {θ(i)}, to generate the optimal jamming strategy. The saddle-point solution of this problem is presented in the following theorem.

Theorem 1:
Under scenario 1, the optimal encoding function for the transmitters is randomized uncoded transmission:

X_m(i) = γ(i) α_T U_m(i),    1 ≤ m ≤ M,    (6)

where γ(i) is i.i.d. over the alphabet {−1, 1} with γ(i) ∼ Bern(1/2). The optimal jamming function (for the adversarial sensors) is to generate the i.i.d. Gaussian output

X_k(i) = θ(i),    M + 1 ≤ k ≤ M + K,    where θ(i) ∼ N(0, P_A),

independent of the adversarial sensor input U_k(i). The optimal receiver is the Bayesian estimator of S given Y, i.e.,

h(Y(i)) = [M α_T σ_S² / (M² α_T² σ_S² + M α_T² σ_{W_T}² + K² P_A + σ_Z²)] Y(i).    (7)

The cost at this saddle-point is

J_1 = σ_S² (M α_T² σ_{W_T}² + K² P_A + σ_Z²) / (M² α_T² σ_S² + M α_T² σ_{W_T}² + K² P_A + σ_Z²),    (8)

where α_T = √(P_T / (σ_S² + σ_{W_T}²)).
We prove this result by verifying that the mappings in this theorem satisfy the saddle-point criterion given in (5), following the approach in [5].

RHS of (5): Suppose the policy of the adversarial sensors is given as in Theorem 1. Then the communication system at hand becomes identical to the problem considered in [7], whose solution is uncoded communication with deterministic, linear encoders, i.e., X_m(i) = α_T U_m(i). Any probabilistic encoder of the form (6) (irrespective of the density of γ) yields the same cost (8) as the deterministic encoders, and hence is optimal.

LHS of (5): Note that all the adversarial sensors must use the same jamming strategy to maximize the overall cost. Let us derive the overall cost conditioned on the realization of the transmitter mappings (i.e., γ = 1 and γ = −1) used in conjunction with the optimal linear decoders given in (7). If γ = 1,

D_1 = J_1 + ξ E{S X_k} + ψ E{Z X_k}    (9)

for some constants ξ, ψ, and similarly, if γ = −1,

D_{−1} = J_1 − ξ E{S X_k} − ψ E{Z X_k},    (10)

where the overall cost is

D(i) = P(γ(i) = 1) D_1 + P(γ(i) = −1) D_{−1}.    (11)

Clearly, for γ(i) ∼ Bern(1/2) the overall cost J_1 is only a function of the second-order statistics of the adversarial outputs, irrespective of the distribution of {θ(i)}, and hence the solution presented here is indeed a saddle-point.
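As a numerical sanity check (an editorial sketch, not part of the original paper), the closed-form saddle-point cost (8) can be compared against a Monte Carlo simulation of the strategies in Theorem 1. All parameter values and function names below are illustrative assumptions.

```python
import math
import random

def saddle_cost(M, K, sigS2, sigW2, PT, PA, sigZ2):
    """Closed-form saddle-point cost J_1 from (8)."""
    aT2 = PT / (sigS2 + sigW2)
    num = sigS2 * (M * aT2 * sigW2 + K * K * PA + sigZ2)
    den = M * M * aT2 * sigS2 + M * aT2 * sigW2 + K * K * PA + sigZ2
    return num / den

def simulate_cost(M, K, sigS2, sigW2, PT, PA, sigZ2, n=100000, seed=0):
    """Monte Carlo MSE of randomized uncoded transmission (6) against a
    coordinated Gaussian jammer, decoded with the Bayesian gain (7)."""
    rng = random.Random(seed)
    aT = math.sqrt(PT / (sigS2 + sigW2))
    den = M * M * aT * aT * sigS2 + M * aT * aT * sigW2 + K * K * PA + sigZ2
    h = M * aT * sigS2 / den  # receiver gain from (7)
    mse = 0.0
    for _ in range(n):
        S = rng.gauss(0.0, math.sqrt(sigS2))
        gamma = rng.choice((-1.0, 1.0))        # shared Bern(1/2) sign
        theta = rng.gauss(0.0, math.sqrt(PA))  # shared jammer realization
        Y = K * theta + rng.gauss(0.0, math.sqrt(sigZ2))
        for _ in range(M):  # each transmitter sends gamma * aT * (S + W_m)
            Y += gamma * aT * (S + rng.gauss(0.0, math.sqrt(sigW2)))
        Shat = h * gamma * Y  # receiver knows gamma (coordination)
        mse += (S - Shat) ** 2
    return mse / n

if __name__ == "__main__":
    params = (5, 2, 1.0, 0.5, 1.0, 1.0, 0.1)  # M, K, sigS2, sigWT2, PT, PA, sigZ2
    print(saddle_cost(*params), simulate_cost(*params))
```

For these arbitrary parameters the empirical MSE should agree with (8) up to Monte Carlo error.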
Corollary 1: The solution in Theorem 1 is (almost surely) the unique solution for the transmitters, the adversaries (jammers) and the receiver.
Proof:
We start by restating the fact that the optimal solution for the transmitter sensors must be in the randomized form given in (6). Let us prove the properties which were not covered by the proof of the saddle-point.

Gaussianity of {θ(i)}: The choice θ(i) ∼ N(0, P_A) maximizes (4) since it renders the simple uncoded linear mappings asymptotically optimal, i.e., the transmitters cannot improve on the zero-delay performance by utilizing asymptotically high delays. Moreover, the optimal zero-delay performance is always lower bounded by the performance of the linear mappings, which is imposed by the adversarial choice of θ(i) ∼ N(0, P_A).

Independence of {θ(i)} from {S(i)} and {W(i)}: If the adversarial sensors introduce some correlation, i.e., if E{S X_k} ≠ 0 or E{W X_k} ≠ 0, the transmitters can adjust their Bernoulli parameter to decrease the distortion. Hence, the optimal adversarial strategy is to set E{S X_k} = E{W X_k} = 0, which implies independence since all variables are jointly Gaussian.

Choice of the Bernoulli parameter: Note that the optimal choice of the Bernoulli parameter for the transmitters is 1/2, since other choices will not cancel the cross terms E{S X_k} and E{W X_k} in (9) and (10). These cross terms can be exploited by the adversary to increase the cost; hence the optimal strategy for the transmitters is to set γ(i) ∼ Bern(1/2).
Corollary 2: Coordination is essential for the adversarial sensors in the case of coordinating transmitters and receiver, in the sense that lack of adversarial coordination strictly decreases the overall cost.
Proof:
Note that coordination enables the adversarial sensors to create a Gaussian noise with variance K² P_A, yielding the cost in (8). However, without coordination, each sensor can only generate an independent Gaussian random variable, yielding an overall Gaussian noise with variance K P_A and the total cost

J_2 = σ_S² (M α_T² σ_{W_T}² + K P_A + σ_Z²) / (M² α_T² σ_S² + M α_T² σ_{W_T}² + K P_A + σ_Z²) < J_1.    (12)

Hence, coordination of the adversarial sensors strictly increases the overall cost, i.e., coordination is essential for the adversarial sensors in this setting.
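The gap between (8) and (12) can be evaluated directly. The following sketch (with illustrative parameters, not from the paper) treats the total jamming-noise variance as a parameter: K²P_A with adversarial coordination versus KP_A without.

```python
def network_cost(M, sigS2, sigW2, PT, jam_var, sigZ2):
    """Cost (8)/(12) as a function of the total jamming-noise variance."""
    aT2 = PT / (sigS2 + sigW2)
    num = sigS2 * (M * aT2 * sigW2 + jam_var + sigZ2)
    den = M * M * aT2 * sigS2 + M * aT2 * sigW2 + jam_var + sigZ2
    return num / den

def coordination_gap(M, K, sigS2, sigW2, PT, PA, sigZ2):
    """Return (J1, J2): cost with a coordinated (K^2 PA) and an
    uncoordinated (K PA) adversary."""
    J1 = network_cost(M, sigS2, sigW2, PT, K * K * PA, sigZ2)
    J2 = network_cost(M, sigS2, sigW2, PT, K * PA, sigZ2)
    return J1, J2

if __name__ == "__main__":
    J1, J2 = coordination_gap(5, 2, 1.0, 0.5, 1.0, 1.0, 0.1)
    print(J1, J2)  # J2 < J1 whenever K > 1
```

Since (8) is strictly increasing in the jamming-noise variance, J_2 < J_1 for every K > 1, and the two coincide for K = 1, where coordination is vacuous.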
Remark 1: Note that the optimality of this jamming strategy does not depend on the "symmetry" assumption for the adversaries. Hence, it is straightforward to show that our results hold in the more general setting of σ_{W_{k_1}} ≠ σ_{W_{k_2}} and P_{k_1} ≠ P_{k_2} for (k_1, k_2) ∈ [M + 1 : M + K]. We do not include such generalizations for brevity.
Remark 2: We note that the optimal strategies do not depend on the sensor index m; hence the implementation of the optimal strategy, for both the transmitter and adversarial sensors, requires "coordination" among the sensors. This highlights the need for coordination in game theoretic settings in sensor networks. Note that this coordination requirement arises purely from the game theoretic considerations, i.e., the presence of adversarial sensors. In the case where no adversarial node exists, the transmitters do not need to "coordinate". Moreover, as we will show in Theorem 2, if the transmitters cannot coordinate, then the adversarial sensors do not need to coordinate either.
IV. PROBLEM SETTING II

In this section, we focus on the problem where the transmitters do not have the ability to secretly agree on a "coordination" random variable to generate their transmission functions. In this case, it turns out that the optimal transmitter strategy, which is almost surely unique, is uncoded transmission with linear mappings, while the optimal strategy for the adversarial (jamming) sensors is uncoded transmission with linear mappings of the opposite sign of the transmitter functions. The following theorem captures this result.

Theorem 2:
Under scenario 2, the optimal encoding function for the transmitters is uncoded transmission, i.e.,

X_m(i) = α_T U_m(i),    1 ≤ m ≤ M.

The optimal jamming function (for the adversarial sensors) is uncoded transmission with the opposite sign of the transmitters, i.e.,

X_k(i) = α_A U_k(i),    M + 1 ≤ k ≤ M + K.

The optimal decoding function is the Bayesian estimator of S given Y, i.e.,

h(Y(i)) = [(M α_T + K α_A) σ_S² / ((M α_T + K α_A)² σ_S² + M α_T² σ_{W_T}² + K α_A² σ_{W_A}² + σ_Z²)] Y(i).

The cost at this equilibrium is

J_3 = σ_S² (M α_T² σ_{W_T}² + K α_A² σ_{W_A}² + σ_Z²) / ((M α_T + K α_A)² σ_S² + M α_T² σ_{W_T}² + K α_A² σ_{W_A}² + σ_Z²),    (13)

where α_T = √(P_T / (σ_S² + σ_{W_T}²)) and α_A = −√(P_A / (σ_S² + σ_{W_A}²)).
First, we note that the adversarial sensors have knowledge of the transmitter encoding functions, and hence the adversarial encoding functions will be of the same form as the transmitter functions but with a negative sign, i.e., g_A(·) = −√(P_A/P_T) g_T(·), since the outputs are sent over an additive channel (see e.g. [5], [6] for a proof of this result). We next proceed to find the optimal encoding functions for the transmitters, given g_A(·) = −√(P_A/P_T) g_T(·). From the data processing theorem, we must have

I(U_1, U_2, ..., U_{M+K}; Ŝ) ≤ I(X_1, X_2, ..., X_{M+K}; Y).    (14)

The left hand side can be lower bounded as

I(U_1, U_2, ..., U_{M+K}; Ŝ) ≥ R(D),    (15)

where R(D) is derived in Appendix A. The right hand side can be upper bounded as

I(X_1, X_2, ..., X_{M+K}; Y)    (16)
  ≤ Σ_{i=1}^N I(X_1(i), ..., X_{M+K}(i); Y(i))    (17)
  ≤ max Σ_{i=1}^N I(X_1(i), ..., X_{M+K}(i); Y(i))    (18)
  = (1/2) Σ_{i=1}^N log(1 + (1/σ_Z²) 1ᵀ R_X(i) 1),    (19)

where R_X(i) is defined as

{R_X(i)}_{p,r} ≜ E{X_p(i) X_r(i)},    ∀ p, r ∈ [1 : M + K].    (20)

Note that (17) follows from the memoryless property of the channel, and the maximum in (18) is over the joint density of X_1(i), ..., X_{M+K}(i) given the structural constraints on R_X(i) due to the power constraints and the fact that g_A(·) = −√(P_A/P_T) g_T(·). It is well known that the maximum is achieved by the jointly Gaussian density for a given fixed covariance structure [12], yielding (19). Since the logarithm is a monotonically increasing function, the optimal encoding functions g_m^N(·), m ∈ [1 : M], equivalently maximize Σ_{p,r} E{X_p(i) X_r(i)}.
Note that

X_m(i) = g_m^N(U_m),    (21)

and hence the g_m^N(·), m ∈ [1 : M], that maximize

Σ_{p=1}^{M+K} Σ_{r=1}^{M+K} E{g_p^N(U_p) g_r^N(U_r)}    (22)

can be found by invoking Witsenhausen's lemma (given in Appendix B) as g_m^N(U_m) = α_N U_m, where α_N = [α_T, ..., α_T] acts elementwise, i.e., X_m(i) = α_T U_m(i). Finally, we obtain J_3 as an outer bound by equating the left and right hand sides of (14). The linear mappings in Theorem 2 achieve this outer bound and hence are optimal.
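The two settings can be compared numerically (an illustrative sketch; the parameter values are assumptions, not from the paper) by evaluating the cost (13) against the saddle-point cost (8) of Setting I.

```python
import math

def cost_setting1(M, K, sigS2, sigWT2, PT, PA, sigZ2):
    """Saddle-point cost J_1 from (8) (coordinated setting)."""
    aT2 = PT / (sigS2 + sigWT2)
    num = sigS2 * (M * aT2 * sigWT2 + K * K * PA + sigZ2)
    den = M * M * aT2 * sigS2 + M * aT2 * sigWT2 + K * K * PA + sigZ2
    return num / den

def cost_setting2(M, K, sigS2, sigWT2, sigWA2, PT, PA, sigZ2):
    """Cost J_3 from (13): linear transmitters, opposite-sign linear jammers."""
    aT = math.sqrt(PT / (sigS2 + sigWT2))
    aA = -math.sqrt(PA / (sigS2 + sigWA2))
    noise = M * aT * aT * sigWT2 + K * aA * aA * sigWA2 + sigZ2
    den = (M * aT + K * aA) ** 2 * sigS2 + noise
    return sigS2 * noise / den

if __name__ == "__main__":
    M, K, sigS2, sigW2, PT, PA, sigZ2 = 5, 2, 1.0, 0.5, 1.0, 1.0, 0.1
    J1 = cost_setting1(M, K, sigS2, sigW2, PT, PA, sigZ2)
    J3 = cost_setting2(M, K, sigS2, sigW2, sigW2, PT, PA, sigZ2)
    print(J1, J3)
```

For these parameters the Setting II cost exceeds the Setting I cost, consistent with the claim that lack of transmitter coordination strictly increases the cost.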
Corollary 3: Source-channel separation, based on digital compression and communication, is strictly suboptimal for this setting.
Proof:
We first note that the optimal adversarial encoding functions must be the negative of those of the transmitters to achieve the solution derived in Theorem 2. But then the problem at hand becomes equivalent to a problem with no adversary, which was studied in [2], where source-channel separation was shown to be strictly suboptimal. Hence, separate source-channel coding must be suboptimal for our problem. A more direct proof follows from the calculation of the separate source-channel coding performance.
Corollary 4:
Coordination is essential for the transmitter sensors, in the sense that lack of coordination strictly increases the overall cost.
Proof:
The proof follows from the fact that the saddle-point cost of the first setting is strictly lower than the cost of the second, i.e., J_1 < J_3.

V. DISCUSSION AND CONCLUSION
This paper is an initial attempt to analyze game theoretic source-channel coding within sensor networks. We have conducted a game-theoretic analysis of a specific Gaussian sensor network, specialized to symmetric transmitter and adversarial sensors. Depending on the coordination capabilities of the sensors, we have analyzed two problem settings. The first setting allows coordination among the transmitter sensors. The coordination capability enables the transmitters to use randomized encoders. The saddle-point solution to this problem is randomized uncoded transmission for the transmitters and the coordinated generation of i.i.d. Gaussian noise for the adversarial sensors. In the second setting, the transmitter sensors cannot coordinate, and hence they use fixed, deterministic mappings. The solution to this problem is shown to be uncoded communication with linear mappings for both the transmitter and the adversarial sensors, but with opposite signs. We note that the coordination aspect of the problem is entirely due to game-theoretic considerations, i.e., if no adversarial sensors exist, the optimal transmitter encoding functions do not need coordination.

Our analysis has uncovered an interesting result regarding coordination among transmitter nodes and among adversarial nodes. If the transmitter nodes can coordinate, then so must the adversaries, i.e., all must generate the identical realization of an i.i.d. Gaussian noise sequence.
If the transmitters cannot coordinate, the adversarial sensors do not need to coordinate, and this equilibrium is at a strictly higher cost than the one where the transmitters can coordinate.

Several questions still remain open and are currently under investigation, including extensions of the analysis to vector sources and channels, which require a vector form of Witsenhausen's lemma, an important research question in its own right; the asymptotic (in the numbers of sensors M and K) analysis of the results presented here; and the extension of our analysis to asymmetric sensor networks and non-Gaussian settings.

APPENDIX A: THE GAUSSIAN CEO PROBLEM
In the Gaussian CEO problem, an underlying Gaussian source S ∼ N(0, σ_S²) is observed under additive noise W ∼ N(0, R_W) as U = S 1 + W. These noisy observations, i.e., U, must be encoded in such a way that the decoder produces a good approximation to the original underlying source. This problem was proposed in [13] and solved in [14] (see also [15], [16]). A lower bound for this function for non-Gaussian sources within the "symmetric" setting, where all U's have identical statistics, was presented in [17]. Here, we simply extend the results in [15] to asymmetric settings, following the approach in [17], focusing on the MSE distortion measure

D = E{(S − Ŝ)²}    (23)

and the rate

R = min I(U; Ŝ),    (24)

where U = S 1 + W, W ∼ N(0, R_W), and R_W is an (M + K) × (M + K) diagonal matrix whose first M diagonal entries are σ_{W_T}² and whose remaining K entries are σ_{W_A}². The minimization in (24) is over all conditional densities p(ŝ|u) that satisfy (23). The distortion can be written as the sum of two terms:

D = E{(S − T + T − Ŝ)²}    (25)
  = E{(S − T)²} + E{(T − Ŝ)²},    (26)

where T ≜ E{S|U}. Note that (26) holds since

E{(S − T)(Ŝ − T)} = 0,    (27)

as the MMSE error is orthogonal to any function of the observation U. The first term in (26), i.e., the estimation error D_est ≜ E{(S − T)²}, is constant with respect to p(ŝ|u), i.e., a fixed function of U and S. Hence, the minimization is over the densities that satisfy a distortion constraint of the form E{(T − Ŝ)²} ≤ D_rd, and R = min I(U; Ŝ). Hence, we write (26) as

D = D_rd + D_est.    (28)

Note that due to their Gaussianity, T is a sufficient statistic of U for S, i.e., S − T − U forms a Markov chain in that order, and T ∼ N(0, σ_T²). Hence, R = min I(U; Ŝ) = min I(T; Ŝ), where the minimization is over the p(ŝ|t) that satisfy E{(T − Ŝ)²} ≤ D_rd, and where all variables are Gaussian.
This is the classical Gaussian rate distortion problem, and hence

R = (1/2) log(σ_T² / D_rd).    (29)

Noting that

T = E{S U*} (E{U U*})⁻¹ U,    (30)

and using standard linear estimation principles, we obtain

σ_T² = σ_S² · 1 / (1 + σ_{W_A}² σ_{W_T}² / (σ_S² (K σ_{W_T}² + M σ_{W_A}²)))    (31)

and

D_est = σ_S² σ_{W_T}² σ_{W_A}² / (K σ_S² σ_{W_T}² + M σ_S² σ_{W_A}² + σ_{W_T}² σ_{W_A}²).    (32)
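As a consistency check on (29)–(32) (a sketch added for illustration; not in the original), D_est should coincide with the standard MMSE expression 1/(1/σ_S² + M/σ_{W_T}² + K/σ_{W_A}²), and σ_T² + D_est should equal σ_S², since T is the MMSE estimate of S.

```python
import math

def ceo_quantities(M, K, sigS2, sigWT2, sigWA2):
    """Return (sigT2, D_est) per (31)-(32) for the asymmetric Gaussian CEO setup."""
    d_est = (sigS2 * sigWT2 * sigWA2
             / (K * sigS2 * sigWT2 + M * sigS2 * sigWA2 + sigWT2 * sigWA2))
    sigT2 = sigS2 / (1.0 + sigWA2 * sigWT2 / (sigS2 * (K * sigWT2 + M * sigWA2)))
    return sigT2, d_est

def ceo_rate(D, M, K, sigS2, sigWT2, sigWA2):
    """Rate (29), in nats, for a target distortion D > D_est."""
    sigT2, d_est = ceo_quantities(M, K, sigS2, sigWT2, sigWA2)
    d_rd = D - d_est  # remaining rate-distortion budget per (28)
    return 0.5 * math.log(sigT2 / d_rd)

if __name__ == "__main__":
    sigT2, d_est = ceo_quantities(5, 2, 1.0, 0.5, 0.8)
    mmse = 1.0 / (1.0 / 1.0 + 5 / 0.5 + 2 / 0.8)
    print(sigT2 + d_est, d_est, mmse)
```

Both identities hold exactly (up to floating point) for any positive parameter choices, which is a useful check that (31) and (32) were transcribed consistently.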
APPENDIX B: WITSENHAUSEN'S LEMMA

In this section, we recall Witsenhausen's lemma [18], which is used in the proof of Theorem 2.
Lemma 1:
Consider two sequences of i.i.d. random variables X(i) and Y(i), generated from a joint density P_{X,Y}, and two arbitrary (Borel measurable) functions f, g : R → R satisfying

E{f(X)} = E{g(Y)} = 0,    (33)
E{f²(X)} = E{g²(Y)} = 1.    (34)

Define

ρ* ≜ sup_{f,g} E{f(X) g(Y)}.    (35)

Then, for any (Borel measurable) functions f_N, g_N : R^N → R satisfying

E{f_N(X)} = E{g_N(Y)} = 0,    (36)
E{f_N²(X)} = E{g_N²(Y)} = 1,    (37)

for length-N vectors X and Y, we have

sup_{f_N, g_N} E{f_N(X) g_N(Y)} ≤ ρ*.    (38)

Moreover, the supremum above is attained by linear mappings if P_{X,Y} is a bivariate normal density.

(Footnote to Appendix A: Ŝ is also a deterministic function of U, since optimal reconstruction can always be achieved by deterministic codes.)
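To illustrate the lemma in the bivariate normal case (an editorial sketch, not part of the original): the normalized nonlinear pair f = g = sign satisfies (33)–(34), yet its correlation falls short of the ρ achieved by the linear pair f(x) = x, g(y) = y; for signs the exact value is (2/π) arcsin ρ.

```python
import math
import random

def sign_correlation_mc(rho, n=100000, seed=0):
    """Monte Carlo estimate of E{sign(X) sign(Y)} for (X, Y) bivariate normal
    with zero means, unit variances and correlation rho. sign(X) has zero
    mean and unit variance, so it satisfies the constraints (33)-(34)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        total += (1.0 if x >= 0 else -1.0) * (1.0 if y >= 0 else -1.0)
    return total / n

if __name__ == "__main__":
    rho = 0.8
    est = sign_correlation_mc(rho)
    exact = 2.0 / math.pi * math.asin(rho)  # closed form for the sign pair
    print(est, exact, rho)  # the nonlinear pair falls short of rho = rho*
```

Since (2/π) arcsin ρ < ρ for 0 < ρ < 1, the simulation exhibits the claim that linear mappings attain the supremum ρ* = ρ in the bivariate normal case.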
REFERENCES

[1] Z. Xiong, A. D. Liveris, and S. Cheng, "Distributed source coding for sensor networks," IEEE Signal Processing Magazine, vol. 21, no. 5, pp. 80–94, 2004.
[2] M. Gastpar and M. Vetterli, "Power, spatio-temporal bandwidth, and distortion in large sensor networks," IEEE Journal on Selected Areas in Communications, vol. 23, no. 4, pp. 745–754, 2005.
[3] T. Başar, "The Gaussian test channel with an intelligent jammer," IEEE Trans. on Inf. Th., vol. 29, no. 1, pp. 152–157, 1983.
[4] T. Başar and Y. W. Wu, "A complete characterization of minimax and maximin encoder-decoder policies for communication channels with incomplete statistical description," IEEE Trans. on Inf. Th., vol. 31, no. 4, pp. 482–489, 1985.
[5] T. Başar and Y. W. Wu, "Solutions to a class of minimax decision problems arising in communication systems," Journal of Optimization Theory and Applications, vol. 51, no. 3, pp. 375–404, 1986.
[6] R. Bansal and T. Başar, "Communication games with partially soft power constraints," Journal of Optimization Theory and Applications, vol. 61, no. 3, pp. 329–346, 1989.
[7] M. Gastpar, "Uncoded transmission is exactly optimal for a simple Gaussian sensor network," IEEE Trans. on Inf. Th., vol. 54, no. 11, pp. 5247–5251, 2008.
[8] H. Behroozi, F. Alajaji, and T. Linder, "On the optimal power-distortion region for asymmetric Gaussian sensor networks with fading," in IEEE International Symposium on Inf. Th., 2008, pp. 1538–1542.
[9] H. Behroozi, F. Alajaji, and T. Linder, "On the optimal performance in asymmetric Gaussian wireless sensor networks with fading," IEEE Trans. on Signal Processing, vol. 58, no. 4, pp. 2436–2441, 2010.
[10] A. Kashyap, T. Başar, and R. Srikant, "Correlated jamming on MIMO Gaussian fading channels," IEEE Trans. on Inf. Th., vol. 50, no. 9, pp. 2119–2123, 2004.
[11] S. Shafiee and S. Ulukus, "Mutual information games in multiuser channels with correlated jamming," IEEE Trans. on Inf. Th., vol. 55, no. 10, pp. 4598–4607, 2009.
[12] S. N. Diggavi and T. M. Cover, "The worst additive noise under a covariance constraint," IEEE Trans. on Inf. Th., vol. 47, no. 7, pp. 3072–3081, 2001.
[13] H. Viswanathan and T. Berger, "The quadratic Gaussian CEO problem," IEEE Trans. on Inf. Th., vol. 43, no. 5, pp. 1549–1559, 1997.
[14] Y. Oohama, "Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder," IEEE Trans. on Inf. Th., vol. 51, no. 7, pp. 2577–2593, 2005.
[15] Y. Oohama, "The rate-distortion function for the quadratic Gaussian CEO problem," IEEE Trans. on Inf. Th., vol. 44, no. 3, pp. 1057–1070, 1998.
[16] V. Prabhakaran, D. Tse, and K. Ramachandran, "Rate region of the quadratic Gaussian CEO problem," in Proceedings of the International Symposium on Inf. Th., 2004, p. 119.
[17] M. Gastpar, "A lower bound to the AWGN remote rate-distortion function," in IEEE 13th Workshop on Statistical Signal Processing, 2005, pp. 1176–1181.
[18] H. S. Witsenhausen, "On sequences of pairs of dependent random variables," SIAM Journal on Applied Mathematics, vol. 28, no. 1, pp. 100–113, 1975.