Multi-Agent Consensus Subject to Communication and Privacy Constraints
Dipankar Maity and Panagiotis Tsiotras
Abstract—We consider a multi-agent consensus problem in the presence of adversarial agents. The adversaries are able to listen to the inter-agent communications and try to estimate the state of the agents. The agents have a limited bit-rate for communication and are required to quantize the transmitted signal in order to meet the bit-rate constraint of the communication channel. We propose a consensus protocol that is protected against the adversaries, i.e., the expected mean-square error of the adversary's state estimate is lower bounded. In order to deal with the bit-rate constraint, we propose a dynamic quantization scheme that guarantees protected consensus.

Index Terms—Adversarial agents, eavesdropping, limited communication, privacy, protected consensus.
I. INTRODUCTION
Consensus protocols, designed to ensure that all agents in a network come to an agreement, have been widely used in multi-robot systems [1], [2], satellite attitude alignment [3], [4], automated highway systems [5], [6], and several civilian and military applications, including target tracking and source localization [7], [8]. Beyond multi-agent systems in robotics, these problems also have a long history in the field of distributed computation [9], [10].

A typical consensus protocol [11] consists of two components that are executed repeatedly. The first component defines an inter-agent communication scheme for each agent to send/receive data (typically the state of that agent) to/from other agents in the network. The second component defines a computation scheme executed locally by each agent upon receipt of the data to update its own state. These two processes are repeated until the states of all the agents become the same. In order for the agents to reach consensus, each agent follows the designed protocol, including perfect transmission of its state. In realistic scenarios, perfect transmission of the data is difficult to achieve due to the limited data-rate (bit-rate) of the communication channel. As the number of agents in the network grows, the available bit-rate per agent decreases. Therefore, the bit-rate constraint should be incorporated explicitly when designing and analysing consensus protocols for large-scale systems with dense communication architectures.

Privacy is important when the agents of the network deal with sensitive data. Recent technological advancements have made it possible to eavesdrop on inter-agent communications and estimate the state of the network. Such an eavesdropper may not be a part of the network, and may remain undetected, while perfectly acquiring sensitive information regarding the agents. This provides an incentive for the agents to only consider consensus mechanisms that do not require them to share their true state values with the rest of the network.

D. Maity is an Assistant Professor in the Electrical and Computer Engineering department at the University of North Carolina Charlotte, NC, 28223, USA. Email: [email protected]

P. Tsiotras is the David and Andrew Lewis Chair Professor with the Guggenheim School of Aerospace Engineering and the Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, 30332-0150, USA. Email: [email protected]
A. Previous Work
To deal with limited bit-rate, quantized consensus protocols have been proposed. In [12] the authors studied a consensus problem with quantization and time-delay, and they showed that the states of the agents asymptotically converge to a practical consensus set. In [13] the authors considered both quantization and event-triggered control to reduce the use of communication resources. The work of [14] showed that randomized gossip algorithms can solve the average consensus problem with quantized communication. A distributed algorithm in which the agents utilize probabilistically quantized information, i.e., dithered quantization, to communicate with each other is presented in [15]. A dynamic quantization scheme is presented in [16] that exploits the increasing correlation between the values exchanged by the agents to achieve consensus.

In order to achieve privacy, consensus algorithms have been proposed in which the states are masked or encrypted [17] prior to being transmitted over the network. In [18], for example, the authors propose a privacy-preserving average consensus algorithm that guarantees the privacy of the initial states, along with asymptotic consensus of the agents to the average of the initial values, by adding and subtracting random noise during the consensus process. A dynamic approach is presented in [19] to achieve consensus while ensuring that the agents do not reveal their initial states. In [20] and [21] the authors developed distributed privacy-preserving consensus algorithms that enable the agents to asymptotically reach consensus without having to reveal their initial conditions. These methods are designed mainly for average consensus protocols, where agents reach consensus at a value equal to the arithmetic average of their initial states.

B. Motivation and Contribution
In all existing approaches [18]–[21] the focus has been on the privacy of the agents' initial states. However, the states of the agents during the consensus process are also important and should also be protected. Furthermore, if the eavesdroppers are somehow able to access the parameters of the encoding process that mask the state values, they will be able to perfectly decode the states. In addition, previous methods do not consider communication constraints while designing privacy-preserving protocols. Due to the limited bit-rate, if the transmitted messages are quantized to meet this limited bit-rate constraint, there is no guarantee that these methods will ensure that the agents reach consensus. (In practical consensus the states do not converge to a common value; rather, they all remain within a small bounded domain. In average consensus each agent converges to a value that is the empirical average of all the agents' initial values.)

In this article, we expand on previous works on adversarial consensus and consider the case where adversarial agents that do not participate in the consensus objective are able to eavesdrop on the inter-agent communications in an effort to estimate the states of the agents. The success of the eavesdropping mechanism is assumed to be uncertain, that is, an adversary can intercept a transmission at any time with probability less than 1. This adversarial model is different from other types of adversarial models where adversaries participate in the consensus process and purposefully inject false data to affect the outcome of the consensus. It has been shown that in certain cases the agents can detect these malicious nodes (adversaries) in the network and disconnect such nodes from the consensus graph. Once the adversaries are identified and removed, they cannot further influence the consensus outcome [22], [23]. In contrast, the agents in our eavesdropping adversarial setting cannot exclude the adversaries from eavesdropping.
In fact, the agents might not even know whether such eavesdropping adversaries are present or not. Therefore, the agents of the consensus network must follow a consensus protocol which ensures that the adversaries are not able to estimate their states. That is, their true state values must be protected from being revealed to the adversaries at all times.

The protection (privacy) metric in this paper is defined in terms of the estimation error of the agents' states by the adversaries. The objective of the agents is then to design a consensus algorithm that will ensure protection of their states against adversarial eavesdropping. Furthermore, due to the bit-rate constraint of the underlying communication channel, the developed consensus protocol must achieve consensus even under finite data-rate constraints. Protection in the proposed consensus protocol is achieved by not transmitting the state of the agent but rather by transmitting an innovation signal that contains only the information required for the agents to reconstruct the states of their neighboring agents, which, however, is not useful to an eavesdropping adversary to achieve the same, unless it has perfect, uninterrupted interception capabilities. If, at any instant of time, the adversary fails to intercept the transmitted signal, it will be incapable of reconstructing the state of the agent for any future time.

Encryption-based techniques that provide protection in the presence of adversaries have also been reported in [24], [25]. However, encryption-based approaches are generally computationally more expensive and, importantly, they require additional bandwidth to implement, since encryption requires a larger number of bits. Nevertheless, if desired, one can always add an extra layer of protection by further encrypting the innovation signals after quantizing them and prior to transmission. In that context, the proposed approach can be viewed as complementary to these encryption-based approaches.
The major contributions of this work are as follows. First, we show that the standard consensus protocol [26]–[30] is not protected in the presence of adversarial agents. Second, we propose an innovation-communication based consensus (ICC) algorithm, and formally show that the proposed algorithm is ε-protected in the presence of adversaries. That is, the mean-square error in the adversaries' estimate of the agent states is lower bounded by ε times the squared norm of the true state (see Definition 2). We also show that the ICC algorithm does not affect the consensus value. Third, we consider the case where only a finite number of bits is available for inter-agent communications, and derive conditions on the bit-rate to ensure consensus. Fourth, we design dynamic quantizers to quantize the signals prior to transmission. Dynamic quantizers can ensure that each agent reaches consensus, whereas static quantizers, in general, only ensure practical consensus. We show that, if the available bit-rate is greater than a certain threshold, then the bit-rate does not affect the ε-protection guaranteed by the ICC algorithm. Fifth, we show that the agents are able to reach consensus under any data-rate greater than a certain threshold, which solely depends on the parameters of the consensus network.

The rest of the paper is organized as follows. A summary background on distributed consensus is presented in Section II. The model of the adversary considered in this paper and the notion of ε-protection are introduced in Section III. In Section IV, we formally show that the classical consensus algorithm is unprotected. Next, in Section V, we develop an innovation-based inter-agent communication scheme, and we propose a new consensus protocol to achieve ε-protection. In Section VI, we show that the proposed consensus protocol also guarantees consensus under a finite bit-rate constraint by utilizing suitably designed dynamic quantizers.
Simulation results that corroborate the theoretical analysis and reveal some additional desirable features of the proposed framework are discussed in Section VII. Finally, the paper is concluded in Section VIII. To enhance the readability of the main results, all the proofs are presented in the Appendix.

II. BACKGROUND AND PRELIMINARIES
A. Classical Linear Consensus
Consider a multi-agent system with N agents, where the connectivity amongst the agents is dictated by a strongly connected digraph (directed graph) G(V, E), with V = {1, . . . , N} representing the set of agents, and with E ⊆ V × V representing the inter-agent connections, such that E_ij ≜ (i, j) ∈ E if and only if agent i is an incoming neighbor of agent j. The incoming neighbors of agent i are those agents that send information to agent i. The incoming neighbor set of agent i is denoted by N_i ≜ {j | E_ji ∈ E} ⊆ V. The outgoing neighbor set N_i^o ≜ {j | E_ij ∈ E} denotes the set of agents that receive information from agent i. For an undirected graph, E_ij ∈ E if and only if E_ji ∈ E, and N_i = N_i^o. Throughout this paper, and unless stated otherwise, we will simply write neighbor to denote an incoming neighbor.

Definition 1.
For agent i, the edges {E_ij}_{j∈N_i^o} and {E_ji}_{j∈N_i} denote its outgoing links and its incoming links, respectively. Agent i sends its measurements through the links E_ij and receives measurements through the links E_ji.

The agents follow the consensus dynamics [26]–[30]

  x_i(k+1) = x_i(k) + u_i(k),                              (1a)
  u_i(k) = Σ_{j∈N_i} κ_ij (x_j(k) − x_i(k)),               (1b)
  x_i(k+1) = x_i(k) + Σ_{j∈N_i} κ_ij (x_j(k) − x_i(k)),    (1c)

where x_i(k) ∈ R^d is the state of agent i at time k. For simplicity of the exposition, henceforth we consider d = 1. The gains κ_ij ≥ 0 in (1b) are chosen such that Σ_{j∈N_i} κ_ij ≤ 1 holds for all i ∈ V. Agent j shares its state x_j with its outgoing neighbors i ∈ N_j^o to help agent i compute the signal u_i that is used to update the state x_i in (1b)–(1c).

The evolution of the states of all the agents x(k) ≜ [x_1(k), . . . , x_N(k)]^T can then be represented as

  x(k+1) = L x(k),                                          (2)

where, for all i and j, the matrix L is defined as

  [L]_ij = κ_ij,                        j ∈ N_i,
         = 1 − Σ_{j∈N_i} κ_ij ≜ κ_ii,   j = i,
         = 0,                           otherwise.

By construction, the matrix L has an eigenvalue of 1 with corresponding eigenvector 1 ≜ [1, . . . , 1]^T [31]. In order to proceed, let us discuss certain properties of L that will be used in the subsequent analysis.

Theorem 1 ([11]). If G is strongly connected, then the eigenvalues {λ_1, λ_2, . . . , λ_N} of L have the property λ_1 = 1 > |λ_2| ≥ · · · ≥ |λ_N|.

Theorem 2.
Let G be strongly connected. Then, there exist a matrix L̃ and a vector w_ℓ such that the matrix L can be written as

  L = 1 w_ℓ^T + L̃,

where w_ℓ^T 1 = 1 and L̃ 1 = w_ℓ^T L̃ = 0.

Proof.
See Appendix A for the proof.

The vector w_ℓ in Theorem 2 is the left eigenvector of L corresponding to the eigenvalue 1. Define now the scalar c(k) = w_ℓ^T x(k). As a consequence of Theorem 2, we have that c(k+1) = w_ℓ^T x(k+1) = w_ℓ^T L x(k) = w_ℓ^T x(k) = c(k). Therefore, c(k) is a conserved quantity under the consensus protocol (1c), and the agents reach the final consensus value

  lim_{k→∞} x(k) ≜ x_∞ = c(0) 1 = (w_ℓ^T x(0)) 1.    (3)

Let us define the matrix J ≜ 1 w_ℓ^T. Certain properties of the matrix J that will be useful in this paper are summarized in the following proposition.

Proposition 1.
For all k ∈ N,
(i) J is idempotent.
(ii) L̃ J = J L̃ = 0 and L J = J.
(iii) (L − J)^k = L^k − J.
(iv) (L − I) L^k = (L − I)(L − J)^k.
(v) ρ(L − J) = |λ_2| < 1, where ρ(·) denotes the spectral radius.
(vi) lim_{k→∞} (L − J)^k = 0 and lim_{k→∞} L^k = J.

Proof.
Please see Appendix B for the proof.
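As a quick numerical illustration of Theorems 1–2 and Proposition 1, the following sketch builds L for a small directed cycle and checks the eigenstructure and the convergence x(k) → (w_ℓ^T x(0)) 1 of (3). The graph, the gains κ_ij, and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative 4-agent directed cycle: agent i has the single incoming
# neighbor i-1 (mod N). The gains kappa are assumptions for this sketch.
N = 4
kappa = [0.3, 0.5, 0.2, 0.4]
L = np.eye(N)
for i in range(N):
    j = (i - 1) % N              # incoming neighbor of agent i
    L[i, j] += kappa[i]          # [L]_ij = kappa_ij
    L[i, i] -= kappa[i]          # [L]_ii = 1 - sum_j kappa_ij

# Theorem 1: eigenvalue 1 is dominant, all others strictly inside the unit disk.
mags = np.sort(np.abs(np.linalg.eigvals(L)))[::-1]
assert np.isclose(mags[0], 1.0) and mags[1] < 1.0

# Left eigenvector w_l of L for eigenvalue 1, normalized so w_l^T 1 = 1.
vals, vecs = np.linalg.eig(L.T)
w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
w = w / w.sum()
J = np.outer(np.ones(N), w)      # J = 1 w_l^T

assert np.allclose(J @ J, J)                                      # (i)
assert np.allclose(np.linalg.matrix_power(L, 300), J, atol=1e-6)  # (vi)

# Consensus: x(k+1) = L x(k) converges to (w_l^T x(0)) 1, eq. (3).
x = np.array([1.0, -2.0, 4.0, 0.5])
c0 = w @ x
for _ in range(300):
    x = L @ x
assert np.allclose(x, c0 * np.ones(N), atol=1e-8)
print("consensus value:", round(x[0], 6), "predicted w_l^T x(0):", round(c0, 6))
```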
Corollary 1. If L is a doubly stochastic matrix, i.e., all the elements of L are non-negative and each row and each column sums to 1, then w_ℓ = (1/N) 1, J = (1/N) 1 1^T, and x_∞ = (Σ_{i=1}^N x_i(0)/N) 1.

B. Consensus with Quantized Measurements
The agents participating in consensus communicate over a shared channel with a bit-rate of B bits per time-step. Therefore, the signals need to be encoded (quantized) prior to transmission to meet this bit-rate constraint. The available bit-rate B is shared by all the agents. The total number of messages sent over the network at any time instant is N̄ ≜ Σ_{i=1}^N |N_i|. If each encoding requires b_0 overhead bits, then each message can be encoded into b ≜ ⌊B/N̄⌋ − b_0 bits. One of the objectives of this paper is to derive sufficient conditions on b to ensure protected consensus.

In the consensus protocol (1), the states of all the agents are communicated, and hence all prior works on quantized consensus [12]–[16] solely focus on the case where the states of the agents are quantized. These works ensure that (practical) consensus can be achieved if the quantization process satisfies certain conditions. However, there is no formal guarantee that consensus will be achieved if each agent quantizes and shares a signal other than its own state. We will show later on that, in order to retain protection against adversarial eavesdropping, a different signal (ξ) than the state (x) has to be quantized and transmitted. We therefore need to design a new quantization scheme under which consensus can be achieved even though each agent does not quantize and transmit its own state. Later in this paper, we will follow an approach similar to [16] to design dynamic quantizers that achieve quantized consensus with asymptotic protection against adversarial eavesdropping. By a careful design of the dynamic quantizer parameters, the authors in [16] show that all states reach consensus. However, in our problem we cannot directly apply the dynamic quantizers of [16], since those quantizers were designed to quantize the states that follow the consensus protocol (2) on a balanced graph G (i.e., L is a doubly stochastic matrix).
The dynamic quantizers of [16] therefore do not provide any guarantee of whether consensus will be achieved if a signal other than the state itself is quantized, for a general digraph G. In Section VI we show how the approach in [16] can be modified to achieve both consensus and adversarial protection.

III. CONSENSUS IN THE PRESENCE OF ADVERSARIES
We consider the scenario where the agents perform consensus over a compromised communication network and inter-agent communications are intercepted by adversarial agents not participating in the consensus. We further assume that the adversaries do not alter the transmitted messages over the communication links. Instead, their primary objective is to estimate the states of the agents by intercepting the transmitted signals. Such adversarial eavesdropping [32], [33] is common in military and defense applications where the agents decide a rendezvous point/trajectory by using a consensus algorithm. If the adversaries have a perfect estimate of the agents' states, then they can ambush the agents before the agents achieve their goal. Since the adversaries do not alter the transmitted messages, the agents are not able to detect the presence of eavesdropping adversaries. In contrast, when the adversaries actively take part in the consensus by injecting false data to corrupt the consensus outcome, the agents can detect such malicious intent and remove the adversaries from the consensus process; see, for example, [22], [23]. In the presence of eavesdropping adversaries, the agents cannot detect such adversaries, and hence they cannot change the consensus graph (e.g., by removing compromised inter-agent links) to prevent the adversaries from eavesdropping. As a result, protection against such passive eavesdropping adversaries becomes more challenging.
A. Adversarial Eavesdropping Model
For simplicity of the exposition, we consider a scenario where the number of adversaries is the same as the number of agents. Each adversary is assigned to estimate the state of one agent at all times. Without loss of generality, we assign adversary i to agent i, i.e., adversary i will estimate the state of agent i; such an estimate at time k will be denoted as x̂_i^ad(k). The adversaries do not know the initial states of the agents. The only way therefore for adversary i to estimate the state x_i of agent i is by tapping the outgoing communication links of agent i. The outgoing set N_i^o contains all the agents that receive measurements from agent i. Adversary i then eavesdrops on the outgoing links E_ij, j ∈ N_i^o, to intercept the messages sent by agent i.

The eavesdropping mechanism is probabilistic and may lead to failed interception of the message, similar to the model in [34]. Eavesdropping success is modeled by a Bernoulli random variable. Associated with adversary i, let μ_i(k) ∈ {0, 1} be the random variable representing the event of whether eavesdropping at time k was successful for adversary i or not, i.e.,

  μ_i(k) = 1,  successful eavesdropping of agent i at time k,
         = 0,  otherwise.                                   (4)

The sequence of random variables {μ_i(k)}_{k∈N} is i.i.d., with P(μ_i(k) = 1) = γ_i = 1 − P(μ_i(k) = 0), for some γ_i ∈ (0, 1). For simplicity of the analysis, we consider γ_i = γ for all i ∈ V.

B. ε-Protection Against Imperfect Adversaries

Let x̂^ad(k) = [x̂_1^ad(k), . . . , x̂_N^ad(k)]^T denote the collective estimate of the state x(k) of all agents by the adversaries at time k, and let e^ad(k) ≜ x(k) − x̂^ad(k) be the estimation error. It is noteworthy that at any time k the estimated state x̂^ad(k) and the estimation error e^ad(k) are random variables due to the probabilistic nature of the eavesdropping mechanism.
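The interception model (4) is simply an i.i.d. Bernoulli sequence per adversary. A minimal sketch, where γ and the horizon are illustrative assumptions:

```python
import numpy as np

# Sample the eavesdropping indicators mu_i(k) of (4) for one adversary:
# i.i.d. Bernoulli(gamma) over a horizon of K time-steps.
rng = np.random.default_rng(0)
gamma, K = 0.6, 20000
mu = (rng.random(K) < gamma).astype(int)   # mu_i(k) in {0, 1}

# The empirical success rate approaches gamma. Note also that the
# probability that all interceptions up to time k fail is (1 - gamma)^(k+1),
# so eavesdropping eventually succeeds almost surely (used in Section IV).
rate = mu.mean()
assert abs(rate - gamma) < 0.02
print(f"empirical interception rate: {rate:.3f} (gamma = {gamma})")
```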
Since we are interested in protecting the agent state values from being perfectly estimated by the adversaries, we need to define a suitable metric to quantify such privacy. The following definition provides the metric for state protection (or state privacy) that is adopted in this article.

Definition 2.
A consensus algorithm is ε-protected against the adversaries for an initial state x(0) ∈ R^N if there exist ε > 0 and c > 0 such that, for all k ∈ N,

  E[e^ad(k)^T e^ad(k)] ≥ ε ‖x(k)‖² + c.                    (5)

The consensus algorithm is asymptotically ε-protected if

  lim inf_{k→∞} E[e^ad(k)^T e^ad(k)] ≥ ε ‖x_∞‖² + c.       (6)

The algorithm is called 0-protected (asymptotically 0-protected) if there does not exist any ε > 0 such that (5) (respectively, (6)) holds.

The constant c in Definition 2 guarantees that, for a protected consensus algorithm, E[e^ad(k)^T e^ad(k)] > 0 even when x(k) = 0, and similarly, for an asymptotically protected algorithm, lim inf_{k→∞} E[e^ad(k)^T e^ad(k)] > 0 even when x_∞ = 0.

Remark 1.
Based on Definition 2, asymptotic ε-protection is a necessary condition for ε-protection. Therefore, if the agents are prevented from achieving asymptotic ε-protection, then they cannot achieve ε-protection. Thus, it is sufficient for the adversaries to ensure that the agents cannot achieve asymptotic ε-protection for any ε > 0 in order to render the consensus protocol unprotected, i.e., 0-protected.
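To make the metric of Definition 2 concrete, the left-hand side of (5) can be estimated by Monte Carlo: simulate the protocol and the interceptions, then average e^ad(k)^T e^ad(k). The sketch below does this for the classical protocol (2) against an adversary that holds the last intercepted state (the estimator studied in Section IV); the graph, gains, γ, and horizon are illustrative assumptions. The error decays toward zero, previewing why no ε > 0 can satisfy (5) for state-sharing protocols.

```python
import numpy as np

# Monte-Carlo estimate of E[e_ad(k)^T e_ad(k)] for x(k+1) = L x(k), with an
# adversary that keeps the last successfully intercepted state of each agent.
rng = np.random.default_rng(1)
N, gamma, K, trials = 4, 0.5, 60, 2000
kappa = 0.4
L = np.eye(N)
for i in range(N):
    L[i, (i - 1) % N] += kappa   # directed-cycle gains (assumption)
    L[i, i] -= kappa

x0 = np.array([1.0, -2.0, 4.0, 0.5])
mse = np.zeros(K)
for _ in range(trials):
    x, xhat = x0.copy(), np.zeros(N)
    for k in range(K):
        mu = rng.random(N) < gamma      # interception successes, model (4)
        xhat = np.where(mu, x, xhat)    # hold-last-intercepted estimate
        mse[k] += np.sum((x - xhat) ** 2) / trials
        x = L @ x

print("estimated E[e^T e] at k = 0, 10, 59:",
      [round(v, 6) for v in (mse[0], mse[10], mse[59])])
assert mse[59] < 1e-4 < mse[0]   # error vanishes: no (eps, c) satisfies (5)
```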
It is assumed that the adversaries know the communication protocol that the agents are using. In addition, we assume that adversary i does not know the underlying inter-agent communication structure or the parameters {κ_ij}_{j∈N_i}. In many applications, the consensus matrix L is time-varying and randomly chosen (e.g., push-sum algorithms, gossip algorithms [35]), and hence the adversaries cannot estimate L. The scenario considered in this paper is an instance of symmetric partial knowledge, in the sense that all adversaries have the same information. The scenario where adversaries have asymmetric knowledge of the consensus model (e.g., some adversaries know more elements of the L matrix than other adversaries) is also of interest, but it is beyond the scope of this work.

IV. CLASSICAL CONSENSUS IS 0-PROTECTED
We first show that the classical consensus protocol, where each agent shares its state x_i(k), is not protected from eavesdropping. In particular, the adversaries are able to design an algorithm to construct an estimate x̂^ad(k) so that the consensus protocol in (2) is 0-protected. In order to see this, let adversary i follow the estimation dynamics

  x̂_i^ad(k) = μ_i(k) x_i(k) + (1 − μ_i(k)) x̂_i^ad(k−1),   x̂_i^ad(−1) = 0.    (7)

By following (7), each adversary updates its estimate only when it has successfully eavesdropped on its assigned agent (i.e., μ_i(k) = 1); otherwise it retains the previous estimate. The main result of this section is presented in the following theorem.

Theorem 3.
The consensus protocol (2) is 0-protected.

Proof.
The proof is presented in Appendix C.

Intuitively, it makes sense that the classical consensus algorithm is 0-protected for the following reason. We have that P(max_t μ_i(t) = 0) = 0; in other words, there exists a future time when eavesdropping will be successful almost surely. Since x(k) → x_∞, successful interception of a transmission late enough will reveal the consensus value to the adversary.

The estimation dynamics (7) of adversary i, which do not require knowledge of the network parameters κ_ij, ensure that the classical consensus protocol is not asymptotically ε-protected for any ε > 0, and hence it is 0-protected (Remark 1). Since the estimation dynamics (7) depend neither on the network parameters nor on the dynamics (2), we obtain the following corollary from Theorem 3.

Corollary 2.
Any consensus protocol requiring that the agents share their actual state is 0-protected.

In contrast to Theorem 3, which shows that the classical consensus protocol (2) is 0-protected, Corollary 2 states a much stronger result, namely, that any consensus protocol with true state communication is also 0-protected. Corollary 2 motivates the search for new consensus algorithms that do not require the agents to share their actual states in order to achieve ε-protection.

V. INNOVATION-BASED COMMUNICATION SCHEME
In the classical consensus protocol (1), the purpose of sharing the state x_j(k) is to help agent i compute its control input u_i, since u_i contains the term κ_ij x_j(k) in (1b). This sharing of the true state over the compromised link makes the system 0-protected against adversaries, as discussed in the previous section. Therefore, hoping to achieve ε-protection for some ε > 0, the agents must not send their true state information. However, in order to follow the consensus dynamics (1c), agent i must have knowledge of x_j(k) for all j ∈ N_i. In order to achieve both objectives (consensus and ε-protection), we propose the innovation-communication consensus (ICC) protocol in Algorithm 1, which prescribes agent i to communicate an innovation signal ξ_i instead of its actual state x_i.

The innovation signal ξ_i in Algorithm 1 is very similar to the signal u_i in (1b), except for the fact that u_i requires the true state value x_j, whereas ξ_i uses an estimate x̂_ij. Since the agents do not receive the true state values from their neighbors, they need to locally estimate their neighbors' states. In Algorithm 1, x̂_ij(k) denotes the estimate of agent j's state by agent i. However, in order to estimate its neighbors' states, agent i does not require knowledge of agent j's local parameters (e.g., {κ_jℓ}_{ℓ∈N_j}). The sequence of events that take place locally for agent i from time-step k to time-step k+1 is as follows:

  · · · → send ξ_i(k−1), receive {ξ_j(k−1)}_{j∈N_i} → update the estimates x̂_ij(k) → compute ξ_i(k) → update the state x_i(k+1) → send ξ_i(k), receive {ξ_j(k)}_{j∈N_i} → · · ·    (8)

Algorithm 1
Innovation-Communication Consensus (ICC)
1:  Define ξ_i(−1) = x_i(0) for all i ∈ V
2:  for k ≥ 0 do
3:    for all i = 1, 2, . . . , N do
4:      agent i sends ξ_i(k−1) to j ∈ N_i^o
5:      for all j ∈ N_i do
6:        x̂_ij(k) = x̂_ij(k−1) + ξ_j(k−1),   x̂_ij(−1) = 0
7:      end for
8:      ξ_i(k) = Σ_{j∈N_i} κ_ij (x̂_ij(k) − x_i(k))
9:      x_i(k+1) = x_i(k) + ξ_i(k)
10:   end for
11: end for

The following two theorems characterize the key properties of the ICC algorithm. Theorem 4 ensures that the consensus value of the ICC algorithm is the same as that of the classical consensus protocol (1), and Theorem 5 ensures that the ICC consensus protocol is ε-protected.

Theorem 4.
By following the ICC algorithm, x(k) converges to x_∞ given in (3).

Proof.
The proof is presented in Appendix D.

From the proof of Theorem 4 (in particular, from equations (33), (34) and the subsequent discussion) we obtain that x̂_ij(k) = x_j(k) for all i ∈ V and for all j ∈ N_i, where x̂_ij(k) is defined in line 6 of Algorithm 1. Consequently, it follows from line 8 of Algorithm 1 that ξ_i(k) = Σ_{j∈N_i} κ_ij (x_j(k) − x_i(k)). Line 9 of the algorithm can then be rewritten as x_i(k+1) = x_i(k) + Σ_{j∈N_i} κ_ij (x_j(k) − x_i(k)), which is the same as equation (1c). Therefore, from the agents' perspective, the ICC algorithm does not alter the consensus dynamics, and the agents still follow (1c) even though they are communicating via the signals ξ_i(k), which are not the agents' true state values.

In Section IV (discussion after Theorem 3) we noticed that the classical consensus protocol is 0-protected due to the possibility that a late interception of a transmission reveals the consensus value, as the state converges exponentially. However, from the construction of Algorithm 1, we notice that ξ_i(k) → 0 as k increases. Thus, late interception of a transmission does not reveal any further information about x_i(k), since ξ_i(k) becomes less and less correlated with x_i(k) as k increases.

Theorem 5.
The ICC algorithm is (1 − γ)-protected.

Proof.
The proof is presented in Appendix E. We conclude this section with the following remark.
Remark 2.
The construction of the ICC algorithm is based on the idea that an error incurred due to failed eavesdropping cannot be compensated entirely by successful eavesdropping in the future. The amount of protection obtained by the ICC algorithm does not depend on the network structure or the initial state x(0); rather, it solely depends on the eavesdropping success parameter γ. As expected, the amount of protection decreases as γ increases.

VI. CONSENSUS WITH LIMITED DATA-RATE
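As a concrete baseline for the quantized design developed in this section, a minimal unquantized implementation of Algorithm 1 might look as follows; the directed cycle, equal gains, and horizon are illustrative assumptions. Since the local estimates satisfy x̂_ij(k) = x_j(k), the trajectory coincides with the classical dynamics (1c), and for this doubly stochastic L the agents reach the average of the initial states (Corollary 1).

```python
import numpy as np

# Unquantized ICC (Algorithm 1): agents exchange innovations xi_i, never
# their states x_i. Directed cycle with equal gains kappa (assumptions).
N, kappa, K = 4, 0.4, 200
in_nbr = {i: [(i - 1) % N] for i in range(N)}

x = np.array([1.0, -2.0, 4.0, 0.5])
xi_prev = x.copy()                 # xi_i(-1) = x_i(0)   (line 1)
xhat = np.zeros((N, N))            # xhat_ij(-1) = 0
for k in range(K):
    for i in range(N):
        for j in in_nbr[i]:
            xhat[i, j] += xi_prev[j]        # estimate update  (line 6)
    xi = np.array([sum(kappa * (xhat[i, j] - x[i]) for j in in_nbr[i])
                   for i in range(N)])      # innovations      (line 8)
    x = x + xi                              # state update     (line 9)
    xi_prev = xi.copy()

# Equal-gain cycle => L doubly stochastic => average consensus (Corollary 1).
print("final states:", np.round(x, 8), "average of x(0):", 3.5 / 4)
assert np.allclose(x, 3.5 / 4, atol=1e-8)
```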
Although quantized consensus is a well-studied problem in the literature [13]–[16], in all those studies the states x_i(k) themselves are quantized and transmitted. As shown in Section V, in order to ensure protection against adversaries, the signal ξ_i(k) is transmitted instead of the state x_i(k). As a result, agent i receives a quantized version of ξ_j(k) from its neighbors j ∈ N_i. We therefore need to ensure that the agents reach consensus while only using the quantized versions of ξ_i(k). However, not all quantization schemes that ensure satisfaction of the data-rate constraint can also ensure consensus (and stability) [16]. Therefore, it is important to independently verify the existence of a quantizer that ensures consensus and also satisfies the data-rate constraint. With this objective in mind, next we design (dynamic) quantizers for the agents to achieve consensus using the quantized transmitted signals ξ_i(k).

A. Dynamic Fixed-Length Quantizers
A quantizer q : (x_min, x_max) ⊆ R → {0, . . . , 2^b − 1} is a function that maps any x ∈ (x_min, x_max) to a b-bit binary word. For example, a uniform quantizer takes the form

  q(x) = ⌊(x − x_min)/∆⌋,

where ∆ = (x_max − x_min)/2^b. Although q is defined on the interval (x_min, x_max), one can extend the definition of q to the entire real line as follows:

  q̃(x) = q(x),      x ∈ (x_min, x_max),
        = 0,        x ≤ x_min,
        = 2^b − 1,  x ≥ x_max,

where q̃ is the extended version of q. The region outside the interval [x_min, x_max] is the saturation region of the quantizer q̃.

For a dynamic b-bit quantizer, the boundaries x_min and x_max are not fixed and are given as extra inputs. That is, a dynamic b-bit quantizer is a mapping of the form q_d : R × R × R_+ → {0, . . . , 2^b − 1} such that

  q_d(x, α, β) = ⌊(x − α)/∆⌋,  x ∈ (α, α + β),
              = 0,            x ≤ α,
              = 2^b − 1,      x ≥ α + β,

where ∆ = β/2^b. The dynamic nature of the quantizer emerges when the parameters α and/or β are varied over time. By construction, such quantizers are defined over the whole real line R with saturation region (−∞, α] ∪ [α + β, ∞).

While q_d encodes a real-valued signal into a b-bit binary word, the decoding of the quantized signal is performed as follows:

  x^q = q_d(x, α, β) ∆ + ∆/2 + α,                   (9)

where x^q is the decoded version of x after being quantized by q_d. To simplify the terminology in the subsequent sections, we will often refer to x^q as quantized x, for any signal x. Let ∆_q(x) ≜ x − x^q be the quantization error associated with the quantizer q_d. One then observes that, for any x ∈ (α, α + β),

  |∆_q(x)| = |x − x^q| ≤ ∆/2 = β/2^{b+1}.           (10)

Dynamic quantizers for higher dimensions can be defined accordingly. For instance, for any x, α ∈ R^n and β ∈ R^n_+, define q_d(x, α, β) = [q_d(x_1, α_1, β_1), q_d(x_2, α_2, β_2), . . . , q_d(x_n, α_n, β_n)]^T to be an n-dimensional vector-valued quantizer.
Similarly, decoding of an n-dimensional quantized vector is performed element-wise using (9). The quantization error vector ∆^q(x) ≜ [∆^q(x₁), …, ∆^q(x_n)]^T is defined accordingly. We will only consider dynamic quantizers in the subsequent sections and will henceforth suppress the subscript d by simply writing q for brevity.

B. Design of a Dynamic Quantizer
Recall that Algorithm 1 (line 4) requires agent i to send ξ_i(k−1) at time k to its outgoing neighbors N_i. In this subsection, we design dynamic quantizers for the agents to quantize their signals ξ_i(k−1) at time k prior to transmission. Without loss of generality, let us assume that there exist x_max > x_min such that x_i(0) ∈ (x_min, x_max) for all i ∈ V. In order to send ξ_i(k−1) at time step k, agent i uses a dynamic quantizer of the form q_i(·, k) ≜ q(·, α_i(k), β(k)), where

α_i(k+1) = ξ_i^q(k−1) − β(k+1)/2,   (11)

β(k+1)/2 = 3√N β(k)/2^{b+1} + (4√N/2^{b+1}) Σ_{ℓ=0}^{k−1} |λ|^{k−1−ℓ} β(ℓ) + 4|λ|^{k−1} √N max{|x_min|, |x_max|},   (12)

α_i(0) = x_min, β(0) = x_max − x_min,   (13)

β(1) = 6 max{|x_min|, |x_max|},   (14)

where |λ| is the absolute value of the second largest eigenvalue of the matrix L, as denoted in Theorem 1. Let ∆_i^q(·, k) be the quantization error associated with the quantizer q_i(·, k) at time k. Then, from (10), we have

|∆_i^q(ξ, k)| ≤ β(k)/2^{b+1},   (15)

for all ξ ∈ [α_i(k), α_i(k) + β(k)]. As observed in (9), in order to decode the output of the quantizer q(x, α, β), the receiver needs the values of the parameters α and β. Therefore, if agent j receives a quantized measurement from agent i that uses q_i(·, k) to quantize its signal, then agent j needs to know α_i(k) and β(k) in order to decode the quantized version of ξ_i. From the expression of β(k) in (12), notice that all agents can precompute β(k) as long as they have knowledge of λ and N. A slightly conservative value of β(k) may be derived that does not require knowledge of the network parameter λ.
Here we assume that the agents have computed the values of the β(k)'s prior to starting the consensus algorithm.

Recall from (8) that agent i computes ξ_i(k) during the interval [k, k+1] and at time step k+1 agent i quantizes ξ_i(k) using the quantizer q_i(·, k+1) with the corresponding α_i(k+1) = ξ_i^q(k−1) − β(k+1)/2. Agent i then forwards the quantized measurement q_i(ξ_i(k), k+1) to agent j via the link E_ij. During the time interval [k, k+1], agent j received the quantized version of ξ_i(k−1) from agent i, and the decoded signal ξ_i^q(k−1) is thus available to agent j at time k+1. Therefore, at time k+1, agent j is able to compute α_i(k+1), since it knows both ξ_i^q(k−1) and β(k+1). Upon receiving the quantized measurement q_i(ξ_i(k), k+1) at time k+1, agent j can use the decoder defined in (9) to decode it.

In the subsequent sections we show that the use of the dynamic quantizer (11)-(14) indeed leads to consensus. Before showing this fact, we provide the following lemma that characterizes certain useful properties of β(k).

Lemma 1.
If the b-bit dynamic quantizer (11)-(14) is used to encode each transmission, then β(k) → 0 exponentially, provided that b is selected as follows:

b > (1/2) log₂ N + log₂((7 − 3|λ|)/(1 − |λ|)).   (16)

Proof.
The proof is presented in Appendix F.

For the dynamic quantizer q_i(·, k), the parameter β(k) represents the quantization range and is directly associated with the quantization error as per equation (15). Therefore, as β(k) goes to zero, the quantization error goes to zero as well. Lemma 1 provides a sufficient condition on the bit-rate b that ensures the quantization error converges to zero. While Lemma 1 ensures that β(k) converges to zero, it does not comment on the rate of convergence. The following lemma characterizes the rate (η) at which β(k) converges to zero.

Lemma 2.
Let 1 > η > |λ|. If the number of bits used in the quantization scheme satisfies

b = (1/2) log₂ N + log₂((3η + 4 − 3|λ|)/(η(η − |λ|))),   (17)

then

β(k) ≤ β̄ η^k,   (18)

where β̄ is a constant that depends on β(0) and β(1).

Proof.
Please see Appendix G for the proof.
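The bit-rate conditions above are easy to evaluate numerically. The sketch below reads condition (16) as 2^b > √N (7 − 3|λ|)/(1 − |λ|) and condition (17) as 2^b = √N (3η + 4 − 3|λ|)/(η(η − |λ|)), which is our reconstruction of the garbled displays, and iterates the equivalent linear recursion for β(k) derived in Appendix F. The numerical values (N = 50, |λ| = 0.9, the initial β's) are illustrative choices of ours, not from the paper.

```python
import math

def min_bits(N, lam):
    """Smallest integer b satisfying condition (16), read as
    2**b > sqrt(N)*(7 - 3*lam)/(1 - lam)."""
    rhs = 0.5 * math.log2(N) + math.log2((7 - 3 * lam) / (1 - lam))
    return math.floor(rhs) + 1

def bits_for_rate(N, lam, eta):
    """Real-valued bit-rate from (17) that makes beta(k) decay like eta**k."""
    return 0.5 * math.log2(N) + math.log2((3 * eta + 4 - 3 * lam) / (eta * (eta - lam)))

def beta_seq(N, lam, b, beta0, beta1, K):
    """Iterate the linear recursion equivalent to (12) (see Appendix F):
    beta(k+1) = (3*sqrt(N)/2**b + lam)*beta(k) + (sqrt(N)/2**b)*(4 - 3*lam)*beta(k-1)."""
    a = 3 * math.sqrt(N) / 2 ** b + lam
    c = math.sqrt(N) / 2 ** b * (4 - 3 * lam)
    seq = [beta0, beta1]
    for _ in range(K - 1):
        seq.append(a * seq[-1] + c * seq[-2])
    return seq

N, lam = 50, 0.9
b = min_bits(N, lam)                 # -> 9 bits for this example
assert 2 ** b > math.sqrt(N) * (7 - 3 * lam) / (1 - lam)

# beta(k) -> 0 when b satisfies (16) ...
seq = beta_seq(N, lam, b, beta0=2.0, beta1=6.0, K=400)
assert seq[-1] < 1e-3 * seq[1]

# ... and, per (17), demanding a faster decay rate (smaller eta) costs more bits.
assert bits_for_rate(N, lam, 0.92) > bits_for_rate(N, lam, 0.98)
```

This mirrors the trade-off discussed after Theorem 6: a smaller η yields a tighter deviation bound but requires a larger bit-rate b.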
C. Protected Consensus with Dynamic Quantizers
Let x₀ ≜ [x₁, x₂, ⋯, x_N]^T denote the initial states of the agents. Under quantized consensus, let each agent follow the dynamics

x_i(k+1) = x_i(k) + ξ_i^q(k), x_i(0) = x_i^q,   (19a)
ξ_i(k) = Σ_{j∈N_i} κ_ij (x̂_ij(k) − x_i(k)), ξ_i(−1) = x_i,   (19b)
x̂_ij(k) = x̂_ij(k−1) + ξ_j^q(k−1), x̂_ij(−1) = 0, j ∈ N_i,   (19c)

where x_i^q and ξ_i^q(k) represent quantized x_i and ξ_i(k), respectively. Equations (19a)-(19c) are similar to the ones proposed in Algorithm 1, except that now each agent uses the quantized signal ξ_i^q(k) instead of ξ_i(k) and x_i^q instead of x_i. Note that, in the absence of quantization, we have x_i^q = x_i and ξ_i^q(k) = ξ_i(k) for all k, and hence the equations (19a)-(19c) become exactly the same as the ones in Algorithm 1.

Although agent i may have access to x_i and ξ_i, it still uses the quantized values x_i^q and ξ_i^q in its dynamics (19a). This is to ensure that consensus is achieved; surprisingly, using the true values x_i and ξ_i in (19a) may not lead to consensus. A direct consequence of every agent using quantized values in (19a) is that agent i can perfectly estimate agent j's state, i.e., x̂_ij(k) = x_j(k) for all k ∈ N and j ∈ N_i. To see this, notice from equation (19a) that agent j's dynamics follow

x_j(k+1) = x_j(k) + ξ_j^q(k), x_j(0) = x_j^q.   (20)

Comparing agent j's dynamics (20) with (19c), we obtain that x̂_ij(k) = x_j(k) for all k ∈ N, i ∈ V, and j ∈ N_i. Thus, by replacing x̂_ij(k) with x_j(k), we may re-write (19a)-(19c) as follows:

x_i(k+1) = x_i(k) + ξ_i^q(k), x_i(0) = x_i^q,   (21a)
ξ_i(k) = Σ_{j∈N_i} κ_ij (x_j(k) − x_i(k)), ξ_i(−1) = x_i.   (21b)

Before proceeding further, we present the following lemma, which will be paramount in the subsequent analysis.

Lemma 3.
For all k ∈ N, ξ_i(k−1) ∈ [α_i(k), α_i(k) + β(k)].

Proof.
See Appendix H for the proof.
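The structure of the quantized iteration (21a)-(21b) can be illustrated with a toy simulation. This sketch is ours, not the paper's: it substitutes a fixed-step uniform quantizer for the dynamic quantizer (11)-(14), and it invents a small symmetric ring network with weight κ, so it only demonstrates that running the update on quantized innovations drives the agents close to a common value, not the protected scheme itself.

```python
import numpy as np

# Hypothetical ring network of N agents; kappa and the initial interval
# are illustrative choices of ours.
rng = np.random.default_rng(0)
N, kappa, step = 6, 0.3, 1e-3
x0 = rng.uniform(4.0, 5.0, size=N)

def q(v):
    # Stand-in static uniform quantizer with fixed step; the paper's scheme
    # instead shrinks the quantization range beta(k) over time.
    return np.round(v / step) * step

x = q(x0)                                   # x_i(0) = x_i^q, as in (21a)
for k in range(500):
    # (21b): xi_i(k) = sum_j kappa_ij (x_j(k) - x_i(k)) over ring neighbors
    xi = kappa * (np.roll(x, 1) - x) + kappa * (np.roll(x, -1) - x)
    x = x + q(xi)                           # (21a): x_i(k+1) = x_i(k) + xi_i^q(k)

# With a static quantizer the states cluster to within a few quantization
# steps of each other; the dynamic quantizer drives this spread to zero.
assert x.max() - x.min() < 0.05
```

The residual disagreement here is an artifact of the fixed quantization step; Lemma 3 and Theorem 6 show that the paper's dynamic quantizer avoids both saturation and this residual spread.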
Remark 3.
A direct consequence of Lemma 3 is that the signal ξ_i(k−1) does not lie in the saturation region of the quantizer q_i(·, k) and hence, when quantized using q_i(·, k), it obeys the bound

|∆_i^q(ξ_i(k−1), k)| ≤ β(k)/2^{b+1}.   (22)

By substituting the expression of α_i(k+1) from (11), we have from Lemma 3 that |ξ_i(k) − ξ_i^q(k−1)| ≤ β(k+1)/2. If the number of quantization bits b satisfies (16) (or (17)), then, as a result of Lemma 1 (or Lemma 2) and Lemma 3, we have

lim_{k→∞} |ξ_i(k) − ξ_i^q(k−1)| ≤ lim_{k→∞} β(k+1)/2 = 0.   (23)

This shows that ξ(k) converges to ξ^q(k−1). Furthermore, we also obtain

|ξ_i(k) − ξ_i(k−1)| ≤ |ξ_i(k) − ξ_i^q(k−1)| + |ξ_i(k−1) − ξ_i^q(k−1)| ≤ β(k+1)/2 + |∆_i^q(ξ_i(k−1), k)| = β(k+1)/2 + β(k)/2^{b+1},

and, consequently, it follows that

lim_{k→∞} |ξ_i(k) − ξ_i(k−1)| = 0.   (24)

Based on (24) and (23), we conclude that the signals ξ(k) and ξ^q(k) converge. The following theorem shows that, by using dynamic quantizers, the states x(k) converge, but not necessarily to the value x_∞ given in (3). The theorem also characterizes the deviation of the consensus value from x_∞ due to quantization.

Theorem 6.
For the dynamic quantizers designed in (11)-(14), assume that the number of bits b satisfies (17) for some η ∈ (|λ|, 1). Then there exists c ∈ R such that

lim_{k→∞} x(k) = x̃_∞ = c1.   (25)

Furthermore,

‖x̃_∞ − x_∞‖ ≤ (β(0) + β̄η/(1 − η)) √N 2^{−(b+1)},   (26)

where β̄ = ‖[β(1), β(0)]^T‖.

Proof.
See Appendix I for the proof.

From Theorem 6 it follows that the deviation between the desired consensus value x_∞ = (w_ℓ^T x₀)1 and the achieved consensus value x̃_∞ decreases exponentially with the number of available bits b and increases sub-linearly with the number of agents N. The network topology among the agents (i.e., the L matrix) affects the bound in (26) through the quantity η, since η > |λ|. Note that b in (26) depends on η through (17).

In (26), note that the right-hand side is an increasing function of η for η ∈ (|λ|, 1). Thus, to reduce the deviation ‖x̃_∞ − x_∞‖, a lower value of η is desired. At the same time, from (17), b increases as η is reduced. This exhibits a trade-off between the quantization accuracy and the deviation ‖x̃_∞ − x_∞‖. While Theorem 6 proves that consensus is achieved under a finite bit-rate constraint, we now state the following theorem, which proves that the ICC algorithm under a bit-rate constraint is (1 − γ)²-protected.

Theorem 7 (Main Result). Under the assumption that the number of quantization bits b satisfies (17) for some η ∈ (|λ|, 1), the ICC algorithm with finite bit-rate constraint is (1 − γ)²-protected.

Proof.
See Appendix J for the proof.

We conclude this section with the following remark, which presents the total bit-rate required for a network to achieve consensus.
Remark 4.
Recall from Section II-B that, for a network with N̄ = Σ_{i∈V} |N_i| communication links and a total communication bit-rate of B bits per time-step, the number of bits available for each transmission is ⌊B/N̄⌋ − b₀, where b₀ is the number of overhead bits required for each transmission. Therefore, from Lemma 1, a sufficient condition for the network G to achieve consensus with (1 − γ)²-protection under a bit-rate constraint of B bits per time-step is

B > N̄ (b₀ + (1/2) log₂ N + log₂((7 − 3|λ|)/(1 − |λ|))).   (27)

In summary, the analysis of this section shows that the following BICC (Bit-rate Constrained Innovation-Communication Consensus) algorithm (Algorithm 2) achieves (1 − γ)²-protected consensus as long as (27) is satisfied (or, equivalently, there exists η ∈ (|λ|, 1) such that B ≥ N̄(b + b₀), where b satisfies (17)).

Algorithm 2
Bit-rate Constrained ICC (BICC)

Initialize ξ_i(−1) = x_i and x_i(0) = x_i^q for all i ∈ V
for k ∈ N do
  for all i = 1, 2, …, N do
    agent i sends ξ_i(k−1) to j ∈ N_i
    agent i receives ξ_j^q(k−1) from j ∈ N_i
    for all j ∈ N_i do
      x̂_ij(k) = x̂_ij(k−1) + ξ_j^q(k−1), x̂_ij(−1) = 0
    end for
    ξ_i(k) = Σ_{j∈N_i} κ_ij (x̂_ij(k) − x_i(k))
    x_i(k+1) = x_i(k) + ξ_i^q(k)
  end for
end for

Fig. 1. A network of agents where N_i = {j : (i − j) mod 25 = 2}.

VII. SIMULATION RESULTS
We simulate the ICC algorithm on a network G of agents. For this numerical experiment, the initial states of the agents are chosen randomly within an interval, the connectivity E forms a ring structure where agent i sends information to agent i + 2 and receives information from agent i − 2, and so forth. A schematic diagram of the network is shown in Fig. 1. The weights {κ_ij}_{i,j∈V} of the network are generated randomly, and a value for the adversarial eavesdropping probability γ is fixed.

State trajectories for randomly selected agents are plotted in Fig. 2. In the same figure, we also plot the quantities max_i x_i(k) and min_i x_i(k) using black dashed lines. These black dashed lines represent the envelope of the agents' states, i.e., the state trajectory of any agent remains within these two lines. In Fig. 3 we plot the mean estimation errors E[e_i^ad(k)] for randomly chosen adversaries. In the same figure, we also plot the mean estimation error envelope, i.e., max_i E[e_i^ad(k)] and min_i E[e_i^ad(k)], using black dashed lines. To compare the performance of the consensus protocol (2) and the ICC protocol of Algorithm 1, we use two subplots in Fig. 3 to plot the mean estimation errors under both protocols. As seen in Fig. 3, the mean estimation errors go to zero when the agents follow the consensus protocol (2). In contrast, the mean estimation errors converge to the vector (1 − γ)x_∞ when the agents follow the ICC protocol. We also plot the protection level (E[e^ad(k)^T e^ad(k)]/‖x(k)‖²) from adversarial eavesdropping in Fig. 4. We observe that the ICC protocol is at least (1 − γ)²-protected, while the consensus protocol (2) is 0-protected.

Fig. 2. State trajectories x_i(k) of agents are shown using solid colored lines. The upper and lower black dashed lines represent the quantities max_i x_i(k) and min_i x_i(k), respectively. The final consensus value is 4.844.

Fig. 3. Top: Mean estimation error E[e_i^ad(k)] when agents following protocol (2) share their true state value. Bottom: Mean estimation error E[e_i^ad(k)] when agent i following Algorithm 1 shares ξ_i(k).

Fig. 4. Top: E[e^ad(k)^T e^ad(k)] plotted over time (on a logarithmic scale, since it converges to zero fast) when the agents use consensus protocol (2) and share their state. Bottom: E[e^ad(k)^T e^ad(k)] plotted over time when the agents use the ICC protocol and share the signal ξ(k).

Figs. 2–4 show the performance of the system without any communication constraint. Next, we study the same network while considering a finite bit-rate constraint. For this problem, we compute |λ| and evaluate the right-hand side of (16), which yields a sufficient bit-rate b to ensure consensus according to Lemma 1. We conducted several experiments with different values of b; the results in Figs. 2–4 correspond to the case without any quantization, i.e., b = ∞. In Fig. 5 we plot the consensus envelope (max_i x_i(k) and min_i x_i(k)) for different values of b. We observe that for b = 12 and above, the performance is very similar to the unquantized case: given that the consensus value is 4.844 for the unquantized case, the effect of the finite bit-rate is practically indistinguishable. For b = 11, the consensus envelope deviates from the true consensus envelope (b = ∞); furthermore, in this case consensus is not achieved, as the consensus envelope does not converge to a single value. We also considered the case b = 10, for which the consensus dynamics becomes unstable and the consensus envelope diverges; this is plotted in the subplot within Fig. 5.

Fig. 5. The upper and lower dashed lines represent max_i x_i(k) and min_i x_i(k), respectively. Different colors represent different values of b.

In Fig. 6, we plot the mean estimation error envelope (max_i E[e_i^ad(k)] and min_i E[e_i^ad(k)]) and the protection amount E[e^ad(k)^T e^ad(k)]/‖x(k)‖² for several values of b, and we notice that for every choice of b, the proposed ICC algorithm ensures at least (1 − γ)² protection. In fact, the simulated amount of protection is almost twice the theoretically predicted amount. Interestingly, for b = 11, the amount of protection is higher than in any other case; however, in this case the agents do not achieve consensus.

Fig. 6. Top: Mean estimation error envelope max_i E[e_i^ad(k)] and min_i E[e_i^ad(k)] for different values of b. Bottom: The trajectory of E[e^ad(k)^T e^ad(k)] for different values of b.

VIII. CONCLUSION
In this work we revisit the consensus problem among a group of agents. We consider an adversarial model where the adversaries do not participate in the consensus but rather monitor and intermittently intercept the inter-agent communications. The adversarial eavesdropping mechanism is assumed to be imperfect, and the agents exploit this imperfection to devise a new consensus protocol. We define a metric for protection against such adversaries and present an ε-protected consensus algorithm that does not require the agents to share their true state values in order to achieve consensus. We show that the proposed consensus algorithm does not affect the consensus value. Existing consensus protocols that achieve privacy lack formal guarantees of consensus under a finite data-rate constraint. We explicitly consider the bit-rate constraint of the whole network and design dedicated dynamic quantizers for the agents to ensure that consensus can be achieved if the bit-rate is higher than a lower bound that depends on the number of agents and the graph matrix L.

IX. ACKNOWLEDGEMENTS
The authors greatly appreciate the valuable comments and suggestions for improvement from Anastasios Tsiamis and George J. Pappas on an earlier version of this manuscript.

APPENDIX A
PROOF OF THEOREM 1

Consider the Jordan decomposition

L = QΛR,   (28)

where Q = R^{−1} and Λ is the Jordan matrix. Since the communication graph G is strongly connected, the unit eigenvalue of L has algebraic and geometric multiplicity one [11], [31]. Thus, Λ can be written as

Λ = [1, 0_{1×(N−1)}; 0_{(N−1)×1}, Λ₂],

for some matrix Λ₂ ∈ R^{(N−1)×(N−1)}. Thus, we can write L as

L = [q₁, q₂, …, q_N] Λ [r₁^T; r₂^T; …; r_N^T] = q₁r₁^T + [q₂, …, q_N] Λ₂ [r₂^T; …; r_N^T] ≜ q₁r₁^T + L₂,   (29)

where r_i^T is the i-th row of R and q_j is the j-th column of Q, and where we have used the fact that Q = R^{−1} implies r_i^T q_j = 1 if i = j and r_i^T q_j = 0 otherwise. Since L, Q and R are related by the Jordan decomposition (28), each q_i is a right eigenvector of L and each r_i^T is a left eigenvector of L. Furthermore, since r_i^T q₁ = 0 for all i ≠ 1, we obtain from (29) that Lq₁ = q₁, implying that q₁ = q1 for some q ∈ R. By defining w_ℓ = q r₁, we obtain L = 1w_ℓ^T + L₂, where w_ℓ^T L₂ = 0 and w_ℓ^T 1 = r₁^T q₁ = 1.

APPENDIX B
PROOF OF PROPOSITION 1

Statement (iii) holds for k = 1. Now, assume that it holds for some k > 1. Then, by using (i) and (ii),

(L − J)^{k+1} = (L^k − J^k)(L − J) = L^{k+1} − J^k L − L^k J + J^{k+1} = L^{k+1} − J^{k+1}.

For statement (iv), we observe that (L − I)J^k = LJ^k − J^k = J − J = 0 from (i) and (ii). Consequently, by using (iii), we obtain

(L − I)L^k = (L − I)L^k − (L − I)J^k = (L − I)(L^k − J^k) = (L − I)(L − J)^k.

For statement (v), notice from the proof of Theorem 2 that L = J + L₂, where the eigenvalues of L₂ are in the set {0, λ₂, …, λ_N}. Due to Theorem 1, we have that |λ| = max{|λ₂|, …, |λ_N|}, and hence the spectral radius of L − J = L₂ satisfies ρ(L − J) = ρ(L₂) = |λ|. Finally, (vi) is a direct consequence of (v).

APPENDIX C
PROOF OF THEOREM

Define M(k) ≜ I − diag[µ₁(k), ⋯, µ_N(k)].
Then, x̂^ad(k) and the estimation error e^ad(k) = x(k) − x̂^ad(k) follow the dynamics

x̂^ad(k) = (I − M(k))x(k) + M(k)x̂^ad(k−1),
e^ad(k) = M(k)(e^ad(k−1) + x(k) − x(k−1)) = M(k)e^ad(k−1) + M(k)(L − I)x(k−1).   (30)

In order to prove the claim that the classical communication scheme (which transmits the states of the agents) leads to 0-protection, we first show that the random variable e^ad(k) converges to 0 in the mean-square sense as k goes to infinity. Taking expectation on both sides of (30), we obtain

E[e^ad(k)] = (1 − γ)E[e^ad(k−1)] + (1 − γ)(L − I)x(k−1),   (31)

where we have used the fact that M(k) is uncorrelated with e^ad(k−1), since e^ad(k−1) is a function of {M(1), …, M(k−1)} and M(k) is independent of the past {M(s)}_{s<k}. Since 0 < γ < 1 and lim_{k→∞} x(k) → x_∞, E[e^ad(k)] converges to a finite value. In fact,

lim_{k→∞} E[e^ad(k)] = (1 − γ) lim_{k→∞} E[e^ad(k−1)] + (1 − γ)(L − I) lim_{k→∞} x(k−1) (a)= (1 − γ) lim_{k→∞} E[e^ad(k−1)],

where (a) follows from the fact that x_∞ = Lx_∞, and hence lim_{k→∞} E[e^ad(k)] = 0. Furthermore, we have that

Σ^ad(k) ≜ E[e^ad(k)e^ad(k)^T]
= (1 − γ)²Σ^ad(k−1) + (1 − γ)²(L − I)x(k−1)x^T(k−1)(L − I)^T + (1 − γ)²(L − I)x(k−1)E[e^ad(k−1)]^T + (1 − γ)²E[e^ad(k−1)]x^T(k−1)(L − I)^T + γ(1 − γ)diag[{(x_i(k) − x_i(k−1))²}_{i∈V}],   (32)

where in deriving (32) we have used the fact that the ij-th element of the matrix M(k)(L − I)x(k−1)x(k−1)^T(L − I)^T M(k)^T is (1 − µ_i(k))(1 − µ_j(k))(x_i(k) − x_i(k−1))(x_j(k) − x_j(k−1)), and its expected value is (1 − γ)²(x_i(k) − x_i(k−1))(x_j(k) − x_j(k−1)) if i ≠ j and (1 − γ)(x_i(k) − x_i(k−1))² otherwise. Since (1 − γ)² < 1, Σ^ad(k) has stable dynamics, and as the last four terms in (32) converge to the zero matrix, it follows that Σ^ad(k) also converges. Furthermore, by observing that lim_{k→∞}(L − I)x(k) = 0, we have from (32) that lim_{k→∞} Σ^ad(k) = (1 − γ)² lim_{k→∞} Σ^ad(k−1) = 0. Since E[e^ad(k)^T e^ad(k)] = tr(Σ^ad(k)), where tr(·) denotes the trace of a matrix, we obtain lim_{k→∞} E[e^ad(k)^T e^ad(k)] = 0, and consequently, the consensus protocol (2) is (asymptotically) 0-protected.

APPENDIX D
PROOF OF THEOREM

Each agent i with j ∈ N_i maintains an estimate x̂_ij(k) of x_j(k) driven by the common signal ξ_j(k), with identical initialization. Based on the above dynamics, it is easy to see that x̂_{i₁j}(k) = ⋯ = x̂_{i_ℓ j}(k). Therefore, the estimates of the state x_j(k) performed independently by the agents {i₁, ⋯, i_ℓ} coincide, and hence we will simply write x̂_j(k) to denote the estimate of x_j(k) performed by any agent i such that i ∈ N_j. Next, we show that, for all k ∈ N and i ∈ V, x̂_i(k) = x_i(k). First, notice that

x̂_j(k+1) = x̂_j(k) + ξ_j(k),   (33)
x̂_j(0) = ξ_j(−1) = x_j(0),

and from Algorithm 1, the dynamics of agent j is

x_j(k+1) = x_j(k) + ξ_j(k).   (34)

By comparing equations (33) and (34), we have that x̂_j(k) = x_j(k) for all k ∈ N and j ∈ V. Therefore, it follows that ξ_i(k) = Σ_{j∈N_i} κ_ij(x̂_j(k) − x_i(k)) = Σ_{j∈N_i} κ_ij(x_j(k) − x_i(k)), and hence the ICC algorithm update step x_i(k+1) = x_i(k) + ξ_i(k) (line 9, Algorithm 1) is equivalent to

x_i(k+1) = x_i(k) + Σ_{j∈N_i} κ_ij(x_j(k) − x_i(k)).

Therefore, the evolution of the state x(k) under the ICC algorithm is the same as that of the state following the dynamics (1c), and hence, lim_{k→∞} x(k) = x_∞. This completes the proof.

APPENDIX E
PROOF OF THEOREM 5

The dynamics of agent i executing Algorithm 1 is

x_i(k+1) = x_i(k) + ξ_i(k),   (35)

where ξ_i(k) is the new information, or innovation, that agent i accumulates by combining the incoming data from its neighbors N_i, before broadcasting it to its outgoing neighbors through the links E_ij. Let I_i(k) = {y_i(0), y_i(1), …, y_i(k)} denote the information available to adversary i at time k, where y_i(k) = ξ_i(k−1) if µ_i(k) = 1, and y_i(k) = ∅ if µ_i(k) = 0, where ∅ denotes absence of a measurement and µ_i(k) is defined in (4). Recall that the adversaries try to minimize E[e^ad(k)^T e^ad(k)], which quantifies the amount of protection according to Definition 2. If x̂_i^ad(k) is the estimate of x_i(k) performed by adversary i based on the information I_i(k) to minimize E[e_i^ad(k)²], then the optimal estimate is x̂_i^ad(k) = E[x_i(k) | I_i(k)]. Therefore, the estimation dynamics for the adversary becomes

x̂_i^ad(k+1) = x̂_i^ad(k) + µ_i(k)ξ_i(k).   (36)

Consequently, the estimation error e^ad(k) has the dynamics

e^ad(k+1) = e^ad(k) + M(k)ξ(k), e^ad(0) = M(0)ξ(−1),

where M(k) ≜ I − diag[µ₁(k), ⋯, µ_N(k)]. The mean E[e^ad(k)] and the second moment Σ^ad(k) ≜ E[e^ad(k)e^ad(k)^T] of the error follow the dynamics

E[e^ad(k+1)] = E[e^ad(k)] + (1 − γ)ξ(k),   (37)
E[e^ad(0)] = (1 − γ)x(0),
Σ^ad(k+1) = Σ^ad(k) + Γ(k),   (38)
Σ^ad(0) = (1 − γ)²x(0)x(0)^T + γ(1 − γ)diag[x₁(0)², …, x_N(0)²],   (39)
Γ(k) = (1 − γ)²ξ(k)ξ(k)^T + (1 − γ)(E[e^ad(k)]ξ(k)^T + ξ(k)E[e^ad(k)]^T) + γ(1 − γ)diag[ξ₁(k)², …, ξ_N(k)²],   (40)

where we have used the fact that the ij-th element of the matrix M(k)ξ(k)ξ(k)^T M(k)^T is (1 − µ_i(k))(1 − µ_j(k))ξ_i(k)ξ_j(k) and

E[(1 − µ_i(k))(1 − µ_j(k))ξ_i(k)ξ_j(k)] = (1 − γ)²ξ_i(k)ξ_j(k) if i ≠ j, and (1 − γ)ξ_i(k)² if i = j.

Notice from (35) that x(k+1) = x(k) + ξ(k) = Lx(k), and thus

ξ(k) = (L − I)x(k) = (L − I)L^k x(0).   (41)

From the dynamics of E[e^ad(k)] given in (37), we obtain

E[e^ad(k+1)] = E[e^ad(0)] + (1 − γ) Σ_{t=0}^{k} ξ(t) = (1 − γ)x(0) + (1 − γ)(L − I) Σ_{t=0}^{k} L^t x(0) = (1 − γ)L^{k+1}x(0).   (42)

Using (41) and (42), we may simplify (40) as follows:

Γ(k) = (1 − γ)²((L − I)L^k x(0))((L − I)L^k x(0))^T + (1 − γ)²(L^k x(0))((L − I)L^k x(0))^T + (1 − γ)²((L − I)L^k x(0))(L^k x(0))^T + γ(1 − γ)diag[ξ₁(k)², …, ξ_N(k)²]
= (1 − γ)²(L^{k+1}x(0))(L^{k+1}x(0))^T − (1 − γ)²(L^k x(0))(L^k x(0))^T + γ(1 − γ)diag[ξ₁(k)², …, ξ_N(k)²]
= (1 − γ)²(x(k+1)x(k+1)^T − x(k)x(k)^T) + γ(1 − γ)diag[ξ₁(k)², …, ξ_N(k)²].

Therefore, from (38), we obtain

Σ^ad(k) = Σ^ad(0) + Σ_{t=0}^{k−1} Γ(t) = (1 − γ)²x(k)x(k)^T + γ(1 − γ)(diag[x₁(0)², …, x_N(0)²] + Σ_{t=0}^{k−1} diag[ξ₁(t)², …, ξ_N(t)²]).

Thus, for all k, E[e^ad(k)^T e^ad(k)] = tr(Σ^ad(k)) ≥ (1 − γ)²‖x(k)‖² + c, where c = γ(1 − γ)‖x(0)‖², and hence the ICC algorithm is (1 − γ)²-protected.

APPENDIX F
PROOF OF LEMMA 1

From the definition of β(k) in (12), one observes that, for all k ∈ N,

β(k+1)/2 = 3√N β(k)/2^{b+1} + (4√N/2^{b+1}) Σ_{ℓ=0}^{k−1} |λ|^{k−1−ℓ} β(ℓ) + 4|λ|^{k−1}√N max{|x_min|, |x_max|},

and hence

β(k+1)/2 − 3√N β(k)/2^{b+1} = (4√N/2^{b+1}) Σ_{ℓ=0}^{k−1} |λ|^{k−1−ℓ} β(ℓ) + 4|λ|^{k−1}√N max{|x_min|, |x_max|} = (4√N/2^{b+1})β(k−1) + |λ|(β(k)/2 − 3√N β(k−1)/2^{b+1}).

By rearranging terms, we obtain

β(k+1) = (3√N/2^b + |λ|)β(k) + (√N/2^b)(4 − 3|λ|)β(k−1),

and thus,

[β(k+1); β(k)] = [3√N/2^b + |λ|, (√N/2^b)(4 − 3|λ|); 1, 0][β(k); β(k−1)].

Therefore, β(k) follows a linear dynamics with initial conditions given in (13)-(14), and the dynamics is stable if and only if the eigenvalues of the transition matrix lie inside the unit circle. Thus, β(k) → 0 exponentially if

3√N/2^b + |λ| + sqrt((3√N/2^b + |λ|)² + (4√N/2^b)(4 − 3|λ|)) < 2.
The above condition can be simplified to the condition

2^b > √N (7 − 3|λ|)/(1 − |λ|),   (43)

which is equivalent to equation (16) of the lemma.

APPENDIX G
PROOF OF LEMMA 2

Recall from the proof of Lemma 1 that β(k) satisfies the dynamics

[β(k+1); β(k)] = [3√N/2^b + |λ|, (√N/2^b)(4 − 3|λ|); 1, 0][β(k); β(k−1)].

One may verify that the maximum eigenvalue of the transition matrix is

(1/2)(3√N/2^b + |λ|) + (1/2)sqrt((3√N/2^b + |λ|)² + (4√N/2^b)(4 − 3|λ|)).

Substituting the value of b from (17) yields

3√N/2^b + |λ| + sqrt((3√N/2^b + |λ|)² + (4√N/2^b)(4 − 3|λ|)) = 2η.

Therefore, from the linear dynamics of [β(k+1), β(k)]^T, one obtains

‖[β(k+1); β(k)]‖ ≤ η‖[β(k); β(k−1)]‖ ≤ η^k ‖[β(1); β(0)]‖.

Finally, we can write β(k) ≤ β̄η^k, where β̄ = ‖[β(1), β(0)]^T‖. Since β(k) ≥ 0 for all k, as per (12), we have β(k) = |β(k)| ≤ β̄η^k.

APPENDIX H
PROOF OF LEMMA 3

We prove the lemma by induction, starting with the cases k = 0 and 1. Each agent initializes ξ_i(−1) = x_i in (21b). At time k = 0, ξ_i(−1) is quantized using the quantizer q_i(·, 0) with parameters α_i(0) = x_min and β(0) = x_max − x_min. Since x_max ≥ x_i ≥ x_min for all i, it follows that ξ_i(−1) ∈ [α_i(0), α_i(0) + β(0)]. Hence, the lemma is true for k = 0. At k = 1, ξ_i(0) = Σ_{j∈N_i} κ_ij(x_j(0) − x_i(0)) is quantized using the quantizer q_i(·, 1).
Therefore, we obtain

ξ_i(0) − ξ_i^q(−1) = Σ_{j∈N_i} κ_ij(x_j(0) − x_i(0)) − ξ_i^q(−1) = Σ_{j∈N_i} κ_ij(x_j^q − x_i^q) − x_i^q,

which further yields

|ξ_i(0) − ξ_i^q(−1)| ≤ (1 + Σ_{j∈N_i} κ_ij)|x_i^q| + max_{j∈N_i}|x_j^q| Σ_{j∈N_i} κ_ij ≤ 3 max{|x_min|, |x_max|},

where we have used the fact that Σ_{j∈N_i} κ_ij ≤ 1 for all i. From the expressions of α_i(1) and β(1), we notice that ξ_i(0) ∈ [α_i(1), α_i(1) + β(1)], and thus the lemma holds for k = 1.

Let us now assume that the lemma holds for all time instances 0, 1, …, k, i.e., ξ_i(ℓ−1) ∈ [α_i(ℓ), α_i(ℓ) + β(ℓ)] for all ℓ = 0, 1, …, k. Consequently, we have from (15) that |∆_i^q(ξ_i(ℓ−1), ℓ)| ≤ β(ℓ)/2^{b+1} for all ℓ = 0, 1, …, k. We will show that the lemma holds for time k+1. In order to proceed, note from (21b) that, for all k ∈ N,

ξ(k) = (L − I)x(k),   (44)

where ξ(k) = [ξ₁(k), …, ξ_N(k)]^T. For any ξ ∈ R^N, let us define the vector ∆^q(ξ, k) = [∆₁^q(ξ₁, k), …, ∆_N^q(ξ_N, k)]^T, where ∆_i^q(·, k) is the quantization error function of the dynamic quantizer q_i(·, k). Thus, we obtain from (21a) that x_i(k+1) = x_i(k) + ξ_i(k) − ∆_i^q(ξ_i(k), k+1), and hence,

x(k+1) = Lx(k) − ∆^q(ξ(k), k+1) = L^{k+1}x(0) − Σ_{ℓ=0}^{k} L^{k−ℓ} ∆^q(ξ(ℓ), ℓ+1).   (45)

From (44) and (45), we obtain

ξ(k+1) − ξ(k) = (L − I)(x(k+1) − x(k)) = (L − I)²x(k) − (L − I)∆^q(ξ(k), k+1) = (L − I)²(L^k x(0) − Σ_{ℓ=0}^{k−1} L^{k−1−ℓ} ∆^q(ξ(ℓ), ℓ+1)) − (L − I)∆^q(ξ(k), k+1).   (46)

We therefore obtain that

‖ξ(k) − ξ^q(k−1)‖ = ‖(ξ(k) − ξ(k−1)) + ∆^q(ξ(k−1), k)‖
(a)≤ ‖(L − I)²L^{k−1}x(0)‖ + Σ_{ℓ=0}^{k−2}‖(L − I)²L^{k−2−ℓ}∆^q(ξ(ℓ), ℓ+1)‖ + ‖(2I − L)∆^q(ξ(k−1), k)‖
(b)≤ 4|λ|^{k−1}‖x(0)‖ + (4√N/2^{b+1}) Σ_{ℓ=0}^{k−2} |λ|^{k−2−ℓ} β(ℓ+1) + 3√N β(k)/2^{b+1},   (47)

where we have used (46) to substitute ξ(k) − ξ(k−1) in deriving inequality (a); in deriving inequality (b), we have first used Proposition 1 to replace (L − I)L^t with (L − I)(L − J)^t and further used the fact that ρ(L − J) ≤ |λ|. Then, using Theorem 1, we observe that ‖L − I‖ ≤ 2, since all the eigenvalues of L have magnitudes less than or equal to 1, and similarly ‖2I − L‖ ≤ 3. Finally, we have used the assumption that the lemma holds for all ℓ = 0, …, k to conclude that ‖∆^q(ξ(ℓ−1), ℓ)‖ ≤ √N max_i |∆_i^q(ξ_i(ℓ−1), ℓ)| ≤ √N β(ℓ)/2^{b+1}. Since x_i(0) = x_i^q = x_i − ∆_i^q(x_i, 0), we obtain

‖x(0)‖ ≤ √N max_i (|x_i| + |∆_i^q(x_i, 0)|) ≤ √N max{|x_min|, |x_max|} + √N β(0)/2^{b+1}.   (48)

Combining (47) and (48), we obtain

‖ξ(k) − ξ^q(k−1)‖ ≤ 4√N max{|x_min|, |x_max|}|λ|^{k−1} + (4√N/2^{b+1}) Σ_{ℓ=0}^{k−1} |λ|^{k−1−ℓ} β(ℓ) + 3√N β(k)/2^{b+1} ≤ β(k+1)/2.   (49)

Given that α_i(k+1) = ξ_i^q(k−1) − β(k+1)/2 in (11), we obtain from (49) that ξ_i(k) ∈ [α_i(k+1), α_i(k+1) + β(k+1)]. Thus the lemma holds for k+1, and the proof is complete.

APPENDIX I
PROOF OF THEOREM 6

Define φ(k) = x(k+1) − Lx(k). From (21a) we have

x(k+1) = Lx(k) − ∆^q(ξ(k), k+1),   (50)

and hence, φ(k) = −∆^q(ξ(k), k+1). Therefore,

lim_{k→∞} ‖φ(k)‖ = lim_{k→∞} ‖∆^q(ξ(k), k+1)‖ ≤ lim_{k→∞} √N β(k+1)/2^{b+1},

where we used (22). Since b is chosen such that (17) is satisfied, we have that lim_{k→∞} β(k) = 0 due to Lemma 2. Therefore, φ(k) converges to 0, and x(k) converges to a point x̃_∞ such that x̃_∞ = Lx̃_∞.
Since the right eigenvector of L corresponding to the eigenvalue 1 must be of the form c1 for some constant c ∈ R, we have x̃_∞ = c1 for some c, and hence, consensus is achieved.

Recall that without quantization the consensus is achieved at the point x_∞ = (w_ℓ^T x₀)1, where w_ℓ is the left eigenvector of L and x₀ = [x₁, …, x_N]^T is the initial state of the agents. Although x̃_∞ may differ from x_∞, we can bound the difference between x_∞ and x̃_∞ as follows. Use (50) to write

x(k) = L^k x(0) − Σ_{t=0}^{k−1} L^{k−1−t} ∆^q(ξ(t), t+1) = L^k (x₀ − ∆^q(x₀, 0)) − Σ_{t=0}^{k−1} L^{k−1−t} ∆^q(ξ(t), t+1).

Now, recall that x_∞ = lim_{k→∞} L^k x₀ and hence, from the last equation,

‖x̃_∞ − x_∞‖ = ‖lim_{k→∞} x(k) − lim_{k→∞} L^k x₀‖ = lim_{k→∞} ‖x(k) − L^k x₀‖
(a)≤ ‖∆^q(x₀, 0)‖ + lim_{k→∞} Σ_{t=0}^{k−1} ‖∆^q(ξ(t), t+1)‖
(b)≤ √N β(0)/2^{b+1} + lim_{k→∞} (√N β̄/2^{b+1}) η(1 − η^k)/(1 − η) = (β(0) + β̄η/(1 − η)) √N/2^{b+1},

where to obtain inequality (a) we have used the fact that ‖L‖ ≤ 1, and in deriving inequality (b) we have used (22) and Lemma 2. This completes the proof.

APPENDIX J
PROOF OF THEOREM 7

Under the bit-rate constraint, agent i will send a quantized version of ξ_i(k) to its outgoing neighbors. Therefore, the adversary i intercepting the inter-agent communications will have access to ξ_i^q(k) instead of ξ_i(k). Following similar steps as in the proof of Theorem 5, one may verify that adversary i will follow the estimation dynamics (see also (36))

x̂_i^ad(k+1) = x̂_i^ad(k) + µ_i(k)ξ_i^q(k).
Consequently, the estimation error e^ad(k) = x(k) − x̂^ad(k) will have the dynamics

e^ad(k+1) = e^ad(k) + M(k)ξ^q(k), e^ad(−1) = 0,

where recall that M(k) = I − diag[µ₁(k), ⋯, µ_N(k)]. The mean E[e^ad(k)] and the second moment Σ^ad(k) = E[e^ad(k)e^ad(k)^T] of the estimation error follow the dynamics

E[e^ad(k+1)] = E[e^ad(k)] + (1 − γ)ξ^q(k),   (51a)
E[e^ad(−1)] = 0, Σ^ad(k+1) = Σ^ad(k) + Γ(k), Σ^ad(−1) = 0,   (51b)
Γ(k) = (1 − γ)(E[e^ad(k)]ξ^q(k)^T + ξ^q(k)E[e^ad(k)]^T) + (1 − γ)²ξ^q(k)ξ^q(k)^T + γ(1 − γ)diag[ξ₁^q(k)², …, ξ_N^q(k)²],   (51c)

where in deriving the expression of Γ(k) we have used the fact that E[µ_i(k)²] = γ and E[µ_i(k)µ_j(k)] = γ² for i ≠ j and for all k ∈ N. From (21a), notice that x(k+1) = x(k) + ξ^q(k), and thus x(k) = Σ_{t=−1}^{k−1} ξ^q(t) for all k. Therefore, from (51a) we may write

E[e^ad(k+1)] = (1 − γ) Σ_{t=−1}^{k} ξ^q(t) = (1 − γ)x(k+1).   (52)

Using (52), we simplify (51c) as follows:

Γ(k) = (1 − γ)²(x(k)ξ^q(k)^T + ξ^q(k)x(k)^T + ξ^q(k)ξ^q(k)^T) + γ(1 − γ)diag[ξ₁^q(k)², …, ξ_N^q(k)²] = (1 − γ)²(x(k+1)x(k+1)^T − x(k)x(k)^T) + γ(1 − γ)diag[ξ₁^q(k)², …, ξ_N^q(k)²],

for all k ∈ N, and for k = −1, (51c) yields Γ(−1)