Randomized Consensus with Attractive and Repulsive Links
Guodong Shi, Alexandre Proutiere, Mikael Johansson, Karl H. Johansson
Abstract — We study convergence properties of a randomized consensus algorithm over a graph with both attractive and repulsive links. At each time instant, a node is randomly selected to interact with a random neighbor. Depending on whether the link between the two nodes belongs to a given subgraph of attractive or repulsive links, the node update follows a standard attractive weighted average or a repulsive weighted average, respectively. The repulsive update has the opposite sign of the standard consensus update. In this way, it counteracts the consensus formation and can be seen as a model of link faults or malicious attacks in a communication network, or of the impact of trust and antagonism in a social network. Various probabilistic convergence and divergence conditions are established. A threshold condition for the strength of the repulsive action is given for convergence in expectation: when the repulsive weight crosses this threshold value, the algorithm transits from convergence to divergence. An explicit value of the threshold is derived for classes of attractive and repulsive graphs. The results show that a single repulsive link can sometimes drastically change the behavior of the consensus algorithm. They also explicitly show how the robustness of the consensus algorithm depends on the size and other properties of the graphs.
Keywords: random networks, consensus algorithms, gossiping, sensor networks, opinion dynamics, social networks

I. INTRODUCTION
Distributed consensus algorithms have been serving as basic models of information dissemination and aggregation over complex networks throughout a wide range of sciences including social sciences, engineering, and biology, e.g., opinion dynamics over social networks [7]–[11], parallel computation and data fusion for sensor networks [12]–[15], formation control in robotic networks [16]–[19], and flocking of animal groups [20], [21]. In a typical consensus algorithm, a node collects information from a subset of nodes in the network called neighbors and updates its state following an "attractive" rule, a convex combination of its own and the neighbors' previous states. The neighbor relations and communication are often random, which leads to random consensus algorithms. The convergence of random consensus algorithms has been extensively studied in the literature [22]–[37]. A great advantage of distributed consensus seeking lies in the fact that it is robust with respect to link failures and communication noise [30], [32]–[35]. Moreover, due to the attractive update, different probabilistic convergence concepts often coincide for random consensus algorithms [28].
The authors are with the ACCESS Linnaeus Centre, School of Electrical Engineering, Royal Institute of Technology, Stockholm 10044, Sweden. E-mail: [email protected], [email protected], [email protected], [email protected]. This work has been supported in part by the Knut and Alice Wallenberg Foundation, the Swedish Research Council and KTH SRA TNG.
Few works have discussed the influence of "repulsive" links in the network on the consensus formation, despite the many motivations for doing so. In social networks, signed graphs were introduced for formulating the tensions and conflicts between individuals. Links representing interpersonal connections were associated with a sign which indicates whether the mutual relationship is friendship or hostility [44]–[46]. In sensor networks, the communication links can be taken over by attackers so that data can be injected to oppose consensus [42]. In collaborative networks, malicious users may exist whose objective is to damage the network and increase the cost incurred by the legitimate users [43]. In [47], a class of antagonistic interactions modeled as negative weights in the update law was studied in a continuous-time setting, and necessary and sufficient conditions were derived for consensus over the network in absolute value. In [48], a randomized model was formulated where each node executes an attraction, repulsion or neglect update at random when meeting other nodes. In this paper, we study a random consensus model with both attractive and repulsive links in the underlying communication network. Contrary to the model in [48], where attractive and repulsive updates are selected at random, the model in this paper allows the update type to be selected based on predetermined inter-node relations. We use a gossiping model to define how nodes are selected for updating [38]–[41]. In each time slot, a random node is selected to interact with a random neighbor. The node updates its state following a standard attractive weighted average or a repulsive weighted average, determined by whether the link is attractive or repulsive. Our main contributions are the following.
• We establish various conditions for convergence or divergence in expectation, in mean square, and almost surely.
In contrast to the standard consensus model without repulsive updates, some fundamental differences show up in these probabilistic modes.
• We show that under mild assumptions there is a threshold value for the strength of the repulsive action at which the convergence in expectation changes: when the repulsive weight crosses this threshold, the randomized consensus algorithm transits from convergence to divergence. The explicit value of the threshold is derived for classes of attractive and repulsive graphs.
• We establish a no-survivor theorem for almost sure divergence, which indicates that a single repulsive link can drastically change the behavior of the overall network.
The paper is organized as follows. Section II introduces the network model and defines the problem of interest. Section III discusses convergence and divergence in expectation and shows that there is a threshold value for phase transition. Example graphs are studied and explicit threshold values are derived. Sections IV and V present mean-square and almost sure convergence and divergence conditions, respectively. Finally, concluding remarks are given in Section VI.

II. PROBLEM DEFINITION
In this section, we present the considered network model and define the problem of interest. We first recall some basic definitions from graph theory [3] and stochastic matrices [1]. A directed graph (digraph) $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ consists of a finite set $\mathcal{V} = \{1, \dots, n\}$ of nodes and an arc set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. An element $e = (i, j) \in \mathcal{E}$ is an arc from node $i \in \mathcal{V}$ to $j \in \mathcal{V}$. A digraph $\mathcal{G}$ is bidirectional if for every two nodes $i$ and $j$, $(i, j) \in \mathcal{E}$ if and only if $(j, i) \in \mathcal{E}$. A finite square matrix $M = [m_{ij}] \in \mathbb{R}^{n \times n}$ is called stochastic if $m_{ij} \geq 0$ for all $i, j$ and $\sum_j m_{ij} = 1$ for all $i$. A stochastic matrix $M$ is doubly stochastic if also $M^T$ is stochastic. Let $P = [p_{ij}] \in \mathbb{R}^{n \times n}$ be a matrix with nonnegative entries. We can associate a unique digraph $\mathcal{G}_P = (\mathcal{V}, \mathcal{E}_P)$ with $P$ on node set $\mathcal{V}$ such that $(j, i) \in \mathcal{E}_P$ if and only if $p_{ij} > 0$. We call $\mathcal{G}_P$ the induced graph of $P$.

A. Node Pair Selection
Consider a network with node set $\mathcal{V} = \{1, \dots, n\}$, $n \geq 2$. Let the digraph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ denote the underlying graph of the considered network. The underlying graph indicates potential interactions between nodes. We use the asynchronous time model introduced in [40] to describe node interactions. Each node meets other nodes at independent time instances defined by a rate-one Poisson process. This is to say, the inter-meeting times at each node follow a rate-one exponential distribution. Without loss of generality, we can assume that at most one node is active at any given instance. Let $x_i(k) \in \mathbb{R}$ denote the state (value) of node $i$ at the $k$'th meeting slot among all the nodes. Node interactions are characterized by an $n \times n$ matrix $P = [p_{ij}]$, where $p_{ij} \geq 0$ for all $i, j = 1, \dots, n$ and $p_{ij} > 0$ if and only if $(j, i) \in \mathcal{E}$. We assume $P$ to be a stochastic matrix. Without loss of generality we suppose $p_{ii} = 0$ for all $i$. In other words, the underlying graph $\mathcal{G}$ is the induced graph of the matrix $P$. The meeting process is defined as follows.

Definition 1 (Node Pair Selection):
Independent of time and node state, at time $k \geq 0$,
(i) a node $i \in \mathcal{V}$ is drawn with probability $1/n$;
(ii) node $i$ picks node $j$ with probability $p_{ij}$.
In this way, we say arc $(j, i)$ is selected.

B. Attractive and Repulsive Graphs
We assign a partition of the underlying graph $\mathcal{G}$ into two disjoint subgraphs, $\mathcal{G}_{att}$ and $\mathcal{G}_{rep}$, namely, the attractive graph and the repulsive graph. To be precise, $\mathcal{G}_{att} = (\mathcal{V}, \mathcal{E}_{att})$ and $\mathcal{G}_{rep} = (\mathcal{V}, \mathcal{E}_{rep})$ are two graphs over node set $\mathcal{V}$ satisfying $\mathcal{E}_{att} \cap \mathcal{E}_{rep} = \emptyset$ and $\mathcal{E}_{att} \cup \mathcal{E}_{rep} = \mathcal{E}$. Under this graph partition the node pair selection matrix $P$ can be naturally written as $P = P_{att} + P_{rep}$, for which $\mathcal{G}_{att}$ is the induced graph of $P_{att}$, and $\mathcal{G}_{rep}$ is the induced graph of $P_{rep}$. Suppose arc $(j, i)$ is selected at time $k$. Node $j$ keeps its previous state, and node $i$ updates its state following the rule:
(i) (Attraction) If $(j, i) \in \mathcal{E}_{att}$, node $i$ updates as a weighted average with $j$:
$$x_i(k+1) = (1 - \alpha_k) x_i(k) + \alpha_k x_j(k), \quad (1)$$
where $0 \leq \alpha_k \leq 1$.
(ii) (Repulsion) If $(j, i) \in \mathcal{E}_{rep}$, node $i$ updates as a weighted average with $j$, but with a negative coefficient:
$$x_i(k+1) = (1 + \beta_k) x_i(k) - \beta_k x_j(k), \quad (2)$$
where $\beta_k \geq 0$.
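The update rules (1)–(2) are simple to state in code. A minimal pure-Python sketch follows; the uniform neighbor choice stands in for a general selection matrix $P$, and the graphs and parameter values below are illustrative assumptions, not taken from the analysis:

```python
import random

def signed_gossip_step(x, E_att, E_rep, alpha, beta, rng):
    """One meeting slot: node i is drawn uniformly, picks a neighbor j among its
    incoming arcs (uniformly here, standing in for p_ij), and applies rule (1)
    or rule (2) depending on whether arc (j, i) is attractive or repulsive."""
    n = len(x)
    i = rng.randrange(n)
    nbrs = [j for j in range(n) if (j, i) in E_att or (j, i) in E_rep]
    if nbrs:
        j = rng.choice(nbrs)
        if (j, i) in E_att:
            x[i] = (1 - alpha) * x[i] + alpha * x[j]   # attraction, Eq. (1)
        else:
            x[i] = (1 + beta) * x[i] - beta * x[j]     # repulsion, Eq. (2)

# All-attractive pair: with alpha = 1/2 the gap halves at every slot.
rng = random.Random(0)
x = [0.0, 1.0]
for _ in range(100):
    signed_gossip_step(x, {(0, 1), (1, 0)}, set(), 0.5, 1.0, rng)
gap_att = abs(x[0] - x[1])

# Single repulsive arc (0, 1) with beta = 1: the gap doubles whenever node 1 updates.
rng = random.Random(0)
x = [0.0, 1.0]
for _ in range(50):
    signed_gossip_step(x, set(), {(0, 1)}, 0.5, 1.0, rng)
gap_rep = abs(x[0] - x[1])
```

The two toy runs illustrate the opposite roles of the rules: the attractive pair contracts the disagreement geometrically, while the repulsive arc amplifies it.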
C. Problem of Interest

We introduce the following definition.
Definition 2: (i) Consensus convergence for initial value $x^0 \in \mathbb{R}^n$ is achieved
• in expectation if $\lim_{k \to \infty} \big| \mathbb{E}[ x_i(k) - x_j(k) ] \big| = 0$ for all $i$ and $j$;
• in mean square if $\lim_{k \to \infty} \mathbb{E}[ x_i(k) - x_j(k) ]^2 = 0$ for all $i$ and $j$;
• almost surely if $\mathbb{P}\big( \lim_{k \to \infty} | x_i(k) - x_j(k) | = 0 \big) = 1$ for all $i$ and $j$.
(ii) Consensus divergence for initial value $x^0 \in \mathbb{R}^n$ is achieved
• in expectation if $\limsup_{k \to \infty} \max_{i,j} \big| \mathbb{E}[ x_i(k) - x_j(k) ] \big| = \infty$;
• in mean square if $\limsup_{k \to \infty} \max_{i,j} \mathbb{E}[ x_i(k) - x_j(k) ]^2 = \infty$;
• almost surely if for all $M \geq 0$, $\mathbb{P}\big( \limsup_{k \to \infty} \max_{i,j} | x_i(k) - x_j(k) | > M \big) = 1$.
Global consensus convergence in expectation, in mean square, and almost surely are defined when the convergence holds for all $x^0$ in each of the three cases.

III. CONVERGENCE VS. DIVERGENCE IN EXPECTATION
The considered randomized algorithm can be expressed as
$$x(k+1) = W(k)\, x(k), \quad (3)$$
where $W(k)$ is the random matrix satisfying
$$\mathbb{P}\big( W(k) = I - \alpha_k e_i (e_i - e_j)^T \big) = \frac{p_{ij}}{n},\ (j,i) \in \mathcal{E}_{att}; \qquad \mathbb{P}\big( W(k) = I + \beta_k e_i (e_i - e_j)^T \big) = \frac{p_{ij}}{n},\ (j,i) \in \mathcal{E}_{rep}, \quad (4)$$
with $e_m = (0 \cdots 0\ 1\ 0 \cdots 0)^T$ denoting the $n \times 1$ unit vector whose $m$'th component is $1$. Denote $D_{att} = \mathrm{diag}(d_1, \dots, d_n)$ with $d_i = \sum_{j=1}^n [P_{att}]_{ij}$. Denote also $D_{rep} = \mathrm{diag}(\bar d_1, \dots, \bar d_n)$ with $\bar d_i = \sum_{j=1}^n [P_{rep}]_{ij}$. Define $L_{att} = D_{att} - P_{att}$ and $L_{rep} = D_{rep} - P_{rep}$. Then $L_{att}$ and $L_{rep}$ represent the (weighted) Laplacian matrices of the attractive graph $\mathcal{G}_{att}$ and the repulsive graph $\mathcal{G}_{rep}$, respectively. After some simple algebra it can be shown that
$$\mathbb{E}\, W(k) = I - \frac{\alpha_k}{n} L_{att} + \frac{\beta_k}{n} L_{rep} \doteq \bar W_k. \quad (5)$$

A. General Conditions

Introduce $y_i(k) = x_i(k) - \frac{1}{n} \sum_{j=1}^n x_j(k)$. Then $y(k) = (y_1(k), \dots, y_n(k))^T = x(k) - \frac{\mathbf{1}\mathbf{1}^T}{n} x(k)$, with $\mathbf{1} = (1 \cdots 1)^T$ denoting the $n \times 1$ vector each component of which is $1$. Then it is straightforward to see that consensus convergence in expectation is achieved if and only if $\lim_{k \to \infty} \mathbb{E}\, y(k) = 0$, and consensus divergence in expectation is achieved if and only if $\limsup_{k \to \infty} | \mathbb{E}\, y(k) | = \infty$. Let $\lambda_{\max}(A)$ denote the largest eigenvalue of a symmetric matrix $A$. We have the following result.

Proposition 1: Global consensus convergence in expectation is achieved if $\prod_{k=0}^{\infty} \lambda_{\max}\big( \bar W_k^T (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k \big) = 0$.

Proof.
Since $\mathbb{E}\, W(k)$ is a stochastic matrix and the node pair selection is independent of the node states, we obtain
$$\mathbb{E}\, y(k+1) = (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k\, \mathbb{E}\, x(k) = (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k\, \mathbb{E}\, y(k) + (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k \tfrac{\mathbf{1}\mathbf{1}^T}{n}\, \mathbb{E}\, x(k) = (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k\, \mathbb{E}\, y(k) + (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \tfrac{\mathbf{1}\mathbf{1}^T}{n}\, \mathbb{E}\, x(k) = (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k\, \mathbb{E}\, y(k). \quad (6)$$
Thus, noticing that $(I - \frac{\mathbf{1}\mathbf{1}^T}{n})^2 = I - \frac{\mathbf{1}\mathbf{1}^T}{n}$, we have
$$\big| \mathbb{E}\, y(k+1) \big| = \big| (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k\, \mathbb{E}\, y(k) \big| \leq \big\| (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k \big\| \cdot \big| \mathbb{E}\, y(k) \big| = \sqrt{ \lambda_{\max}\big( \bar W_k^T (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k \big) }\ \big| \mathbb{E}\, y(k) \big|, \quad (7)$$
where $\|\cdot\|$ denotes the spectral norm. The desired conclusion follows. $\square$

When $P_{att}$ and $P_{rep}$ are symmetric, an upper bound for $\sqrt{ \lambda_{\max}\big( \bar W_k^T (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k \big) }$ can be easily computed with the help of Weyl's inequality. We propose the following result.

Proposition 2:
Suppose both $P_{att}$ and $P_{rep}$ are symmetric. Global consensus convergence in expectation is achieved if
$$\prod_{k=0}^{\infty} \Big( 1 - \frac{\alpha_k}{n} \lambda_2(L_{att}) + \frac{\beta_k}{n} \lambda_{\max}(L_{rep}) \Big) = 0,$$
where $\lambda_2(L_{att})$ is the second smallest eigenvalue of $L_{att}$.

Proof.
We have
$$\sqrt{ \lambda_{\max}\big( \bar W_k^T (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W_k \big) } = \lambda_{\max}\big( \bar W_k - \tfrac{\mathbf{1}\mathbf{1}^T}{n} \big) \leq \lambda_{\max}\big( I - \tfrac{\mathbf{1}\mathbf{1}^T}{n} - \tfrac{\alpha_k}{n} L_{att} \big) + \frac{\beta_k}{n} \lambda_{\max}(L_{rep}) = 1 - \frac{\alpha_k}{n} \lambda_2(L_{att}) + \frac{\beta_k}{n} \lambda_{\max}(L_{rep}), \quad (8)$$
where the inequality holds from Weyl's inequality. The desired conclusion follows directly from Proposition 1. $\square$

When $\alpha_k$ and $\beta_k$ are time invariant, i.e., there are two constants $0 \leq \alpha \leq 1$, $\beta \geq 0$ such that $\alpha_k \equiv \alpha$ and $\beta_k \equiv \beta$ for all $k$, based on (6), the consensus convergence in expectation is equivalent to the stability of the following LTI system: $\mathbb{E}\, y(k+1) = (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W\, \mathbb{E}\, y(k)$, where $\bar W = I - \frac{\alpha}{n} L_{att} + \frac{\beta}{n} L_{rep}$. Consequently, letting $\rho(A)$ represent the spectral radius of a matrix $A$, i.e., the largest eigenvalue in magnitude, we have the following result.

Proposition 3:
Assume that there are two constants $0 \leq \alpha \leq 1$, $\beta \geq 0$ such that $\alpha_k \equiv \alpha$ and $\beta_k \equiv \beta$ for all $k$.
(i) Global consensus convergence in expectation is achieved if and only if $\rho\big( (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W \big) < 1$.
(ii) Consensus divergence in expectation is achieved for almost all initial values if and only if $\rho\big( (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W \big) > 1$.

B. Phase Transition
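Proposition 3 reduces the question of convergence in expectation to a single spectral-radius test, $\rho\big( (I - \mathbf{1}\mathbf{1}^T/n) \bar W \big)$, which is straightforward to evaluate numerically. The sketch below is pure Python, using power iteration instead of a library eigensolver; the 3-node example and all parameter values are illustrative assumptions:

```python
def laplacian(n, edges, w):
    # Weighted Laplacian of a bidirectional graph; w is the weight per direction.
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        for a, b in ((i, j), (j, i)):
            L[a][a] += w
            L[a][b] -= w
    return L

def spectral_radius(A, iters=3000):
    # Power iteration with sup-norm normalization (A is symmetric here).
    n = len(A)
    v = [float(i + 1) for i in range(n)]
    lam = 0.0
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in u)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in u]
    return lam

def f(alpha, beta, n, E_att, E_rep, w):
    # rho((I - 11^T/n) W_bar), the test quantity of Proposition 3.
    L_att, L_rep = laplacian(n, E_att, w), laplacian(n, E_rep, w)
    Wbar = [[(1.0 if i == j else 0.0)
             - alpha / n * L_att[i][j] + beta / n * L_rep[i][j]
             for j in range(n)] for i in range(n)]
    B = [[Wbar[i][j] - sum(Wbar[k][j] for k in range(n)) / n
          for j in range(n)] for i in range(n)]
    return spectral_radius(B)

# 3 nodes, uniform selection (p_ij = 1/2); edge {1, 2} is repulsive.
rho_small = f(0.5, 0.0, 3, [(0, 1), (0, 2)], [(1, 2)], 0.5)   # no repulsion
rho_large = f(0.5, 10.0, 3, [(0, 1), (0, 2)], [(1, 2)], 0.5)  # strong repulsion
```

In this example `rho_small` equals $11/12 < 1$ (convergence in expectation by Proposition 3(i)), while the strongly repulsive case yields a spectral radius well above $1$ (divergence by Proposition 3(ii)).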
Define $f(\alpha, \beta) \triangleq \rho\big( (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W \big)$. We present the following result.
Proposition 4:
Suppose $\mathcal{G}_{att}$ has a spanning tree and $\mathcal{G}_{rep}$ contains at least one link. Also assume that either of the following two conditions holds:
(i) $L_{att} L_{rep} = L_{rep} L_{att}$;
(ii) $P_{att}$ and $P_{rep}$ are symmetric.
Then for any fixed $\alpha \in (0, 1]$, there exists a threshold value $\beta^\star(\alpha) \geq 0$ such that
• global consensus convergence in expectation, i.e., $f(\alpha, \beta) < 1$, is achieved if $0 \leq \beta < \beta^\star$;
• consensus divergence in expectation for almost all initial values, i.e., $f(\alpha, \beta) > 1$, is achieved if $\beta > \beta^\star$.
When both $P_{att}$ and $P_{rep}$ are symmetric, it turns out that some monotonicity can be established for $f$.

Proposition 5:
Suppose both $P_{att}$ and $P_{rep}$ are symmetric. Then $f(\alpha, \beta)$ is non-increasing in $\alpha$ for $\alpha \in [0, 1]$, and non-decreasing in $\beta$ for $\beta \in [0, \infty)$. The proofs of Propositions 4 and 5 can be found in the appendix.

C. Examples: Threshold Value
We first consider the case when the underlying graph $\mathcal{G}$ is the complete graph $K_n$ and each link is selected with equal probability at any time step. We have the following result.

Proposition 6:
Suppose $P = (n-1)^{-1}(\mathbf{1}\mathbf{1}^T - I)$. Let $(\mathcal{G}_{att}, \mathcal{G}_{rep})$ be a given bidirectional attraction-repulsion partition. Then we have
$$\beta^\star = \max\Big\{ \Big( \frac{n}{(n-1)\,\lambda_{\max}(L_{rep})} - 1 \Big) \alpha,\ 0 \Big\}.$$
The proof of Proposition 6 can be obtained straightforwardly from the following key lemma, which indicates that the Laplacian matrix of the complete graph $K_n$ commutes with that of any other bidirectional graph.

Lemma 1:
Let $K_n$ be the complete graph and $\mathcal{G}$ be any bidirectional graph. Then there always holds $L_{K_n} L_{\mathcal{G}} = L_{\mathcal{G}} L_{K_n}$, where $L_{K_n}$ and $L_{\mathcal{G}}$ are the Laplacian matrices of $K_n$ and $\mathcal{G}$, respectively.
Suppose now that the repulsive graph $\mathcal{G}_{rep}$ is formed by the undirected Erdős–Rényi random graph $G(n, p)$, in the sense that for every unordered pair $\{i, j\}$, $(i, j)$ and $(j, i)$ are repulsive links with probability $p$. This gives us a sequence of random variables $\xi_n = \rho\big( (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W \big)$, $n = 1, 2, \dots$. Note that, induced by $\{\xi_n\}_1^\infty$, the consensus convergence or divergence forms a well-defined random sequence indexed by $n$. We propose the following result.

Proposition 7:
Suppose $P = (n-1)^{-1}(\mathbf{1}\mathbf{1}^T - I)$. Fix $\alpha_k \equiv \alpha \in (0, 1]$ and $\beta_k \equiv \beta \in (0, \infty)$. Let $\mathcal{G}_{rep}$ be formed by the undirected Erdős–Rényi random graph $G(n, p)$. Then
$$p^\star = \frac{\alpha}{\alpha + \beta}$$
is a threshold value regarding the consensus convergence or divergence. To be precise, we have:
a) when $p < p^\star$, global consensus convergence in expectation is achieved in probability, i.e., $\lim_{n \to \infty} \mathbb{P}(\xi_n < 1) = 1$;
b) when $p > p^\star$, consensus divergence in expectation for almost all initial values is achieved in probability, i.e., $\lim_{n \to \infty} \mathbb{P}(\xi_n > 1) = 1$.
The result follows directly from the following lemma.
Lemma 2: [6] Let $\Delta_n$ be the Laplacian of the Erdős–Rényi random graph $G(n, p)$. Then $\lambda_{\max}(\Delta_n)/(pn) \to 1$ in probability.
Next, we discuss the other extreme case when the underlying communication graph is the ring graph, $R_n$, which is nearly the most sparse connected graph. We present the following result.

Proposition 8:
Denote $A_{R_n}$ as the adjacency matrix of $R_n$. Suppose $P = A_{R_n}/2$. Let $(\mathcal{G}_{att}, \mathcal{G}_{rep})$ be a given bidirectional attraction-repulsion partition with $\mathcal{G}_{rep} \neq \emptyset$. Then $\beta^\star \leq \alpha$ for all $n$.

Proof.
It is well known that $L_{R_n}$ has eigenvalues $1 - \cos(2\pi k/n)$, $0 \leq k \leq n - 1$. On the other hand, we have $\lambda_{\max}(L_{rep}) \geq 1$. Based on Weyl's inequality, we obtain
$$\rho\big( (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) \bar W \big) \geq \lambda_{\min}\big( I - \tfrac{\alpha}{n} L_{R_n} - \tfrac{\mathbf{1}\mathbf{1}^T}{n} \big) + \frac{\alpha + \beta}{n} \lambda_{\max}(L_{rep}) = 1 - \frac{\alpha \big( 1 - \cos(2\pi \lfloor n/2 \rfloor / n) \big)}{n} + \frac{\alpha + \beta}{n} \geq 1 + \frac{\beta - \alpha}{n}. \quad (9)$$
This means that $\rho\big( (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) \bar W \big) > 1$ whenever $\beta > \alpha$, which proves the desired conclusion. $\square$
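The closed-form threshold of Proposition 6 can be cross-checked against the spectral-radius test of Proposition 3. Below is a pure-Python sketch; the choice $n = 4$ with a single repulsive edge in $K_4$, and all parameter values, are illustrative assumptions:

```python
def laplacian(n, edges, w):
    # Weighted Laplacian; w is the weight per direction of each bidirectional edge.
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        for a, b in ((i, j), (j, i)):
            L[a][a] += w
            L[a][b] -= w
    return L

def spectral_radius(A, iters=5000):
    # Power iteration (the matrices in this example are symmetric).
    n = len(A)
    v = [float(i + 1) for i in range(n)]
    lam = 0.0
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in u)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in u]
    return lam

def f(alpha, beta, n, E_att, E_rep, w):
    # rho((I - 11^T/n) W_bar), as in Proposition 3.
    L_att, L_rep = laplacian(n, E_att, w), laplacian(n, E_rep, w)
    Wbar = [[(1.0 if i == j else 0.0)
             - alpha / n * L_att[i][j] + beta / n * L_rep[i][j]
             for j in range(n)] for i in range(n)]
    return spectral_radius(
        [[Wbar[i][j] - sum(Wbar[k][j] for k in range(n)) / n
          for j in range(n)] for i in range(n)])

# K_4 with uniform selection (p_ij = 1/3); the single edge {0, 1} is repulsive.
n, w, alpha = 4, 1.0 / 3.0, 0.6
E_rep = [(0, 1)]
E_att = [(i, j) for i in range(n) for j in range(i + 1, n) if (i, j) not in E_rep]
lam_rep = spectral_radius(laplacian(n, E_rep, w))       # equals 2/3 here
beta_star = (n / ((n - 1) * lam_rep) - 1.0) * alpha     # Proposition 6
below = f(alpha, beta_star - 0.05, n, E_att, E_rep, w)  # just below the threshold
above = f(alpha, beta_star + 0.05, n, E_att, E_rep, w)  # just above the threshold
```

For a single repulsive edge in $K_4$ the formula reduces to $\beta^\star = \alpha$, and the computed spectral radius indeed crosses $1$ at that point.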
IV. CONVERGENCE VS. DIVERGENCE IN MEAN SQUARE
This section discusses the mean square convergence and divergence of the considered algorithm. With the Cauchy–Schwarz inequality, it holds that
$$|y_i|^2 = \frac{1}{n^2} \Big| \sum_{j=1}^n (x_i - x_j) \Big|^2 \leq \frac{1}{n} \sum_{j=1}^n \big| x_i - x_j \big|^2. \quad (10)$$
Moreover, we also have
$$\big| x_i - x_j \big|^2 = \big| y_i - y_j \big|^2 \leq 2 \big( |y_i|^2 + |y_j|^2 \big). \quad (11)$$
Therefore, consensus convergence in mean square is achieved if and only if $\lim_{k \to \infty} \mathbb{E}\, |y(k)|^2 = 0$, and consensus divergence in mean square is achieved if and only if $\limsup_{k \to \infty} \mathbb{E}\, |y(k)|^2 = \infty$. We present the following result.

Proposition 9: (i) Global consensus convergence in mean square is achieved if $\prod_{k=0}^{\infty} \lambda_{\max}\big( \mathbb{E}\big[ W_k^T (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) W_k \big] \big) = 0$.
(ii) Consensus divergence in mean square is achieved for almost all initial values if $\prod_{k=0}^{\infty} \lambda_2\big( \mathbb{E}\big[ W_k^T (I - \frac{\mathbf{1}\mathbf{1}^T}{n}) W_k \big] \big) = \infty$, where $\lambda_2$ is the second largest eigenvalue.

Proof.
Noticing that $W_k$ is a stochastic matrix for all possible samples, we obtain
$$\mathbb{E}\big( |y(k+1)|^2 \,\big|\, y(k) \big) = \mathbb{E}\big( y(k)^T W_k^T (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) W_k\, y(k) \,\big|\, y(k) \big) = y(k)^T\, \mathbb{E}\big[ W_k^T (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) W_k \big]\, y(k) \leq \lambda_{\max}\big( \mathbb{E}\big[ W_k^T (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) W_k \big] \big)\, |y(k)|^2, \quad (12)$$
where the second equality holds from the fact that $W_k$ is independent of time and the node states, and the inequality holds from the Rayleigh–Ritz theorem. Similarly we have
$$\mathbb{E}\big( |y(k+1)|^2 \,\big|\, y(k) \big) \geq \lambda_2\big( \mathbb{E}\big[ W_k^T (I - \tfrac{\mathbf{1}\mathbf{1}^T}{n}) W_k \big] \big)\, |y(k)|^2,$$
where the inequality holds from the fact that $y(k)$ is orthogonal to $\mathcal{C} \doteq \{y : y_1 = \cdots = y_n\}$, i.e., $\mathbf{1}^T y(k) = 0$ for all $k$. The desired conclusion follows immediately. $\square$
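Since each slot selects one of finitely many arcs, the expectation in Proposition 9 can be assembled exactly by enumerating the arc selections with their probabilities $p_{ij}/n$. A pure-Python sketch follows; the 3-node example and all parameter values are illustrative assumptions:

```python
def lam_max_ms(n, E_att, E_rep, p, alpha, beta, iters=4000):
    """Assemble E[W^T (I - 11^T/n) W] by enumerating arc selections (arc (j, i)
    is selected with probability p[(j, i)] / n) and estimate its largest
    eigenvalue by power iteration (the matrix is symmetric PSD)."""
    P = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]
    M = [[0.0] * n for _ in range(n)]
    total = 0.0
    for arcs, sgn, gain in ((E_att, -1.0, alpha), (E_rep, 1.0, beta)):
        for (j, i) in arcs:
            # W = I -/+ gain * e_i (e_i - e_j)^T, Eq. (4)
            W = [[(1.0 if a == b else 0.0) for b in range(n)] for a in range(n)]
            W[i][i] += sgn * gain
            W[i][j] -= sgn * gain
            prob = p[(j, i)] / n
            total += prob
            PW = [[sum(P[a][c] * W[c][b] for c in range(n)) for b in range(n)]
                  for a in range(n)]
            for a in range(n):
                for b in range(n):
                    M[a][b] += prob * sum(W[c][a] * PW[c][b] for c in range(n))
    assert abs(total - 1.0) < 1e-9   # arc probabilities must sum to one
    v = [float(i + 1) for i in range(n)]
    lam = 0.0
    for _ in range(iters):
        u = [sum(M[a][b] * v[b] for b in range(n)) for a in range(n)]
        lam = max(abs(x) for x in u)
        v = [x / lam for x in u]
    return lam

# Complete 3-node graph, uniform selection (p_ij = 1/2); edge {1, 2} repulsive.
arcs_att = [(0, 1), (1, 0), (0, 2), (2, 0)]
arcs_rep = [(1, 2), (2, 1)]
p = {a: 0.5 for a in arcs_att + arcs_rep}
weak = lam_max_ms(3, arcs_att, arcs_rep, p, 0.5, 0.05)   # mild repulsion
strong = lam_max_ms(3, arcs_att, arcs_rep, p, 0.5, 5.0)  # strong repulsion
```

With mild repulsion the largest eigenvalue stays below $1$, so the product in Proposition 9(i) vanishes and mean-square convergence follows; with strong repulsion it exceeds $1$.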
V. ALMOST SURE CONVERGENCE VS. DIVERGENCE
We move to the discussion of almost sure consensus convergence and divergence in this section. First we study a special case when $\alpha_k \equiv 1$. The following result holds.

Proposition 10:
Suppose $\alpha_k \equiv 1$ and $\mathcal{G}_{att}$ has a spanning tree. Then for any sequence $\{\beta_k\}_0^\infty$, global consensus is achieved almost surely in finite time, i.e.,
$$\mathbb{P}\big( \exists K\ \mathrm{s.t.}\ x_i(k) = x_j(k),\ i, j \in \mathcal{V},\ k \geq K \big) = 1.$$
Denoting $T = \inf_k \{ k : x_i(k) = x_j(k),\ i, j \in \mathcal{V} \}$ as the first time when consensus is reached, we have $\mathbb{E}\, T \leq (n-1)\big( n/p_* \big)^{n-1}$, where $p_* = \min\{ p_{ij} : p_{ij} > 0 \}$.

Proof.
Introduce $m(k) = \min_{i \in \mathcal{V}} x_i(k)$ and $M(k) = \max_{i \in \mathcal{V}} x_i(k)$, and define $\mathcal{M}(k) = M(k) - m(k)$. Following the considered algorithm, $\mathcal{M}(k)$ is a Markov chain with nonnegative states. The structure of the randomized algorithm gives $\mathbb{P}\big( \mathcal{M}(s) = 0,\ s \geq k \,\big|\, \mathcal{M}(k) = 0 \big) = 1$. Thus, zero is an absorbing state for $\mathcal{M}(k)$. Since $\mathcal{G}_{att}$ has a spanning tree, we can select a node $i_0$ which is a root node in $\mathcal{G}_{att}$. With $\alpha_k \equiv 1$, we have
$$\mathbb{P}\big( x_i(k + n - 1) = x_{i_0}(k),\ i \in \mathcal{V} \big) \geq \Big( \frac{p_*}{n} \Big)^{n-1}, \quad (13)$$
which implies
$$\mathbb{P}\big( \mathcal{M}(k + n - 1) = 0 \,\big|\, \mathcal{M}(k) > 0 \big) \geq \Big( \frac{p_*}{n} \Big)^{n-1}. \quad (14)$$
The Borel–Cantelli lemma ensures that $\mathbb{P}\big( \exists k,\ \mathcal{M}(k(n-1)) = 0 \,\big|\, \mathcal{M}(0) > 0 \big) = 1$, which proves the almost sure finite-time consensus. With (14), the upper bound $(n-1)(n/p_*)^{n-1}$ of $\mathbb{E}\, T$ can be obtained by direct calculation of the expected value of the first success time for a sequence of i.i.d. Bernoulli trials with success probability $(p_*/n)^{n-1}$. The proof is finished. $\square$

Clearly if $\{\beta_k\}_0^\infty$ is sufficiently large, both consensus divergence in expectation and in mean square are possible when $\alpha_k \equiv 1$. Hence with repulsive links, the various notions of convergence are not equivalent, which contrasts with the case where all links are attractive.

Proposition 11:
Suppose $\mathcal{G}_{att}$ has a spanning tree. Global consensus convergence is achieved almost surely if
(i) there exists $\beta^* > 0$ such that $\beta_k \leq \beta^*$ for all $k$;
(ii) $0 \leq \Phi_{k(n-1)} \leq 1$ with $\sum_{k=0}^\infty \Phi_{k(n-1)} = \infty$, where
$$\Phi_s = 1 - \Big( 1 - \frac{1}{2} \prod_{k=s}^{s+n-2} \alpha_k \Big) \Big( \frac{p_*}{n} \Big)^{n-1} - \Big( 1 - \Big( \frac{p_*}{n} \Big)^{n-1} \Big) \prod_{k=s}^{s+n-2} (1 + \beta_k).$$

Proof.
The proof is based on a similar martingale argument as [31]. Let $i_0$ be a root node in $\mathcal{G}_{att}$. Take $k_0 \geq 0$. Assume that $x_{i_0}(k_0) \leq m(k_0) + \frac{1}{2}\mathcal{M}(k_0)$. Since $i_0$ is a root node, there is a node $i_1$ different from $i_0$ such that $(i_0, i_1) \in \mathcal{E}_{att}$. If arc $(i_0, i_1)$ is selected at time $k_0$, we have
$$x_{i_1}(k_0 + 1) \leq (1 - \alpha_{k_0}) M(k_0) + \alpha_{k_0} \big( m(k_0) + \tfrac{1}{2} \mathcal{M}(k_0) \big) = \frac{\alpha_{k_0}}{2} m(k_0) + \Big( 1 - \frac{\alpha_{k_0}}{2} \Big) M(k_0). \quad (15)$$
Similarly, there is a node $i_2$, different from $i_0$ and $i_1$, such that at least one of $(i_0, i_2) \in \mathcal{E}_{att}$ and $(i_1, i_2) \in \mathcal{E}_{att}$ holds. If $(i_0, i_1)$ is selected at time $k_0$, and either $(i_0, i_2)$ or $(i_1, i_2) \in \mathcal{E}_{att}$ is selected at time $k_0 + 1$, we have
$$x_{i_2}(k_0 + 2) \leq (1 - \alpha_{k_0+1})\, x_{i_2}(k_0 + 1) + \alpha_{k_0+1} \max\{ x_{i_0}(k_0 + 1), x_{i_1}(k_0 + 1) \} = \frac{\alpha_{k_0} \alpha_{k_0+1}}{2} m(k_0) + \Big( 1 - \frac{\alpha_{k_0} \alpha_{k_0+1}}{2} \Big) M(k_0). \quad (16)$$
The process can be continued since $\mathcal{G}_{att}$ has a spanning tree, and with a proper choice of arcs in $\mathcal{E}_{att}$ for $k_0 + 2, \dots, k_0 + n - 2$, we have $m(k_0 + n - 1) \geq m(k_0)$ and
$$x_i(k_0 + n - 1) \leq \frac{1}{2} \prod_{k=k_0}^{k_0+n-2} \alpha_k\, m(k_0) + \Big( 1 - \frac{1}{2} \prod_{k=k_0}^{k_0+n-2} \alpha_k \Big) M(k_0),\ i \in \mathcal{V}, \quad (17)$$
which yields
$$\mathbb{P}\Big( \mathcal{M}(k_0 + n - 1) \leq \Big( 1 - \frac{1}{2} \prod_{k=k_0}^{k_0+n-2} \alpha_k \Big) \mathcal{M}(k_0) \Big) \geq \Big( \frac{p_*}{n} \Big)^{n-1}. \quad (18)$$
For the other case with $x_{i_0}(k_0) > m(k_0) + \frac{1}{2}\mathcal{M}(k_0)$, we can apply the same analysis on $z_i(k) = -x_i(k)$, and (18) still holds. On the other hand, the structure of the algorithm ensures that
$$\mathbb{P}\Big( \mathcal{M}(k_0 + n - 1) \leq \prod_{k=k_0}^{k_0+n-2} (1 + \beta_k)\, \mathcal{M}(k_0) \Big) = 1. \quad (19)$$
In light of (18) and (19), we obtain
$$\mathbb{E}\big( \mathcal{M}(k_0 + n - 1) \,\big|\, \mathcal{M}(k_0) \big) \leq \big[ 1 - \Phi_{k_0} \big] \mathcal{M}(k_0). \quad (20)$$
We invoke the supermartingale convergence theorem to complete the final piece of the proof.

Lemma 3: [4] Let $V_k$, $k \geq 0$, be a sequence of nonnegative random variables with $\mathbb{E}\, V_0 < \infty$. If $\mathbb{E}\big( V_{k+1} \,\big|\, V_0, \dots, V_k \big) \leq (1 - c_k) V_k$ with $c_k \in [0, 1]$ and $\sum_{k=0}^\infty c_k = \infty$, then $\lim_{k \to \infty} V_k = 0$ almost surely.

With Lemma 3 and (20), we have $\lim_{k \to \infty} \mathcal{M}\big( (n-1)k \big) = 0$ almost surely if $0 \leq \Phi_{k(n-1)} \leq 1$ and $\sum_{k=0}^\infty \Phi_{k(n-1)} = \infty$. Noticing the boundedness of $\beta_k$, the desired conclusion follows immediately. $\square$

For almost sure divergence, we first present the following result, which indicates that as long as almost sure divergence is achieved, no node can "survive" if the attractive graph is strongly connected.
Proposition 12:
Suppose $\mathcal{G}_{att}$ is strongly connected and consensus divergence is achieved almost surely. Suppose also there exists $\alpha_* > 0$ such that $\alpha_k \geq \alpha_*$ for all $k$. Then
$$\mathbb{P}\Big( \limsup_{k \to \infty} |x_i(k) - x_j(k)| > M^* \Big) = 1 \quad (21)$$
for all $i$, $j$, and $M^* \geq 0$.

Proof.
Suppose consensus divergence is achieved almost surely. Fix $M^* > 0$. Then we can find two nodes $i^*$ and $j^*$ such that, almost surely, there exists a sequence $k_1 < k_2 < \dots$ satisfying $x_{i^*}(k_m) = m(k_m)$, $x_{j^*}(k_m) = M(k_m)$, and $|x_{i^*}(k_m) - x_{j^*}(k_m)| = \mathcal{M}(k_m) \geq 2M^*$ for all $m = 1, 2, \dots$. Now take the two nodes $i^*$ and $j^*$. Since $\mathcal{G}_{att}$ is strongly connected, there are directed paths $i^* i_1 \dots i_{\tau_1}$ and $j^* j_1 \dots j_{\tau_2}$ in $\mathcal{G}_{att}$, where $0 \leq \tau_1, \tau_2 \leq n - 1$. We impose a recursive argument to establish an upper bound for $x_{i^*}(k_m + (\tau_1 + 1) d_0)$. Take $\mu \in (0, 1)$ and define $d_0 = \inf\{ d : (1 - \alpha_*)^d \leq \mu \}$. Suppose $(i^*, i_1)$ is selected at time steps $k_m, \dots, k_m + d_0 - 1$, $(i_1, i_2)$ is selected at time steps $k_m + d_0, \dots, k_m + 2d_0 - 1$, etc. We can obtain by recursive calculation that
$$x_{i^*}\big( k_m + (\tau_1 + 1) d_0 \big) \leq (1 - \mu)^{\tau_1 + 1} m(k_m) + \big( 1 - (1 - \mu)^{\tau_1 + 1} \big) M(k_m). \quad (22)$$
A lower bound for $x_{j^*}\big( k_m + (\tau_1 + \tau_2 + 2) d_0 \big)$ can be established using the same argument:
$$x_{j^*}\big( k_m + (\tau_1 + \tau_2 + 2) d_0 \big) \geq (1 - \mu)^{\tau_2 + 1} M(k_m) + \big( 1 - (1 - \mu)^{\tau_2 + 1} \big) m(k_m) \quad (23)$$
for a proper selection of arcs during time steps $k_m + (\tau_1 + 1) d_0, \dots, k_m + (\tau_1 + \tau_2 + 2) d_0 - 1$. We can compute the probability of such a selection of the sequence of arcs in the previous recursive estimate, and we conclude from (22) and (23) that
$$\mathbb{P}\Big( x_{j^*}(k_m + D) - x_{i^*}(k_m + D) \geq \big[ (1 - \mu)^{\tau_1 + 1} + (1 - \mu)^{\tau_2 + 1} - 1 \big] \mathcal{M}(k_m) \geq \big[ (1 - \mu)^{\tau_1 + 1} + (1 - \mu)^{\tau_2 + 1} - 1 \big] \cdot 2M^* \Big) \geq \Big( \frac{p_*}{n} \Big)^{(\tau_1 + \tau_2 + 2) d_0}, \quad (24)$$
where $D = (\tau_1 + \tau_2 + 2) d_0$. Since $\mu$ is arbitrarily chosen, we can always assume $(1 - \mu)^{\tau_1 + 1} + (1 - \mu)^{\tau_2 + 1} - 1 > 1/2$. Thus, (24) reduces to
$$\mathbb{P}\Big( x_{j^*}(k_m + D) - x_{i^*}(k_m + D) \geq M^* \Big) \geq \Big( \frac{p_*}{n} \Big)^{(\tau_1 + \tau_2 + 2) d_0},\ m = 1, 2, \dots. \quad (25)$$
Noting the fact that the events $\big\{ x_{j^*}(k_m + D) - x_{i^*}(k_m + D) \geq M^* \big\}$ are determined by the node pair selection process, which is independent of time and node states, the Borel–Cantelli lemma ensures that, almost surely, we can select an infinite subsequence $k_{m_s}$, $s = 1, 2, \dots$, such that $x_{j^*}(k_{m_s} + D) - x_{i^*}(k_{m_s} + D) \geq M^*$ for all $s = 1, 2, \dots$. This proves that
$$\mathbb{P}\Big( \limsup_{k \to \infty} |x_{i^*}(k) - x_{j^*}(k)| > M^* \Big) = 1 \text{ for all } M^* \geq 0. \quad (26)$$
Since $i^*$ and $j^*$ are arbitrarily chosen, we have completed the proof. $\square$

Proposition 12 shows that divergence is also propagated among the network between any two nodes. Denote $p^* = \max\{ p_{ij} : p_{ij} > 0 \}$ and $E_0 = |\mathcal{E}_{att}|$. We end the discussion of this section by presenting the following almost sure divergence result.

Proposition 13:
Suppose $\mathcal{G}_{rep}$ is weakly connected. Global consensus divergence is achieved almost surely if
(i) there exists $\alpha^* < 1$ such that $\alpha_k \leq \alpha^*$ for all $k$;
(ii) there exists $\beta^* > 0$ such that $\beta_k \leq \beta^*$ for all $k$;
(iii) there exists an integer $Z \geq 1$ such that $\limsup_{t \to \infty} \frac{1}{t} \sum_{m=0}^{t} Q(m) > 0$, where for $m = 0, 1, \dots$,
$$Q(m) \doteq \Big( \frac{p_*}{n} \Big)^{Z} \Big[ \log\frac{1}{n-1} + \sum_{k=mZ}^{(m+1)Z-1} \log(1 + \beta_k) \Big] + \Big( 1 - \big( 1 - \frac{p^*}{n} \big)^{E_0 Z} \Big) \Big[ \sum_{k=mZ}^{(m+1)Z-1} \log(1 - \alpha_k) \Big].$$

Proof.
Since $\alpha_k < 1$ for all $k$, the structure of the algorithm automatically implies that $\mathcal{M}(k) > 0$ is a sure event for all $k$ as long as $\mathcal{M}(0) > 0$. As a result, we can well define a sequence of random variables $\zeta_k = \frac{\mathcal{M}(k+1)}{\mathcal{M}(k)}$, $k = 0, 1, \dots$, as long as $\mathcal{M}(0) > 0$. From the considered randomized algorithm, it is easy to see that
$$\mathbb{P}\Big( \zeta_k = \frac{\mathcal{M}(k+1)}{\mathcal{M}(k)} \geq 1 - \alpha_k \Big) = 1 \quad (27)$$
and $\mathbb{P}(\zeta_k < 1) \leq \mathbb{P}(\text{an attractive arc is selected})$. Moreover, since $\mathcal{G}_{rep}$ is weakly connected, for any time $k$, there must be two nodes $i$ and $j$ with $(i, j) \in \mathcal{E}_{rep}$ such that $|x_i(k) - x_j(k)| \geq \frac{\mathcal{M}(k)}{n-1}$. Thus, if such an $(i, j)$ is selected for times $k, k+1, \dots, k + Z - 1$, we have
$$\mathcal{M}(k + Z) \geq \big| x_i(k + Z) - x_j(k + Z) \big| \geq \frac{\mathcal{M}(k)}{n-1} \prod_{s=0}^{Z-1} (1 + \beta_{k+s}). \quad (28)$$
Now we define a new sequence of random variables associated with the node pair selection process, $\chi^Z_m$, $m = 0, 1, \dots$, by
a) $\chi^Z_m = \prod_{k=mZ}^{(m+1)Z-1} (1 - \alpha_k)$, if at least one attractive arc is selected for some $k \in [mZ, (m+1)Z - 1]$;
b) $\chi^Z_m = \frac{1}{n-1} \prod_{k=mZ}^{(m+1)Z-1} (1 + \beta_k)$, if one repulsive arc $(i, j)$ satisfying $|x_i(mZ) - x_j(mZ)| \geq \frac{\mathcal{M}(mZ)}{n-1}$ is selected for all $k \in [mZ, (m+1)Z - 1]$;
c) $\chi^Z_m = 1$ otherwise.
In light of (27) and (28), we have
$$\mathbb{P}\Big( \prod_{k=mZ}^{(m+1)Z-1} \zeta_k \geq \chi^Z_m,\ m = 0, 1, 2, \dots \Big) = 1. \quad (29)$$
From direct calculation according to the definition of $\chi^Z_m$, we have $\mathbb{E}\big( \log \chi^Z_m \big) \geq Q(m)$. We next invoke an argument from the strong law of large numbers to show that
$$\mathbb{P}\Big( \limsup_{t \to \infty} \sum_{m=0}^{t} \log \chi^Z_m = \infty \Big) = 1. \quad (30)$$
Suppose there exist two constants $M_0 \geq 0$ and $0 < q \leq 1$ such that $\mathbb{P}\big( \limsup_{t \to \infty} \sum_{m=0}^{t} \log \chi^Z_m \leq M_0 \big) \geq q$. This leads to
$$\mathbb{P}\Big( \limsup_{t \to \infty} \frac{1}{t} \sum_{m=0}^{t} \log \chi^Z_m \leq 0 \Big) \geq q. \quad (31)$$
On the other hand, noting that the node updates are independent of time and node states, and that $\mathrm{Var}(\log \chi^Z_m)$ is bounded in light of the bounds on $\alpha_k$ and $\beta_k$, the strong law of large numbers suggests that
$$\mathbb{P}\Big( \limsup_{t \to \infty} \frac{1}{t} \sum_{m=0}^{t} \big( \log \chi^Z_m - Q(m) \big) \geq 0 \Big) \geq \mathbb{P}\Big( \lim_{t \to \infty} \frac{1}{t} \sum_{m=0}^{t} \big( \log \chi^Z_m - \mathbb{E} \log \chi^Z_m \big) = 0 \Big) = 1,$$
which contradicts (31) if $\limsup_{t \to \infty} \frac{1}{t} \sum_{m=0}^{t} Q(m) > 0$. Thus, (30) is proved. The final piece of the proof is based on (29). With the definition of $\zeta_k$, (29) yields
$$\mathbb{P}\Big( \log \mathcal{M}\big( (t+1) Z \big) - \log \mathcal{M}(0) = \sum_{k=0}^{(t+1)Z - 1} \log \zeta_k \geq \sum_{m=0}^{t} \log \chi^Z_m,\ t = 0, 1, 2, \dots \Big) = 1. \quad (32)$$
This gives us $\mathbb{P}\big( \limsup_{t \to \infty} \log \mathcal{M}((t+1)Z) = \infty \big) = 1$ in light of (30). The desired conclusion holds. $\square$
VI. CONCLUSIONS
A randomized consensus algorithm with both attractiveand repulsive links has been studied under an asymmetricgossiping model. The repulsive update was defined in thesense that a negative instead of a positive weight is imposedin the update. This model can represent the influence ofcertain link faults or malicious attacks in a communicationnetwork, or the spreading of trust and antagonism in a socialnetwork. We established various conditions for probabilisticconvergence or divergence, and proved the existence of aphase-transition threshold for convergence in expectation.An explicit value of the threshold was derived for classesof attractive and repulsive graphs. Future work includes theanalysis for the symmetric update model and the structureoptimization of the repulsive graph so that the maximumdamage can be created for the network.A
APPENDIX
Lemma 4:
Let $A$, $B$ be two matrices in $\mathbb{R}^{n \times n}$, and let $\| \cdot \|_*$ be any matrix norm. Then $g(\lambda) \triangleq \| A + \lambda B \|_*$, $\lambda \in \mathbb{R}$, is a convex function of $\lambda$.
Proof. Noting that
\[
g\big( t\lambda_1 + (1-t)\lambda_2 \big) = \big\| t (A + \lambda_1 B) + (1-t)(A + \lambda_2 B) \big\|_* \le \big\| t (A + \lambda_1 B) \big\|_* + \big\| (1-t)(A + \lambda_2 B) \big\|_* = t g(\lambda_1) + (1-t) g(\lambda_2) \tag{33}
\]
for all $t \in [0, 1]$ and $\lambda_1, \lambda_2 \in \mathbb{R}$, the lemma is proved. $\Box$
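As a quick numerical sanity check of Lemma 4, the snippet below verifies the convexity inequality (33) for the maximum-column-sum (induced 1-) norm; the matrices $A$ and $B$ are arbitrary illustrative choices.

```python
# Numerical check of Lemma 4 in the maximum-column-sum (induced 1-) norm;
# A and B below are arbitrary illustrative matrices.
def col_sum_norm(M):
    """Maximum absolute column sum of a square matrix."""
    n = len(M)
    return max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))

def g(A, B, lam):
    """g(lambda) = ||A + lambda * B|| in the column-sum norm."""
    n = len(A)
    return col_sum_norm([[A[i][j] + lam * B[i][j] for j in range(n)]
                         for i in range(n)])

A = [[1.0, -2.0], [0.5, 3.0]]
B = [[0.0, 1.0], [-1.0, 2.0]]
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    for l1, l2 in ((-3.0, 2.0), (0.0, 5.0), (-1.0, -0.5)):
        lhs = g(A, B, t * l1 + (1 - t) * l2)
        rhs = t * g(A, B, l1) + (1 - t) * g(A, B, l2)
        assert lhs <= rhs + 1e-12    # the convexity inequality (33)
```

The proof of (33) uses only the triangle inequality and homogeneity, so the same check passes for any matrix norm; the column-sum norm simply keeps the sketch dependency-free.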
Lemma 5: Let $K > 0$ be a positive constant, and let $\mathcal{M}_K$ be a subset of matrices in $\mathbb{R}^{n \times n}$ such that
(i) $|M_{ij}| \le K$ for all $M \in \mathcal{M}_K$;
(ii) $M_1 M_2 = M_2 M_1$ for all $M_1, M_2 \in \mathcal{M}_K$.
Then for any $\epsilon > 0$, there is a matrix norm $\| \cdot \|_\star$ such that
\[
\rho(A) \le \| A \|_\star \le \rho(A) + \epsilon \tag{34}
\]
for all $A \in \mathcal{M}_K$.
Proof. Based on the simultaneous triangularization theorem (Theorem 2.3.3, [2]), there is a unitary matrix $U \in \mathbb{C}^{n \times n}$ such that $U^* M U$ is upper triangular for every $M \in \mathcal{M}_K$, since $\mathcal{M}_K$ is a commuting family. Similar to the proof of Lemma 5.6.10 in [2], we set $D_t = \operatorname{diag}(t, t^2, \dots, t^n)$, and the desired matrix norm $\| \cdot \|_\star$ is obtained as $\| B \|_\star = \| D_t U^* B U D_t^{-1} \|$ by taking $t$ sufficiently large. $\Box$
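The scaling step in this proof can be made concrete. In the sketch below the matrix is taken to be upper triangular already (i.e., the Schur/simultaneous-triangularization step is skipped), the norm is the maximum-column-sum norm, and the specific matrix is an illustrative choice.

```python
# Concrete view of the scaling step in the proof of Lemma 5: for a matrix that
# is already upper triangular, the similarity D_t A D_t^{-1} with
# D_t = diag(t, t^2, ..., t^n) shrinks every off-diagonal entry by t^-(j-i),
# so the maximum-column-sum norm of the scaled matrix approaches rho(A).
def scaled_norm(A, t):
    n = len(A)
    # (D_t A D_t^{-1})[i][j] = A[i][j] * t**(i - j)  (0-indexed rows/columns)
    S = [[A[i][j] * t ** (i - j) for j in range(n)] for i in range(n)]
    return max(sum(abs(S[i][j]) for i in range(n)) for j in range(n))

A = [[0.9, 5.0, -7.0],
     [0.0, 0.5, 11.0],
     [0.0, 0.0, -0.8]]                       # illustrative upper-triangular matrix
rho = max(abs(A[i][i]) for i in range(3))    # spectral radius = 0.9
norms = [scaled_norm(A, t) for t in (10.0, 100.0, 1000.0)]
assert all(nm >= rho for nm in norms)        # any matrix norm dominates rho(A)
assert norms[-1] <= rho + 0.05               # within epsilon once t is large
```

For a commuting family, the same single $U$ (and hence the same $\| \cdot \|_\star$) works for every member, which is exactly what (34) requires.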
Proof of Proposition 4. Case (i). Given $\epsilon > 0$ and $\alpha \in [0, 1]$, take $K_* > 0$. If $L_{\rm att} L_{\rm rep} = L_{\rm rep} L_{\rm att}$, it is easy to see that $\mathcal{M}_{K_*} \doteq \{ (I - T_n) \bar{W} : 0 \le \beta \le K_* \}$ is a commuting family satisfying the conditions in Lemma 5. Let $\| \cdot \|_\star$ be the matrix norm established in Lemma 5, and introduce $g_\star(\beta) \triangleq \| (I - T_n) \bar{W} \|_\star$. When $G_{\rm att}$ has a spanning tree, it is well known that
\[
f(\alpha, 0) = \rho\Big( (I - T_n)\big( I - \tfrac{\alpha}{n} L_{\rm att} \big) \Big) = 1 - \delta < 1
\]
for some $0 < \delta \le 1$. This gives us
\[
g_\star(0) \le \rho\Big( (I - T_n)\big( I - \tfrac{\alpha}{n} L_{\rm att} \big) \Big) + \epsilon \le 1 - \delta + \epsilon \tag{35}
\]
in light of Lemma 5.
Now take $K_* \ge \beta_2 > \beta_1 \ge 0$. We make the following claim.
Claim. $f(\alpha, \beta_2) > 1$ if $f(\alpha, \beta_1) > 1$.
The fact that $f(\alpha, \beta_1) > 1$ leads to $g_\star(\beta_1) \ge f(\alpha, \beta_1) > 1$. According to Lemma 4, $g_\star(\cdot)$ is a convex function, which implies that
\[
g_\star(\beta_1) \le \Big( 1 - \frac{\beta_1}{\beta_2} \Big) g_\star(0) + \frac{\beta_1}{\beta_2} g_\star(\beta_2). \tag{36}
\]
With (35) and (36), we conclude that
\[
g_\star(\beta_2) \ge 1 + \Big( \frac{\beta_2}{\beta_1} - 1 \Big) \delta - \Big( \frac{\beta_2}{\beta_1} - 1 \Big) \epsilon, \tag{37}
\]
which in turn yields
\[
f(\alpha, \beta_2) \ge g_\star(\beta_2) - \epsilon \ge 1 + \Big( \frac{\beta_2}{\beta_1} - 1 \Big) \delta - \frac{\beta_2}{\beta_1} \epsilon, \tag{38}
\]
again based on Lemma 5. Noting that $\epsilon$ in (38) can be chosen arbitrarily small, and that $\delta$, $\beta_2/\beta_1$, and $f(\alpha, \beta_2)$ do not rely on the choice of $\| \cdot \|_\star$, we have proved that $f(\alpha, \beta_2) > 1$. The claim holds.
Next, we introduce $\mathcal{Z}_\beta = \{ \beta \ge 0 : f(\alpha, \beta) > 1 \}$. First of all, $\mathcal{Z}_\beta$ is nonempty when $L_{\rm rep}$ contains at least one link, due to the simple fact that $\lim_{\beta \to \infty} f(\alpha, \beta) = \infty$. Secondly,
\[
\beta_\star \triangleq \inf \{ \beta \ge 0 : f(\alpha, \beta) > 1 \} \tag{39}
\]
is a finite number in light of the claim we just established. It is straightforward to verify that $f(\alpha, \beta) > 1$ when $\beta > \beta_\star$. The fact that $f(\alpha, \beta) < 1$ when $\beta < \beta_\star$ can be proved via a symmetric argument. This completes the proof for case (i).
Case (ii). From Lemma 4 and the fact that $f(\alpha, \beta) = \| (I - T_n) \bar{W} \|$, $f(\cdot, \beta)$ is a convex function of $\alpha$ and $f(\alpha, \cdot)$ is a convex function of $\beta$. The desired conclusion follows immediately. $\Box$
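For a concrete feel of the threshold $\beta_\star$ in (39), the sketch below evaluates $f(\alpha, \beta) = \rho\big( (I - T_n)\bar{W} \big)$ for a small example and locates the crossing $f = 1$ by bisection. It assumes $\bar{W} = I - \frac{\alpha}{n} L_{\rm att} + \frac{\beta}{n} L_{\rm rep}$ and $T_n = \frac{1}{n}\mathbf{1}\mathbf{1}^T$, as suggested by the expression in the proof of Proposition 5; the example graphs (an attractive path over three nodes plus one repulsive link between its endpoints) are our own choice.

```python
import math

# Numerical illustration of the threshold beta_star in (39), assuming
# W_bar = I - (alpha/n) L_att + (beta/n) L_rep and T_n = (1/n) ones*ones^T.
# Example graphs (our choice): attractive path over nodes {1,2,3}, repulsive
# link between the two endpoints, n = 3.
L_ATT = [[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]]
L_REP = [[1.0, 0.0, -1.0], [0.0, 0.0, 0.0], [-1.0, 0.0, 1.0]]
N = 3

def spectral_radius(M, iters=300):
    """Power iteration; M is symmetric here, so the norm ratio tends to rho(M)."""
    v = [1.0, 0.3, -0.7]                                  # generic start vector
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]
        nrm = math.sqrt(sum(c * c for c in w))
        if nrm == 0.0:
            return 0.0
        v = [c / nrm for c in w]
    w = [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]
    return math.sqrt(sum(c * c for c in w))

def f(alpha, beta):
    """f(alpha, beta) = rho((I - T_n) W_bar)."""
    w_bar = [[(1.0 if i == j else 0.0) - alpha / N * L_ATT[i][j]
              + beta / N * L_REP[i][j] for j in range(N)] for i in range(N)]
    i_minus_t = [[(1.0 if i == j else 0.0) - 1.0 / N for j in range(N)]
                 for i in range(N)]
    m = [[sum(i_minus_t[i][k] * w_bar[k][j] for k in range(N))
          for j in range(N)] for i in range(N)]
    return spectral_radius(m)

lo, hi = 0.0, 2.0                 # f(0.5, 0) < 1 < f(0.5, 2): bisect the crossing
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if f(0.5, mid) > 1.0:
        hi = mid
    else:
        lo = mid
beta_star = 0.5 * (lo + hi)
print(f(0.5, 0.0), beta_star)
```

For this example the two eigenvalues of $(I - T_3)\bar{W}$ on $\mathbf{1}^\perp$ work out to $1 - \alpha/3 + 2\beta/3$ and $1 - \alpha$, so with $\alpha = 0.5$ the crossing sits at $\beta_\star = 1/4$, which the bisection recovers; $f$ crosses from below one to above one exactly once, consistent with the claim in case (i).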
Proof of Proposition 5. According to the proof of Proposition 2, the eigenvalues of $(I - T_n)\bar{W}$ are all nonnegative when $\alpha \in [0, 1]$ and $\beta \in [0, \infty)$, for both $P_{\rm att}$ and $P_{\rm rep}$. The Courant-Fischer theorem guarantees that
\[
\rho\big( (I - T_n) \bar{W} \big) = \max_{\|z\|=1} z^T \Big( I - \frac{\alpha}{n} L_{\rm att} + \frac{\beta}{n} L_{\rm rep} - T_n \Big) z = \max_{\|z\|=1} \Big[ 1 - \frac{\alpha}{n} \sum_{(i,j) \in E_{\rm att},\, i<j} (z_i - z_j)^2 + \frac{\beta}{n} \sum_{(i,j) \in E_{\rm rep},\, i<j} (z_i - z_j)^2 - z^T T_n z \Big].
\]

REFERENCES

[1] Introduction to Matrix Analytic Methods in Stochastic Modeling.
[2] Matrix Analysis. Cambridge University Press, 1985.
[3] C. Godsil and G. Royle, Algebraic Graph Theory. New York: Springer-Verlag, 2001.
[4] B. T. Polyak, Introduction to Optimization. Optimization Software, New York, NY, 1987.
[5] P. Erdős and A. Rényi, "On the evolution of random graphs," Publications of the Mathematical Institute of the Hungarian Academy of Sciences, pp. 17-61, 1960.
[6] X. Ding and T. Jiang, "Spectral distributions of adjacency and Laplacian matrices of random graphs," The Annals of Applied Probability, vol. 20, no. 6, pp. 2086-2117, 2010.
[7] M. H. DeGroot, "Reaching a consensus," Journal of the American Statistical Association, vol. 69, no. 345, pp. 118-121, 1974.
[8] P. M. DeMarzo, D. Vayanos, and J. Zwiebel, "Persuasion bias, social influence, and unidimensional opinions," Quarterly Journal of Economics, vol. 118, no. 3, pp. 909-968, 2003.
[9] D. Acemoglu, A. Ozdaglar, and A. ParandehGheibi, "Spread of (mis)information in social networks," Games and Economic Behavior, vol. 70, no. 2, pp. 194-227, 2010.
[10] D. Acemoglu, G. Como, F. Fagnani, and A. Ozdaglar, "Opinion fluctuations and persistent disagreement in social networks," in IEEE Conference on Decision and Control, pp. 2347-2352, Orlando, 2011.
[11] V. D. Blondel, J. M. Hendrickx, and J. N. Tsitsiklis, "Continuous-time average-preserving opinion dynamics with opinion-dependent communications," SIAM Journal on Control and Optimization, vol. 48, no. 8, pp. 5214-5240, 2010.
[12] S. Muthukrishnan, B. Ghosh, and M. Schultz, "First and second order diffusive methods for rapid, coarse, distributed load balancing," Theory of Computing Systems, vol. 31, pp. 331-354, 1998.
[13] R. Diekmann, A. Frommer, and B. Monien, "Efficient schemes for nearest neighbor load balancing," Parallel Computing, vol. 25, pp. 789-812, 1999.
[14] A.-M. Kermarrec, L. Massoulié, and A. J. Ganesh, "Probabilistic reliable dissemination in large-scale systems," IEEE Transactions on Parallel and Distributed Systems, vol. 14, no. 3, pp. 248-258, 2003.
[15] D. W. Soh, W. P. Tay, and T. Q. S. Quek, "Randomized information dissemination in dynamic environments," IEEE/ACM Transactions on Networking, in press.
[16] J. Tsitsiklis, D. Bertsekas, and M. Athans, "Distributed asynchronous deterministic and stochastic gradient optimization algorithms," IEEE Trans. Autom. Control, vol. 31, pp. 803-812, 1986.
[17] A. Jadbabaie, J. Lin, and A. S. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Trans. Autom. Control, vol. 48, no. 6, pp. 988-1001, 2003.
[18] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proc. IEEE, vol. 95, no. 1, pp. 215-233, 2007.
[19] W. Ren and R. Beard, "Consensus seeking in multi-agent systems under dynamically changing interaction topologies," IEEE Trans. Autom. Control, vol. 50, no. 5, pp. 655-661, 2005.
[20] I. D. Couzin, J. Krause, N. Franks, and S. Levin, "Effective leadership and decision-making in animal groups on the move," Nature, vol. 433, no. 7025, pp. 513-516, 2005.
[21] R. Olfati-Saber, "Flocking for multi-agent dynamic systems: algorithms and theory," IEEE Trans. Autom. Control, vol. 51, no. 3, pp. 401-420, 2006.
[22] Y. Hatano and M. Mesbahi, "Agreement over random networks," IEEE Trans. Autom. Control, vol. 50, no. 11, pp. 1867-1872, 2005.
[23] A. Tahbaz-Salehi and A. Jadbabaie, "A necessary and sufficient condition for consensus over random networks," IEEE Trans. Autom. Control, vol. 53, no. 3, pp. 791-795, 2008.
[24] R. Carli, F. Fagnani, A. Speranzon, and S. Zampieri, "Communication constraints in the average consensus problem," Automatica, vol. 44, pp. 671-684, 2008.
[25] F. Fagnani and S. Zampieri, "Randomized consensus algorithms over large scale networks," IEEE J. on Selected Areas in Communications, vol. 26, no. 4, pp. 634-649, 2008.
[26] B. Touri and A. Nedić, "On ergodicity, infinite flow and consensus in random models," IEEE Transactions on Automatic Control, vol. 56, no. 7, pp. 1593-1605, 2011.
[27] G. Shi and K. H. Johansson, "The role of persistent graphs in the agreement seeking of social networks," IEEE Journal on Selected Areas in Communications, Special Issue on Emerging Technologies in Communications, vol. 31, no. 9, pp. 595-606, 2013.
[28] Q. Song, G. Chen, and D. W. C. Ho, "On the equivalence and condition of different consensus over a random network generated by i.i.d. stochastic matrices," IEEE Trans. Automatic Control, vol. 56, no. 5, pp. 1203-1207, 2011.
[29] S. Kar and J. M. F. Moura, "Sensor networks with random links: topology design for distributed consensus," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3315-3326, 2008.
[30] S. Kar and J. M. F. Moura, "Distributed consensus algorithms in sensor networks with imperfect communication: link failures and channel noise," IEEE Transactions on Signal Processing, vol. 57, no. 5, pp. 355-369, 2009.
[31] T. C. Aysal and K. E. Barner, "Convergence of consensus models with stochastic disturbances," IEEE Trans. Inf. Theory, vol. 56, no. 8, pp. 4101-4113, 2010.
[32] T. Li and J.-F. Zhang, "Consensus conditions of multi-agent systems with time-varying topologies and stochastic communication noises," IEEE Trans. Autom. Control, vol. 55, no. 9, pp. 2043-2057, 2010.
[33] M. Huang and J. H. Manton, "Stochastic consensus seeking with noisy and directed inter-agent communication: fixed and randomly varying topologies," IEEE Trans. Autom. Control, vol. 55, no. 1, pp. 235-241, 2010.
[34] G. Shi and K. H. Johansson, "Randomized optimal consensus of multi-agent systems," Automatica, vol. 48, no. 12, pp. 3018-3030, 2012.
[35] G. Shi and K. H. Johansson, "Multi-agent robust consensus - Part I: convergence analysis," pp. 5744-5749, 2011.
[36] I. Matei, N. Martins, and J. S. Baras, "Almost sure convergence to consensus in Markovian random graphs," in Proc. 47th IEEE CDC, Cancun, Mexico, Dec. 2008, pp. 3535-3540.
[37] S. Patterson, B. Bamieh, and A. El Abbadi, "Convergence rates of distributed average consensus with stochastic link failures," IEEE Trans. Autom. Control, vol. 55, no. 4, pp. 880-892, 2010.
[38] D. Kempe, A. Dobra, and J. Gehrke, "Gossip-based computation of aggregate information," in Proc. Symp. Foundations of Computer Science, pp. 482-491, 2003.
[39] R. Karp, C. Schindelhauer, S. Shenker, and B. Vöcking, "Randomized rumor spreading," in Proc. Symp. Foundations of Computer Science, pp. 564-574, 2000.
[40] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, "Randomized gossip algorithms," IEEE Trans. Inf. Theory, vol. 52, no. 6, pp. 2508-2530, 2006.
[41] G. Shi, M. Johansson, and K. H. Johansson, "Randomized gossip algorithm with unreliable communication," arXiv:1203.6028; a conference version appeared in IEEE CDC, Dec. 2012.
[42] A. A. Cárdenas, S. Amin, B. Sinopoli, A. Giani, A. Perrig, and S. S. Sastry, "Challenges for securing cyber physical systems," in Workshop on Future Directions in Cyber-physical Systems Security, Newark, NJ, Jul. 2009.
[43] G. Theodorakopoulos and J. S. Baras, "Game theoretic modeling of malicious users in collaborative networks," IEEE Journal on Selected Areas in Communications, Special Issue on Game Theory in Communication Systems, vol. 26, no. 7, pp. 1317-1327, 2008.
[44] F. Heider, "Attitudes and cognitive organization," J. Psychol., vol. 21, pp. 107-122, 1946.
[45] D. Cartwright and F. Harary, "Structural balance: a generalization of Heider's theory," Psychol. Rev., vol. 63, pp. 277-292, 1956.
[46] G. Facchetti, G. Iacono, and C. Altafini, "Computing global structural balance in large-scale signed social networks," PNAS, vol. 108, no. 52, pp. 20953-20958, 2011.
[47] C. Altafini, "Consensus problems on networks with antagonistic interactions," IEEE Trans. on Automatic Control, vol. 58, no. 4, pp. 935-946, 2013.
[48] G. Shi, M. Johansson, and K. H. Johansson, "How agreement and disagreement evolve over random dynamic networks,"