Consensus in the presence of interference
Usman A. Khan, Member, IEEE, Shuchin Aeron, Member, IEEE
Abstract
This paper studies distributed strategies for average-consensus of arbitrary vectors in the presence of network interference. We assume that the underlying communication on any link suffers from additive interference caused by the communication of other agents following their own consensus protocol. Additionally, no agent knows how many or which agents are interfering with its communication. Clearly, the standard consensus protocol does not remain applicable in such scenarios. In this paper, we cast an algebraic structure over the interference and show that the standard protocol can be modified such that the average is reachable in a subspace whose dimension is complementary to the maximal dimension of the interference subspaces (over all of the communication links). To develop the results, we use information alignment to align the intended transmission (over each link) to the null space of the interference (on that link). We show that this alignment is indeed invertible, i.e., the intended transmission can be recovered, over which, subsequently, the consensus protocol is implemented. That local protocols exist even when the collection of the interference subspaces spans the entire vector space is somewhat surprising.
I. INTRODUCTION
In this paper, we consider the design and analysis of average-consensus protocols (averaging vectors in R^n) in the presence of network interference. Each agent, while communicating locally with its neighbors for consensus, causes an interference in other communication links. We assume that these interferences are additive and lie on low-dimensional subspaces. Such interference models have been widely used in several applications, e.g., electromagnetic brain imaging [1], magnetoencephalography [2], [3], beamforming [4], [5], and multiple-access channels [6], [7]. Interference cancellation, thus, has been an important subject of study in the aforementioned areas towards designing matched detectors, adaptive beamformers, and generalized hypothesis testing [8]–[13].

As distributed architectures gain traction, information is to be processed distributedly for the purposes of learning, inference, and actuation. Average-consensus, thus, is a fundamental notion in distributed decision-making, see [14]–[21] among others. When the inter-agent communication is noiseless and interference-free, the standard protocol is developed in [22]. Subsequently, a number of papers [23]–[25] consider average-consensus in imperfect scenarios. Reference [26] considers consensus with link failures and channel noise, while [27] addresses asymmetric links with asymmetry in packet losses. Consensus under stochastic disturbances is considered in [28], while [29] studies a natural superposition property of the communication medium and uses computation codes to achieve energy-efficient consensus.

The authors are with the Department of Electrical and Computer Engineering at Tufts University, Medford. Email: {khan, shuchin}@ece.tufts.edu.

April 1, 2019 DRAFT

In contrast to the past work outlined above, we focus on an algebraic model for network interference.
We assume that the underlying communication on any link suffers from additive interference caused by the communication of other agents following their own consensus protocol. The corresponding interference subspace, in general, depends on the communication link and the interfering agent. A fortiori, it is clear that if the interference by an agent is persistent in all dimensions (R^n), there is no way to recover the true average unless schemes similar to interference alignment [30] are used. In these alignment schemes, the data is projected onto higher dimensions such that the interferences and the data lie in different low-dimensional subspaces; clearly, requiring an increase in the communication resources.

On the other hand, if the interference from each agent already lies in (possibly different) low-dimensional subspaces, the problem we address is whether one can exploit this low-dimensionality for interference cancellation, and subsequently, for consensus. Furthermore, we address how much information can be recovered when the collection of the local interferences spans the entire vector space, R^n. Our contribution in this context is to develop information alignment strategies for interference cancellation and derive a class of (vector) consensus protocols that lead to a meaningful consensus. In particular, we show that the proposed alignment achieves the average in a subspace whose dimension is complementary to the maximal dimension of the interference subspaces (over all of the communication links). To be specific, if agent j sends x_j ∈ R^n to agent i, agent i actually receives x_j + Σ_m Γ x_m, with γ ≜ rank(Γ)
II. NOTATION AND PRELIMINARIES
We use lowercase bold letters to denote vectors and uppercase italics for matrices (unless clear from the context). The symbols 1_n and 0_n are the n-dimensional column vectors of all 1's and all 0's, respectively. The identity and zero matrices of size n are denoted by I_n and 0_{n×n}, respectively. We assume a network of N agents, indexed by i = 1, ..., N, connected via an undirected graph, G = (V, E), where V is the set of agents and E is the set of links, (i, j), such that agent j ∈ V can send information to agent i ∈ V, i.e., j → i. Over this graph, we denote the neighbors of agent i as N_i, i.e., the set of all agents that can send information to agent i: N_i = {j | (i, j) ∈ E}.

In the entire paper, the initial condition at an agent, i ∈ V, is denoted by an n-dimensional vector, x^i_0 ∈ R^n. For any arbitrary vector, x_i ∈ R^n, we use ⊕x_i to denote the subspace spanned by x_i, i.e., the collection of all α x_i, with α ∈ R. Similarly, for a matrix, A ∈ R^{n×n}, we use ⊕A to denote the (range space) subspace spanned by the columns of A:

⊕A = { Σ_{i=1}^n α_i a_i | α_i ∈ R },  A = [ a_1 ... a_n ].

For a collection of matrices, A_j ∈ R^{n×n}, j = 1, ..., N, we use ⊕_j A_j to denote the subspace spanned by all of the columns in all of the A_j's: let A_j = [ a_{j1} ... a_{jn} ], then

⊕_j A_j = { Σ_{j=1}^N β_j Σ_{i=1}^n α_i a_{ji} | α_i, β_j ∈ R }.

Let rank(A) = γ, for some non-negative integer γ ≤ n; then dim(⊕A) = rank(A) = γ. The pseudo-inverse of A is denoted by A† ∈ R^{n×n}; the orthogonal projection, x̃_i, of an arbitrary vector, x_i ∈ R^n, on the range space, ⊕A, is given by the matrix I_A = AA†, i.e.,

x̃_i = I_A x_i = AA† x_i.  (1)

With this notation, x̃_i ∈ ⊕A ⊆ R^n. Clearly, I_A I_A = AA† AA† = AA† = I_A is a projection matrix, from the properties of the pseudo-inverse: AA†A = A and A†AA† = A†.
Note that when x_i ∈ ⊕A, then I_A x_i = x_i. The Singular Value Decomposition (SVD) of A is given by A = U_A S_A V_A^⊤ with U_A U_A^⊤ = I_n, V_A V_A^⊤ = I_n; then A† = V_A S_A† U_A^⊤, where S_A† is the pseudo-inverse of the diagonal matrix of the singular values, S_A (with 0† = 0). When A is full-rank, we have A† = A^{-1}, I_A = I_n. Since γ = rank(A), the singular vectors (U_A, V_A) can be arranged such that

I_A = AA† = U_A S_A V_A^⊤ V_A S_A† U_A^⊤ = U_A S_A S_A† U_A^⊤,  (2)
= U_A [ 0_{(n−γ)×(n−γ)}  0 ; 0  I_γ ] U_A^⊤.  (3)
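The projection properties above can be checked numerically; the sizes and the random rank-2 matrix below are illustrative assumptions.

```python
import numpy as np

# Sketch (assumed instance): build a rank-2 matrix A in R^{4x4}, form the
# orthogonal projection I_A = A A^+ onto its range space, and check the
# quoted properties: idempotence, symmetry, and eigenvalues in {0, 1}.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))  # rank 2

I_A = A @ np.linalg.pinv(A)                  # I_A = A A^+

assert np.allclose(I_A @ I_A, I_A)           # projection: I_A^2 = I_A
assert np.allclose(I_A, I_A.T)               # symmetric (orthogonal projection)
assert np.allclose(np.sort(np.linalg.eigvalsh(I_A)), [0, 0, 1, 1])

# When x already lies in the range of A, the projection leaves it fixed.
x = A @ rng.standard_normal(4)
assert np.allclose(I_A @ x, x)
```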
From the above, the projection matrix, I_A, is symmetric with orthogonal eigenvectors (equivalently, left and right singular vectors), U_A, such that its eigenvalues (singular values) are either 0's or 1's.

For some W = {w_ij} ∈ R^{N×N} and some A = {a_ij} ∈ R^{n×n} with w_ij, a_ij ∈ R, the matrix Kronecker product is

W ⊗ A = [ w_11 A  w_12 A  ...  w_1N A ; ... ; w_N1 A  w_N2 A  ...  w_NN A ],  (4)

which lies in R^{nN×nN}. It can be verified that I_N ⊗ A is a block-diagonal matrix where each diagonal block is A, with a total of N blocks. We have W ⊗ A = (W ⊗ I_n)(I_N ⊗ A). The following properties are useful in the context of this paper:

(W ⊗ I_n)(I_N ⊗ A) = (I_N ⊗ A)(W ⊗ I_n),  (5)
(W ⊗ I_n)^k = (W^k ⊗ I_n),  (6)

for some non-negative integer, k. More details on these notions can be found in [31].

III. PROBLEM FORMULATION
We consider average-consensus in a multi-agent network when the inter-agent communication is subject to unwanted interference, i.e., the desired communication, x_j ∈ R^n, from agent j ∈ V to agent i ∈ V has an additive term, z_ij ∈ R^n, resulting in agent i receiving x_j + z_ij from agent j. We consider the case when this unwanted interference is linear. In particular, every link, j → i or (i, j) ∈ E, incurs the following additive interference:

z_ij = Σ_{m∈V} a^m_ij Γ^m_ij x_m,  (7)

where: a^m_ij = 1 if agent m ∈ V interferes with j → i, and 0 otherwise; and Γ^m_ij ∈ R^{n×n} is the interference gain when m ∈ V interferes with the j → i communication. What agent i actually receives from agent j is thus:

x^j_k + Σ_{m∈V} a^m_ijk Γ^m_ijk x^m_k,  (8)

at time k, where the subscript 'ijk' introduces the time dependency on the corresponding variables, see Fig. 1.

Fig. 1. Interference model: Note that agent j may also interfere with the j → i communication, i.e., m_1 or m_2 can be j. This may happen when agent j's transmission to agents other than agent i interferes with the j → i channel.
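The link model (7)-(8) can be sketched as follows; the sizes, gains, and the set of active interferers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the link model (7)-(8): agent i receives the intended x_j plus
# a gained sum over the active interferers. Sizes, gains, and the indicator
# a[m] are assumed for illustration.
rng = np.random.default_rng(0)
n, N = 3, 4
x = rng.standard_normal((N, n))                  # states x_m, one per agent
a = np.array([0.0, 1.0, 0.0, 1.0])               # a^m_ij: agents 1 and 3 interfere
Gammas = [rng.standard_normal((n, n)) for _ in range(N)]   # gains Γ^m_ij

j = 0                                            # transmitting neighbor
z_ij = sum(a[m] * Gammas[m] @ x[m] for m in range(N))      # Eq. (7)
received = x[j] + z_ij                                     # Eq. (8)

# The distortion is exactly the contribution of the active interferers.
assert np.allclose(received - x[j], Gammas[1] @ x[1] + Gammas[3] @ x[3])
```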
Given the interference setup, average-consensus implemented on the multi-agent network is given by

x^i_{k+1} = Σ_{j∈N_i} w_ij ( x^j_k + Σ_{m∈V} a^m_ijk Γ^m_ijk x^m_k ),  (9)

for k ≥ 0, i ∈ V, with x^i_0 ∈ R^n. Interference is only incurred when w_ij ≠ 0, which is true for each j ∈ N_i, in general. In other words, interference is incurred on all the links that are allowed by the underlying communication graph, G. The protocol in Eq. (9) reduces to the standard average-consensus [22] when there is no interference, i.e., when a^m_ijk = 0, for all i, j, k, m, and converges to

x^i_∞ ≜ lim_{k→∞} x^i_k = (1/N) Σ_{j=1}^N x^j_0.  (10)

However, when there is interference, i.e., a^m_ijk ≠ 0, Eq. (9), in general, either goes to zero or diverges at all agents. The former is applicable when the effect of the interference results in a stable weight matrix, W = {w_ij}, and the latter is in effect when the interference forces the spectral radius of the weight matrix to be greater than unity. The primary reason is that if the w_ij's are chosen to sum to 1 in each neighborhood (to ensure W^⊤ 1_N = 1_N), their effective contribution in Eq. (11) is not 1 because of the unwanted interference.

This paper studies appropriate modifications to Eq. (9) in order to achieve average-consensus. The design in this paper is based on a novel information alignment principle that ensures that the spectral radius of the mixing matrix, W, is not displaced from unity. We assume the following:

(a) No agent, i ∈ V, knows which (or how many) agents are interfering with its incoming or outgoing communication.
(b) The interference structure, a^m_ijk and Γ^m_ijk, is constant over time, k.
This assumption is to keep the exposition simple and is made without loss of generality, as we will elaborate later. Under these assumptions, the standard average-consensus protocol is given by

x^i_{k+1} = Σ_{j∈N_i} w_ij x^j_k + Σ_{j∈N_i} w_ij Σ_{m∈V} a^m_ij Γ^m_ij x^m_k,  (11)

for k ≥ 0, x^i_0 ∈ R^n. The goal of this paper is to consider distributed averaging operations in the presence of interference, not only to establish the convergence, but further to ensure that the convergence is towards a meaningful quantity. To these aims, we present a conservative solution to this problem in Section IV, which is further improved in Sections V and VI for some practically relevant scenarios.

IV. A CONSERVATIVE APPROACH
Before considering the general case within a conservative paradigm, we explore a special case of uniform interference in Sections IV-A and IV-B. We then provide the generalization in Section IV-C and shed light on the conservative solution. See [22] for the relevant conditions for convergence: W 1_N = 1_N, 1_N^⊤ W = 1_N^⊤, G is strongly-connected, and w_ij ≠ 0 for each (i, j) ∈ E.
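The interference-free baseline, Eq. (10), under the convergence conditions just recalled, can be checked numerically; the 4-agent ring and its doubly-stochastic weights are illustrative choices.

```python
import numpy as np

# Interference-free baseline: with W row- and column-stochastic on a
# connected graph, x_{k+1} = W x_k drives every agent to the average (10).
rng = np.random.default_rng(2)
N, n = 4, 3
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])   # doubly stochastic, ring graph
assert np.allclose(W.sum(axis=0), 1) and np.allclose(W.sum(axis=1), 1)

x = rng.standard_normal((N, n))          # row i holds x_i(0)
avg = x.mean(axis=0)
for _ in range(200):
    x = W @ x                            # each agent averages its neighbors

assert np.allclose(x, np.tile(avg, (N, 1)))   # every agent holds the average
```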
A. Uniform Interference
Uniform interference is when each communication link in the network experiences the same interference gain, i.e., Γ^m_ij = Γ, ∀ i, j, m. In other words, all of the blocks in the interference channel of Fig. 1 represent the same interference gain matrix, Γ ∈ R^{n×n}. In this context, Eq. (11) is given by

x^i_{k+1} = Σ_{j∈N_i} w_ij x^j_k + Σ_{m∈V} b^m_i Γ x^m_k,  (12)

where b^m_i = Σ_{j∈N_i} w_ij a^m_ij. Here, b^m_i ≠ 0 means that agent m ∈ V interferes with agent i ∈ V over some of the messages (from j ∈ N_i) received by agent i. In fact, an agent m ∈ V may interfere with agent i's reception on multiple incoming links, while an interferer, m, may also belong to N_i, i.e., the neighbors of agent i. To proceed with the analysis, we first write Eq. (11) in its matrix form: Let B be an N × N matrix whose 'im'-th element is given by b^m_i. Define the network state at time k:

x_k = [ (x^1_k)^⊤ (x^2_k)^⊤ ... (x^N_k)^⊤ ]^⊤.  (13)

Then, it can be verified that Eq. (12) is compactly written as

x_{k+1} = (W ⊗ I_n + B ⊗ Γ) x_k.  (14)

The N × N weight matrix, W, has the sparsity pattern of the consensus graph, G, while the N × N matrix, B, has the sparsity pattern of what can be referred to as the interference graph, induced by the interferers. We have the following result.

Lemma 1. If Γ x^i_0 = 0_n, ∀ i, then Γ x^i_k = 0_n, ∀ i, k.

Proof:
Note that Γ x^i_k is a local operation at the i-th agent. This is equivalent to multiplying I_N ⊗ Γ with the network vector, x_k. From the lemma's statement, we have (I_N ⊗ Γ) x_0 = 0_{nN}. Now note that (recall Section II)

(I_N ⊗ Γ)(W ⊗ I_n + B ⊗ Γ) = (W ⊗ Γ + B ⊗ Γ^2) = (W ⊗ I_n + B ⊗ Γ)(I_N ⊗ Γ).

Subsequently, multiply both sides of Eq. (14) by (I_N ⊗ Γ):

(I_N ⊗ Γ) x_{k+1} = (W ⊗ I_n + B ⊗ Γ)(I_N ⊗ Γ) x_k = (W ⊗ I_n + B ⊗ Γ)^{k+1} (I_N ⊗ Γ) x_0 = 0_{nN},

and the lemma follows.

The above lemma shows that the effect of uniform interference can be removed from the average-consensus protocol if the data (initial conditions) lies in the null space of the interference, Γ. To proceed, let us denote the interference null space (of Γ) by Θ_Γ. Recalling that ⊕_i x^i_0 denotes the subspace spanned by all of the initial conditions, the applicability of Lemma 1 is not straightforward because: (i) dim(⊕_i x^i_0) > dim(Θ_Γ), in general; and, (ii) even when dim(⊕_i x^i_0) ≤ dim(Θ_Γ), the data subspace, ⊕_i x^i_0, may not belong to the null space of the interference, Θ_Γ. However, intuitively, a scheme can be conceived as follows: project the data on a low-dimensional subspace, S, such that dim(S) ≤ dim(Θ_Γ); and align this projected subspace, S, on the null space, Θ_Γ, of the interference. At this point, we must ensure that this alignment is reversible so that its effect can be undone in order to recover the projected data subspace, S. To this aim, we provide the following lemma.

Lemma 2.
For some 0 ≤ γ ≤ n, let Γ ∈ R^{n×n} have rank γ̄ = n − γ, and let another matrix, I_S ∈ R^{n×n}, have rank γ. There exists a full-rank preconditioning, T ∈ R^{n×n}, such that Γ T I_S = 0_{n×n}.

Proof: Since Γ has rank γ̄, there exists a singular value decomposition, Γ = U S V^⊤, where the n × n diagonal matrix S is such that its first γ̄ elements are the singular values of Γ, and the remaining γ elements are zeros. With this structure on S, the matrix V can be partitioned into

V = [ V_1  V_2 ],  (15)

(with V_1 ∈ R^{n×γ̄} and V_2 ∈ R^{n×γ}), where ⊕V_2 is the null space of Γ. Similarly, I_S = U_S S_S V_S^⊤ with rank γ, where the matrices, U_S and V_S, are arranged such that the first γ̄ diagonals of S_S are zeros and the remaining are the γ singular values of I_S. Define

T = [ V_1'  V_2' ] U_S^⊤,  (16)

where V_2' is such that ⊕V_2' = ⊕V_2, and V_1' is chosen arbitrarily such that T is invertible. With this construction, note that V_1^⊤ V_2' is a zero matrix because ⊕V_2' = ⊕V_2 is orthogonal to the column-span of V_1 (by the definition of the SVD). We have

Γ T I_S = U S [ V_1^⊤ V_1'  0_{γ̄×γ} ; V_2^⊤ V_1'  V_2^⊤ V_2' ] S_S V_S^⊤ = U 0_{n×n} V_S^⊤,

and the lemma follows.

The above lemma shows that the computation of the preconditioning only requires the knowledge of the (uniform) interference null space, Θ_Γ ≜ ⊕V_2. Clearly, T = V U_S^⊤ is a valid preconditioning, as with this choice Γ T I_S is a zero matrix; but this choice is more restrictive and not necessary.

Information alignment: Lemma 2 further sheds light on the notion of information alignment, i.e., the desired information sent by the transmitter can be projected and aligned in such a way that it is not distorted by the interference. Not only does the information remain unharmed, it can be recovered at the receiver since the preconditioning, T, is invertible.
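A constructive sketch of the simple valid choice T = V U_S^⊤ noted above; the dimensions and random subspaces are assumptions made for illustration.

```python
import numpy as np

# Lemma 2, numerically: given Γ of rank γ̄ = n − γ and a rank-γ projection
# I_S, the choice T = V U_S^T (with U_S arranged so its zero singular
# values come first) gives Γ T I_S = 0 with T invertible.
rng = np.random.default_rng(4)
n, gamma = 5, 3                     # null space of Γ has dimension γ = 3
Gamma = rng.standard_normal((n, n - gamma)) @ rng.standard_normal((n - gamma, n))

# Any rank-γ orthogonal projection I_S works; build one from a random basis.
Q, _ = np.linalg.qr(rng.standard_normal((n, gamma)))
I_S = Q @ Q.T

U, s, Vt = np.linalg.svd(Gamma)     # Γ = U S V^T, singular values descending
V = Vt.T
U_S, s_S, _ = np.linalg.svd(I_S)
U_S = U_S[:, np.argsort(s_S)]       # zero singular values of I_S first

T = V @ U_S.T                       # the valid preconditioner T = V U_S^T

assert np.allclose(Gamma @ T @ I_S, 0, atol=1e-10)   # alignment: Γ T I_S = 0
assert abs(np.linalg.det(T)) > 1e-8                  # T is invertible
```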
The following theorem precisely establishes the notion of information alignment with the help of Lemmas 1 and 2.

Theorem 1 (Uniform Interference). Let Θ_Γ denote the null space of Γ and let γ = dim(Θ_Γ). In the presence of uniform interference, the protocol in Eq. (14) recovers the average in a γ-dimensional subspace, S, of R^n, via an information alignment procedure based on the preconditioning.

Proof: Without loss of generality, we assume that S = ⊕A, where ⊕A denotes the range space (column span) of some matrix, A ∈ R^{n×n}, such that dim(⊕A) = γ. Define I_S = AA†, where I_S is the orthogonal projection that projects any arbitrary vector in R^n on S. Define the projected (on S) and transformed initial conditions: x̂^i_0 ≜ T I_S x^i_0, ∀ i ∈ V, where T is the invertible preconditioning given in Lemma 2. From Lemma 2, we have

Γ x̂^i_0 = Γ T I_S x^i_0 = 0_n, ∀ i ∈ V,  (17)

i.e., the alignment makes the initial conditions invisible to the interference. From Lemma 1, Eq. (14) reduces to x̂^i_{k+1} = Σ_{j∈N_i} w_ij x̂^j_k when the initial conditions are x̂^i_0, ∀ i ∈ V, which converges to the average of the transformed and projected initial conditions, x̂^i_0's, under the standard average-consensus conditions on G and W. Finally, the average in S is recovered by

x̃^i_∞ = T^{-1} x̂^i_∞ = (1/N) Σ_{j=1}^N T^{-1} x̂^j_0 = (1/N) Σ_{j=1}^N I_S x^j_0, ∀ i ∈ V,

and the theorem follows.

The above theorem shows that in the presence of uniform interference, a careful information alignment results in obtaining the average of the data (initial conditions) projected onto any arbitrary γ-dimensional subspace, S, of R^n. We note that a completely distributed application of Theorem 1 requires only that each agent know the null space, Θ_Γ, of the (uniform) interference, recall Lemma 2; and it is thus completely local.
In addition, all of the agents are required to agree on the desired signal subspace, S, where the data is to be projected.

B. Illustration of Theorem 1
In essence, Theorem 1 can be summarized in the following steps, illustrated with the help of Fig. 2:

(i) Project the data, in R^n, on a γ-dimensional subspace, S, via the projection matrix, I_S. In Fig. 2 (a), the data (initial conditions) lies arbitrarily in R^3 and is projected on a γ = 2-dimensional subspace, S, in Fig. 2 (b). Interference is given by a rank-1 matrix, Γ; the interference subspace is shown by the black line;

(ii) Align the projected subspace, S, on the null space, Θ_Γ, of the interference, Γ, via the preconditioning, T. In Fig. 2 (c), the projected subspace, S, is aligned to the null space, Θ_Γ, of the interference via preconditioning with T. Note that after the alignment, the data is orthogonal to the interference subspace (black line);

(iii) Consensus is implemented now on the null space of the interference, see Fig. 2 (d);

(iv) Recover the average in S via T^{-1}. Finally, the average in the null space, Θ_Γ, is translated back to the signal subspace, S, via T^{-1}. We also show the true average in R^3 by the '⋆', see Fig. 2 (e).

From Theorem 1, when Γ is full-rank, i.e., γ = 0, the iterations converge to a zero-dimensional subspace and are not meaningful. However, if the interference is low-rank, consensus under uniform interference may still remain meaningful. In fact, we can establish the following immediate corollaries.
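Steps (i)-(iv) can be run end-to-end on a small instance; the ring graph, the weights W, the interference graph B, the gain Γ, and the subspace S below are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# End-to-end sketch of steps (i)-(iv): project onto S, align with T onto
# null(Γ), run the interfered iteration (14), and undo T; every agent ends
# at the average of the projected initial conditions.
rng = np.random.default_rng(5)
N, n = 4, 3
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])               # doubly stochastic ring
B = rng.integers(0, 2, (N, N)).astype(float)          # arbitrary interference graph
Gamma = np.outer([1.0, 1.0, 1.0], [1.0, 0.0, -1.0])   # uniform gain, rank 1

# (i) a γ = 2 dimensional signal subspace S and its projection I_S.
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))
I_S = Q @ Q.T

# (ii) preconditioner T = V U_S^T from Lemma 2.
V = np.linalg.svd(Gamma)[2].T
U_S, s_S, _ = np.linalg.svd(I_S)
T = V @ U_S[:, np.argsort(s_S)].T                     # zero singular values first
assert np.allclose(Gamma @ T @ I_S, 0, atol=1e-10)

# (iii) consensus with interference, Eq. (14), on the aligned states.
x0 = rng.standard_normal((N, n))
z = (T @ I_S @ x0.T).T.reshape(-1)                    # stacked aligned states
M = np.kron(W, np.eye(n)) + np.kron(B, Gamma)
for _ in range(200):
    z = M @ z

# (iv) undo the alignment: every agent holds the average projected on S.
recovered = (np.linalg.inv(T) @ z.reshape(N, n).T).T
target = np.tile(I_S @ x0.mean(axis=0), (N, 1))
assert np.allclose(recovered, target, atol=1e-6)
```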
Fig. 2. Consensus under uniform interference: (a) Signal space, R^3, data shown as squares and the average as '⋆'; (b) Projected signal subspace, S, shown as circles and the average as '♦'; (c) Alignment on the null space of the interference, T I_S x^i_0; (d) Consensus in the null space of the interference, x̂^i_k, average shown as a large filled circle; and, (e) Translation back to the signal subspace, T^{-1} x̂^i_∞.

Corollary 1 (Perfect Consensus). Let x^i_0 ∈ R^n be such that dim(⊕_i x^i_0) ≤ dim(Θ_Γ). Then consensus under uniform interference, Eq. (14), recovers the true average of the initial conditions, x^i_0.

Corollary 2 (Principal/Selective Consensus). Let the initial conditions, x^i_0, belong to the range space, ⊕A, of some matrix, A ∈ R^{n×n}. Then consensus under uniform interference, Eq. (14), recovers the average in a γ = dim(Θ_Γ) dimensional subspace that can be chosen along any γ singular values of A.

The proofs of the above two corollaries immediately follow from Theorem 1. In fact, the protocol, Eq. (14), can be tailored towards the γ largest singular values (principal consensus), or towards any arbitrary γ singular values (selective consensus). The former is applicable to the cases when the data (initial conditions) lies primarily along a few singular values, while the latter is applicable to the cases when the initial conditions are known to have meaningful components along some singular values. We now show a few examples of this approach.

Example 1.
Consider the initial conditions, x^i_0, ∀ i, to lie in the range space, ⊕A, with the following:

A = [ 1 1 ; 1 1 ],  I_S = (1/2)[ 1 1 ; 1 1 ],  U_S = [ −1/√2  −1/√2 ; 1/√2  −1/√2 ].  (18)

Clearly, dim(⊕A) = 1. Consider any rank-1 interference, Γ:

Γ = α [ 1 1 ; 1 1 ],  Θ_Γ = β [ 1 ; −1 ],  α, β ∈ R.

It can be easily verified that originally the data subspace, ⊕A, is aligned with the interference subspace, ⊕Γ, and the standard consensus operation is not applicable, as no agent knows from which agents and on what links this interference is being incurred (recall Assumption (a) in Section III). In other words, each agent i, implementing Eq. (9), cannot ensure that Σ_{j∈N_i} w_ij + Σ_{j∈N_i} w_ij Σ_{m∈V} a^m_ij = 1 for the above iterations to remain meaningful and convergent.

Following Theorem 1, we choose T = V U_S^⊤, which can be verified to be a diagonal matrix with −1 and 1 on the diagonal, resulting in Γ T I_S = 0_{2×2}. The effect of the preconditioning, T, is to move the entire 1-dimensional signal subspace into the null space of the interference. Subsequently,

x̂^i_{k+1} = Σ_{j∈N_i} w_ij x̂^j_k + Σ_{m∈V} b^m_i Γ x̂^m_k = Σ_{j∈N_i} w_ij x̂^j_k + 0_n,

when x̂^i_0 = T I_S x^i_0 = T x^i_0, and the true average is recovered via T^{-1} (see Corollary 1).

C. A Conservative Generalization

In Section IV-A, we assumed that the overall interference structure, recall Fig. 1, is such that the interference gains are uniform, i.e., Γ^m_ij = Γ. We now provide a conservative generalization of Theorem 1 to the case when the interferences do not have a uniform structure.
Theorem 2.
Define Γ ∈ R^{n×n} to be the network interference matrix such that

⊕_{i,j,m} Γ^m_ij ⊆ ⊕Γ,  i, j, m ∈ V.  (19)

Let Θ_Γ be the null space of Γ with γ = dim(Θ_Γ). The protocol in Eq. (11) recovers the average in a γ-dimensional subspace, S, of R^n, with an appropriate alignment.

The proof follows directly from Lemmas 1, 2, and Theorem 1. Following the earlier discussion, we choose a global preconditioning, T ∈ R^{n×n}, based on the null space, Θ_Γ, of the network interference, Γ. The solution described by Theorem 2 requires each interference to belong to some subspace of the network interference, ⊕Γ, and each agent to have the knowledge of this network interference. However, this global knowledge is not why the approach in Theorem 2 is conservative, as we explain below.

Consider ⊕_{i,j,m} Γ^m_ij ⊆ R^n to be such that dim(⊕Γ^m_ij) = 1, for each i, j, m ∈ V. In other words, each interference block in Fig. 1 is a one-dimensional line in R^n. Theorem 2 assumes a network interference matrix, Γ, such that its range space, ⊕Γ, includes every local interference subspace, ⊕Γ^m_ij. When each local interference subspace, ⊕Γ^m_ij, is one-dimensional, we can easily have dim(⊕_{i,j,m} Γ^m_ij) = n, subsequently requiring dim(⊕Γ) = n. This happens when the local interference subspaces are not aligned perfectly. Theorem 1 covers the very special scenario in which all of the local interference subspaces are exactly the same (perfectly aligned). Extending it to Theorem 2, however, shows that when the local interferences are misaligned, ⊕Γ may have dimension n, and consensus is only ensured on a zero-dimensional subspace, i.e., with I_S = 0_{n×n}.

This limitation of Theorem 2 invokes a significant question: When all of the local interferences are misaligned such that their collection spans the entire R^n, can consensus recover anything meaningful? Is it true that Theorem 2 is the only candidate solution?
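The conservatism is easy to see numerically; the three axis-aligned rank-1 local gains below are an assumed example.

```python
import numpy as np

# Three misaligned rank-1 local interference subspaces in R^3: any network
# matrix Γ whose range contains all of them must be full rank, so
# γ = dim(Θ_Γ) = 0 and Theorem 2 only yields the trivial I_S = 0.
local_gains = [np.outer(e, e) for e in np.eye(3)]   # Γ^m_ij along e1, e2, e3

Gamma_net = sum(local_gains)         # smallest Γ with ⊕Γ containing every ⊕Γ^m_ij
rank = np.linalg.matrix_rank(Gamma_net)

assert all(np.linalg.matrix_rank(G) == 1 for G in local_gains)
assert rank == 3                     # ⊕Γ = R^3
assert 3 - rank == 0                 # γ = dim(Θ_Γ) = 0: nothing is recoverable
```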
In the next sections, we show that there are indeed distributed and local protocols that can recover meaningful information. To proceed, we add another assumption, (c), to Assumptions (a) and (b) in Section III:

(c) The interference matrices, Γ^m_ij, are independent of j.

Note that in our interference model, any agent m ∈ V can interfere with the j → i communication; from Assumption (a), these interferers are unknown to either agent j or i. Assumption (c) is equivalent to saying that this interference is only a function of the interferer, m ∈ V, or the receiver, i ∈ V, and is independent of the communication link, j → i.
We consider the design and analysis in the following cases:
Uniform Outgoing Interference: Γ^m_i = Γ_m, ∀ i, m ∈ V. In this case, each agent, m ∈ V, interferes with every other agent via the same interference matrix, Γ_m, see Fig. 3 (top). This case is discussed in Section V;

Uniform Incoming Interference: Γ^m_i = Γ_i, ∀ i, m ∈ V. In this case, each agent i incurs the same interference, Γ_i, over all the interferers, m ∈ V, see Fig. 3 (bottom). This case is discussed in Section VI.

Fig. 3. (Top) Uniform Outgoing. (Bottom) Uniform Incoming. The blocks, T_i's and R_i's, will become clear from Sections V and VI.

V. UNIFORM OUTGOING INTERFERENCE
This section presents results for uniform outgoing interference, i.e., each agent, m ∈ V, interferes with every other agent in the same way. Recall that agent j wishes to transmit x_j to agent i in the presence of interference. When this interference depends only on the interferer, agent i receives

x^j_k + Σ_{m∈V} a^m_ij Γ_m x^m_k,  (20)

from agent j at time k. We modify the transmission as T_m x̃^m_k, for all m ∈ V, for some auxiliary state variable, x̃^i_k ∈ R^n, to be explicitly defined shortly; agent i thus receives

T_j x̃^j_k + Σ_{m∈V} a^m_ij Γ_m T_m x̃^m_k,  (21)

from agent j at time k. Consider the following protocol:

x̃^i_{k+1} = Σ_{j∈N_i} W_ij ( T_j x̃^j_k + Σ_{m∈V} a^m_ij Γ_m T_m x̃^m_k ),  (22)

where W_ij ∈ R^{n×n} is now a matrix that agent i associates with agent j; recall that earlier W_ij = w_ij I_n. We get

x̃^i_{k+1} = Σ_{j∈N_i} W_ij T_j x̃^j_k + Σ_{m∈V} B_im Γ_m T_m x̃^m_k,  (23)

where B_im = Σ_{j∈N_i} W_ij a^m_ij. We have the following result.

Lemma 3.
For some non-negative integer γ ≤ n, let each outgoing interference matrix, Γ_i, have rank γ̄ ≜ n − γ. Let I_S ∈ R^{n×n} be the projection matrix that projects R^n on S, where dim(S) = γ. Then, there exist T_i at each i ∈ V, and W_ij's for all (i, j) ∈ E, such that Eq. (23) becomes x̃^i_{k+1} = Σ_{j∈N_i} w_ij x̃^j_k, at each i ∈ V, when x̃^i_0 ∈ S.

Proof: Without loss of generality, we assume that S = ⊕A, where ⊕A denotes the range space of some matrix, A ∈ R^{n×n}, such that dim(⊕A) = γ. Define I_S = AA†, where I_S is the orthogonal projection that projects any arbitrary vector in R^n on S. Define x̃^i_0 to be the projected initial conditions, i.e., x̃^i_0 ≜ I_S x^i_0. Let T_i be the locally designed, invertible preconditioning, obtained at each i ∈ V from the null space, Θ_{Γ_i}, of its outgoing interference matrix, Γ_i; see Lemma 2. Clearly, following Lemma 2, we have Γ_i T_i x̃^i_0 = 0_n, ∀ i ∈ V. Choose

W_ij = w_ij T_j^{-1}.  (24)

From Eq. (23), we have

x̃^i_{k+1} = Σ_{j∈N_i} w_ij x̃^j_k + Σ_{m∈V} B_im Γ_m T_m x̃^m_k.

We claim that when x̃^i_0 ∈ S, ∀ i ∈ V, then x̃^i_k ∈ S, ∀ i ∈ V, k, proven below by induction. Consider k = 0; then

x̃^i_1 = Σ_{j∈N_i} w_ij x̃^j_0 + Σ_{m∈V} B_im Γ_m T_m x̃^m_0 = Σ_{j∈N_i} w_ij x̃^j_0,

which is a linear combination of vectors in S and thus lies in S. Assume that x̃^i_k ∈ S, ∀ i ∈ V, for some k, leading to Γ_i T_i x̃^i_k = 0_n. Then, for k + 1:

x̃^i_{k+1} = Σ_{j∈N_i} w_ij x̃^j_k + Σ_{m∈V} B_im Γ_m T_m x̃^m_k = Σ_{j∈N_i} w_ij x̃^j_k,

which is a linear combination of vectors in S.

The main result on uniform outgoing interference is as follows.

Theorem 3.
Let Θ_{Γ_i} denote the null space of Γ_i, and let γ ≜ min_{i∈V} dim(Θ_{Γ_i}). In the presence of uniform outgoing interference, Eq. (22) recovers the average in a γ-dimensional subspace, S, of R^n, when we choose T_i according to Lemma 2, and W_ij = w_ij T_j^{-1}, at each i ∈ V, j ∈ N_i.

The proof follows from Lemma 3. In other words, the consensus protocol in the presence of uniform outgoing interference, Eq. (22), converges to

x̃^i_∞ = (1/N) Σ_{j=1}^N x̃^j_0 = (1/N) Σ_{j=1}^N I_S x^j_0,  (25)

for any x^i_0 ∈ R^n, ∀ i ∈ V. We note that each agent, i ∈ V, is only required to know the null space of its outgoing interference, Γ_i, to construct an appropriate preconditioning, T_i. In addition, each agent, i ∈ V, is required to obtain the local preconditioners, T_j's, only from its neighbors, j ∈ N_i; thus, this step is also completely local.

Fig. 4. Consensus under uniform outgoing interference: (a) Signal space, S ⊆ R^3, where dim(S) = 2; (b) One-dimensional range spaces, ⊕Γ_i, of the Γ_i's; the null spaces of each are γ = 2-dimensional, shown as planes; (c) Agent transmissions aligned in the corresponding null spaces over time, k; (d) Consensus in the signal subspace, S, after appropriate translations, at each i ∈ V, back to the signal subspace by T_j^{-1}, with j ∈ N_i.

The protocol described in Theorem 3 can be cast in the purview of Fig. 3 (top). Notice that a transmission from any agent, i ∈ V, passes through agent i's dedicated preconditioning matrix, T_i. The network (both non-interference and interference) sees only T_i x̃^i_k at each k. Since the interference is a function of the transmitter (uniform outgoing), all of the agents ensure that a particular signal subspace, S, is not corrupted by the interference channel. The significance here is that even when the interferences are misaligned such that ⊕_{i∈V} Γ_i = R^n, the protocol in Eq. (22) recovers the average in a γ = min_{i∈V} dim(Θ_{Γ_i}) dimensional signal subspace. On the other hand, the null space of the entire collection, ⊕_{i∈V} Γ_i, may very well be 0-dimensional. For example, if each Γ_i is rank-1 such that each of the corresponding one-dimensional subspaces is misaligned, Eq. (22) recovers the average in an (n − 1)-dimensional signal subspace, whereas Theorem 2 does not recover anything other than 0_n.

A. Illustration of Theorem 3
Let the initial conditions belong to a 2-dimensional subspace, S, of R^3 and consider N = 10 agents with random initial conditions, shown as blue squares in Fig. 4 (a). The uniform outgoing interference at each agent is chosen as one of three one-dimensional subspaces such that each interference appears at some agent in the network, see Fig. 4 (b). Clearly, the interferences are misaligned and dim(⊕_i Γ_i) = n = 3. Hence, the protocol following Theorem 2 requires the signal subspace to be n − dim(⊕_i Γ_i) = 0-dimensional. However, when the agent transmissions are preconditioned using the T_i's, each agent projects its transmission onto the null space of its interference. Each receiver, i ∈ V, receives misaligned data, T_j x^j, from each of its neighbors, j ∈ N_i, see Fig. 4 (c). Since each T_j x^j is a function of the corresponding neighbor, j, the data can be translated back to S via T_j^{-1}, which is incorporated in the consensus weights, W_ij = w_ij T_j^{-1}.

VI. UNIFORM INCOMING INTERFERENCE

In this section, we consider the case of uniform incoming interference, i.e., each agent i ∈ V incurs the same interference, Γ_i, over all of the interferers, m ∈ V. This scenario is shown in Fig. 3 (bottom). We note that
Theorem 2 is applicable here but results in a conservative approach, as elaborated earlier. We note that this case is completely different from the uniform outgoing case (of the previous section), since preconditioning (alone) may not work, as we explain below.

When an agent, m ∈ V, employs preconditioning, it may not be able to precondition to account for the interference, Γ_i, experienced at each receiver, i, with which m may interfere. In the purview of Fig. 3 (bottom), if agent m ∈ V preconditions using T_m to cancel the interference, Γ_i, experienced by agent i, the same preconditioning, T_m, is not helpful to agent l. For example, let agent m choose T_m = V_i U_S^⊤ (a valid choice following Lemma 2); then, as discussed earlier, Γ_i V_i U_S^⊤ I_S = 0_{n×n} and m's interference is not seen by agent i. However, this preconditioning appears as Γ_l V_i U_S^⊤ I_S at agent l, which is 0_{n×n} only when V_l^⊤ V_i = I_n. This is not true in general.

We now explicitly address the uniform incoming interference scenario. In this case, Eq. (11) takes the following form:

  x_{k+1}^i = Σ_{j∈N_i} W_ij ( x_k^j + Γ_i Σ_{m∈V} a_{ij}^m x_k^m ),   (26)

k ≥ 0, x_0^i ∈ R^n, where, as in Section V, we use a matrix, W_ij ∈ R^{n×n}, to retain some design flexibility. The only possible way to cancel the unwanted interference now is via what can be referred to as post-conditioning. Each agent, i ∈ V, chooses a post-conditioner, R_i ∈ R^{n×n}. As before, we assume I_S = U_S S_S V_S^⊤ to be the projection matrix for some subspace, S ⊆ R^n, and modify the transmission to S_S x̂_k^m, for some auxiliary state variable, x̂_k^i ∈ R^n, to be explicitly defined shortly. The modified protocol is

  x̂_{k+1}^i = Σ_{j∈N_i} W_ij R_i ( S_S x̂_k^j + Γ_i Σ_{m∈V} a_{ij}^m S_S x̂_k^m ).   (27)

The goal is to design an R_i such that R_i Γ_i = 0_{n×n}.
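The failure of preconditioning alone, i.e., that T_m = V_i U_S^⊤ cancels Γ_i but not Γ_l, can be spot-checked numerically; a sketch with n = 3 and rank-1 interferences (the setup and names here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
# Projector onto a 2-dim signal subspace S, columns ordered null-space-first.
U_S = np.linalg.qr(rng.standard_normal((n, n)))[0]
S_S = np.diag([0.0, 1.0, 1.0])
I_S = U_S @ S_S @ U_S.T

def rank1_interference():
    """Random rank-1 Gamma = u v^T and an orthogonal V_i whose first column is v."""
    V = np.linalg.qr(rng.standard_normal((n, n)))[0]
    u = rng.standard_normal(n)
    return np.outer(u, V[:, 0]), V

Gam_i, V_i = rank1_interference()          # interference seen at receiver i
Gam_l, V_l = rank1_interference()          # interference seen at receiver l

T_m = V_i @ U_S.T                          # agent m preconditions for receiver i
assert np.allclose(Gam_i @ T_m @ I_S, 0)   # canceled at receiver i ...
assert not np.allclose(Gam_l @ T_m @ I_S, 0)  # ... but not at receiver l
```

The same preconditioner cannot simultaneously sit in the null spaces of two generic, misaligned interferences, which is what motivates post-conditioning at the receiver.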
Following the earlier approaches, we assume that rank(Γ_i) = γ̄, ∀i ∈ V, and rank(I_S) = γ, such that γ + γ̄ = n, with SVDs, Γ_i = U_i S_i V_i^⊤ and I_S = U_S S_S V_S^⊤, where the singular-value matrices are arranged as

  S_i = [ S̄_i  0 ; 0  0_{γ×γ} ],   S_S = [ 0_{γ̄×γ̄}  0 ; 0  I_γ ],   (28)

with S̄_i ∈ R^{γ̄×γ̄} collecting the nonzero singular values of Γ_i. The next lemma characterizes the post-conditioner, R_i.

Lemma 4.
Let Γ_i = U_i S_i V_i^⊤ and S_S have the structure of Eq. (28). Given the null space of Γ_i^⊤, there exists a rank-γ post-conditioner, R_i, such that R_i Γ_i = 0_{n×n}.

Proof: Partition U_i as [U_i^1 | U_i^2], where U_i^1 ∈ R^{n×γ̄} and U_i^2 ∈ R^{n×γ}. Clearly, U_i^2 spans the null space of Γ_i^⊤. Define

  R_i = S_S [ U_i'^1 | U_i'^2 ]^⊤,   (29)

where U_i'^2 is such that ⊕U_i'^2 = ⊕U_i^2, and U_i'^1 is arbitrary. By definition, we have U_i^{2⊤} U_i^1 = 0_{γ×γ̄}; hence, by construction, U_i'^{2⊤} U_i^1 = 0_{γ×γ̄}. It can be verified that the post-conditioning results in R_i Γ_i = 0_{n×n}: the only nonzero rows of S_S [U_i'^1 | U_i'^2]^⊤ are the last γ rows, U_i'^{2⊤}, while U_i S_i = [U_i^1 S̄_i | 0_{n×γ}], so R_i Γ_i involves only U_i'^{2⊤} U_i^1 S̄_i = 0, and the lemma follows. Note that R_i = S_S U_i^⊤ is a valid choice but is not necessary.

With the help of Lemma 4, Eq. (27) is now given by

  x̂_{k+1}^i = Σ_{j∈N_i} W_ij S_S [ U_i'^1 | U_i'^2 ]^⊤ S_S x̂_k^j.   (30)

Recall that U_i'^2 is an n×γ matrix whose column span is the same as the column span of U_i^2, and the column span of U_i^2 is the null space of Γ_i^⊤. We now denote the lower γ×γ sub-matrix of U_i'^2 by Û_i. In order to simplify the above iterations, we note that

  S_S [ U_i'^1 | U_i'^2 ]^⊤ S_S = [ 0_{γ̄×γ̄}  0 ; 0  Û_i^⊤ ],   (31)

and dim(U_i'^2) = dim(U_i^2) = n − γ̄ = γ. It is straightforward to show that Û_i^⊤ is always invertible. Based on this discussion, the following lemma establishes the convergence of Eq. (27).

Lemma 5.
Let Γ_i = U_i S_i V_i^⊤, ∀i ∈ V, and some projection matrix, I_S = U_S S_S V_S^⊤, have ranks γ̄ and γ ≜ n − γ̄, respectively (0 ≤ γ̄ ≤ n), such that S_i and S_S are arranged as in Eq. (28). When R_i is chosen according to Lemma 4, and, for each i ∈ V, W_ij is chosen as

  W_ij = w_ij [ 0_{γ̄×γ̄}  0 ; 0  (Û_i^⊤)^{-1} ],   (32)

the protocol in Eq. (27) recovers the average of the last γ components of the initial conditions, x̂_0^i.

Proof: We note that under the given choice of the R_i's, the interference term is 0_n, and Eq. (27) reduces to Eq. (30). Now we use Eqs. (31) and (32) in Eq. (30) to obtain

  x̂_{k+1}^i = Σ_{j∈N_i} W_ij S_S [ U_i'^1 | U_i'^2 ]^⊤ S_S x̂_k^j = Σ_{j∈N_i} w_ij [ 0_{γ̄×γ̄}  0 ; 0  I_γ ] x̂_k^j,

which in the limit as k → ∞ converges to

  x̂_∞^i = (1/N) Σ_{j=1}^N [ 0_{γ̄×γ̄}  0 ; 0  I_γ ] x̂_0^j,  ∀i ∈ V.   (33)

That Û_i^⊤ is invertible is always true because it is a principal sub-matrix of an invertible matrix, U_i^⊤.

The following is the main result of this section.

Theorem 4.
Let the Γ_i's, R_i's, and W_ij's be chosen according to Lemma 5. The protocol in Eq. (27) under uniform incoming interference recovers the average in a γ-dimensional subspace, S, of R^n.
Proof:
Without loss of generality, assume that S has a projection matrix, I_S, with SVD as defined above. Let x̂_0^i = V_S^⊤ x_0^i and define x̃_k^i = U_S x̂_k^i, ∀i ∈ V. Then, from Lemma 5,

  x̃_∞^i = U_S (1/N) Σ_{j=1}^N [ 0_{γ̄×γ̄}  0 ; 0  I_γ ] V_S^⊤ x_0^j = (1/N) Σ_{j=1}^N U_S S_S V_S^⊤ x_0^j,  ∀i ∈ V,

and the theorem follows.

Some remarks are in order to explain the mechanics of Theorem 4. Let I_S = U_S S_S V_S^⊤ with V_S = [V_S^1 | V_S^2] and U_S = [U_S^1 | U_S^2], where V_S^1 spans the null space of I_S.

(i) When any agent i ∈ V receives S_S x̂^m as an interference, it is canceled via the post-conditioning by R_i, regardless of the transmission, S_S x̂^m:

  R_i Γ_i S_S x̂^m = S_S S_i V_i^⊤ S_S x̂^m = 0_n,

because of the structure of S_S and S_i from Eq. (28).

(ii) It is more interesting to observe the effect on the intended transmission, j → i, after the post-conditioning and multiplication by W_ij. It is helpful to note that S_S = S_S^†, and to view the transmission as S_S^† x̂^j instead of S_S x̂^j:

  W_ij R_i S_S^† x̂^j = W_ij · [S_S U_i^⊤]_{Rx} · [S_S^† x̂^j]_{Tx}.

The operation, S_S U_i^⊤, by the receiver, Rx, is vital to cancel the interference, as shown in the previous step. However, this measure by the receiver also 'distorts' the intended transmission: what agent i receives is now multiplied by a low-rank matrix, S_S^†, in general. Consider for a moment that agent j were to send x̂^j and agent i obtains S_S U_i^⊤ x̂^j after the interference-canceling operation. How can agent i choose an appropriate W_ij to undo this post-conditioning? Such a procedure is not possible except in trivial scenarios, e.g., when the interference is a diagonal matrix and U_i = I_n. However, the transmitter may preemptively undo the distortion eventually incurred by the receiver's interference-canceling operation. This is precisely what is achieved by sending S_S^† x̂^j.

(iii) As we discussed, the preemptive measure of sending S_S^† x̂^j by the transmitter is vital so that the distortion bound to be added at the receiver is reversed. This reorientation, however, can be harmful: e.g., x̂^j may only contain meaningful (non-zero) information in its first γ̄ components, and the multiplication by S_S destroys this information. To avoid this issue, we choose the initial condition at each agent as x̂_0^i = V_S^⊤ x_0^i; the first transmission at any agent i is thus

  S_S x̂_0^i = S_S V_S^⊤ x_0^i,

which transforms any arbitrary initial condition orthogonal to the null space of the desired signal subspace, S. Since the signal subspace, S, is γ-dimensional, retaining only the last γ components after the transformation by V_S^⊤ suffices.
Fig. 5. Uniform incoming interference: (a) Signal subspace, S ⊆ R^3, with dim(S) = 2; the initial conditions are shown as blue squares and the true average is shown as a white diamond; (b) One-dimensional interference null-spaces at each agent, i ∈ V; (c) Auxiliary state variables, x̂_0^i = V_S^⊤ x_0^i, shown as red circles; (d) Consensus iterates in the auxiliary states and the average of the auxiliary initial conditions; and, (e) Recovery via x̃_k^i = U_S x̂_k^i.

(iv) We choose W_ij according to Eq. (32) and obtain

  x̂_1^i = Σ_{j∈N_i} W_ij R_i S_S x̂_0^j = S_S V_S^⊤ Σ_{j∈N_i} w_ij x_0^j = S_S V_S^⊤ x_1^i,  ∀i ∈ V,

where the x_k^i are the interference-free consensus iterates. Now let us look at x̂_2^i, ignoring the interference terms as they are 0_n regardless of the transmission:

  x̂_2^i = Σ_{j∈N_i} W_ij R_i S_S S_S V_S^⊤ x_1^j = S_S V_S^⊤ x_2^i,

by the same procedure that we followed to obtain x̂_1^i. In fact, the process continues and we get x̂_{k+1}^i = S_S V_S^⊤ x_{k+1}^i, or x̂_∞^i = S_S V_S^⊤ x_∞^i, and the average in S is obtained by

  x̃_∞^i = U_S x̂_∞^i = U_S S_S V_S^⊤ x_∞^i = I_S x_∞^i.

A. Illustration of Theorem 4
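Alongside the graphical illustration, here is a minimal numerical sketch of the protocol in Eq. (27); the assumptions made for concreteness are a complete graph with w_ij = 1/N, all interference gains a_{ij}^m = 1, and the valid (though not necessary) choice R_i = S_S U_i^⊤ from Lemma 4.

```python
import numpy as np

rng = np.random.default_rng(3)
n, gbar, N, K = 3, 1, 10, 25    # ambient dim, interference rank, agents, iterations
g = n - gbar

# Projector I_S = U_S S_S U_S^T (V_S = U_S for a symmetric projector),
# singular values ordered as in Eq. (28).
U_S = np.linalg.qr(rng.standard_normal((n, n)))[0]
S_S = np.diag([0.0] * gbar + [1.0] * g)
I_S = U_S @ S_S @ U_S.T

Gam, R, W = [], [], []
for _ in range(N):
    G = rng.standard_normal((n, gbar)) @ rng.standard_normal((gbar, n))
    U = np.linalg.svd(G)[0]     # descending singular values match Eq. (28)
    Gam.append(G)
    R.append(S_S @ U.T)         # valid post-conditioner R_i = S_S U_i^T (Lemma 4)
    Uhat = U[gbar:, gbar:]      # lower g x g block of U_i (invertible a.s.)
    D = np.zeros((n, n))
    D[gbar:, gbar:] = np.linalg.inv(Uhat.T)
    W.append(D / N)             # W_ij = w_ij diag(0, (Uhat_i^T)^{-1}), w_ij = 1/N

x0 = [rng.standard_normal(n) for _ in range(N)]   # arbitrary initial conditions
xh = [U_S.T @ xi for xi in x0]                    # xhat_0^i = V_S^T x_0^i
for _ in range(K):
    tx = [S_S @ xj for xj in xh]                  # transmissions S_S xhat^j
    xh = [sum(W[i] @ R[i] @ (tx[j] + Gam[i] @ sum(tx))  # Eq. (27), all a = 1
              for j in range(N)) for i in range(N)]

xt = [U_S @ xi for xi in xh]                      # recovery xtilde = U_S xhat
target = I_S @ np.mean(x0, axis=0)                # I_S-average of the x_0^i
assert all(np.allclose(v, target) for v in xt)
```

Post-conditioning annihilates the incoming interference at each receiver, and the remaining distortion is undone by the Û_i^⊤ inverse inside W_ij, so the recovered states match the I_S-average.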
We now provide a graphical illustration of Theorem 4. The network comprises N = 10 agents, each with a randomly chosen initial condition on a 2-dimensional subspace, S, of R^3, shown in Fig. 5 (a). The incoming interference is chosen randomly as a one-dimensional subspace at each agent, shown as grey lines in Fig. 5 (b). It can be easily verified that the span of all of the interferences, ⊕_{i∈V} Γ_i, is the entire R^3. The initial conditions are now transformed with V_S^⊤ so that the transmission, S_S x̂_k^i, does not destroy the signal subspace, S. This transformation is shown in Fig. 5 (c). Consensus iterations are implemented on these transformed states, x̂_k^i, Fig. 5 (d), and finally the iterates, x̃_k^i, in the signal subspace, S, are obtained via pre-multiplication by U_S.

VII. DISCUSSION
We now recapitulate the development in this paper.
Assumptions:
The exposition is based on three assumptions: (a) and (b) in Section III, and (c) in Section IV-C. Assumption (a) ensures that the setup remains practically relevant and makes the averaging problem non-trivial. Assumption (b) is primarily for the sake of simplicity; the strategies described in this paper are applicable to the time-varying case. What is required is that when any incoming (or outgoing) interference subspace changes with time, this change is known to the interferer (or the receiver) so that the appropriate pre- (or post-) conditioning is implemented. Finally, Assumption (c) casts a concrete structure on the proposed interference modeling. In fact, one can easily frame the incoming or outgoing interference as a special case of the general framework; however, noting it explicitly establishes a clear distinction among the different structures.
Conservative Paradigm:
We consider a special case when each of the interference blocks in the network, see Fig. 1, is identical. This rather restrictive approach sheds light on the information-alignment notion that recurs throughout the development, i.e., hide the information in the null space of the interference. When the local interferences, Γ_ij^m, are not identical, we provide a conservative solution that utilizes an interference 'blanket' (one that covers each local interference subspace) to implement the information alignment. However, as we discussed, this interference blanket soon loses relevance, as it may have to be n-dimensional to provide an appropriate cover. When this is the case, the only reliable data hiding is via a zero-dimensional hole (the origin), and no meaningful information is transmitted. This conservative approach is improved upon in the uniform outgoing and uniform incoming interference models.

Uniform Outgoing Interference:
The fundamental concept in the uniform outgoing setting is to hide the desired signal in the null space of the interferences, the Γ_m's. This alignment is possible at each transmitter as the eventual interference is only a function of the transmitter.

Uniform Incoming Interference:
The basic idea here is to hide the desired signal in the null space of the transpose of the incoming interferences, the Γ_i^⊤'s. This alignment is possible at each receiver as the eventual interference is only a function of the receiver. It can be easily verified that the resulting procedure is non-trivial.

Null-spaces: Incoming and outgoing interference comprise the two major results in this paper. It is noteworthy that both of these results only assume knowledge of the corresponding interference null-spaces; the basis vectors of these null spaces can be arbitrary, and knowledge of the interference singular values is not required. Moreover, in a time-varying scenario where the basis vectors of the corresponding null-spaces change such that their span remains the same, no adjustment over time is required.
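This basis-independence is easy to spot-check numerically: two different bases of the same null space, combined with arbitrary U_i'^1 blocks, both yield valid post-conditioners, and the singular values of Γ_i are never used (a sketch under the Lemma 4 construction, with a random rank-γ̄ interference in R^4):

```python
import numpy as np

rng = np.random.default_rng(7)
n, gbar = 4, 2                            # ambient dim, interference rank
g = n - gbar

Gamma = rng.standard_normal((n, gbar)) @ rng.standard_normal((gbar, n))
U2 = np.linalg.svd(Gamma)[0][:, gbar:]    # a basis of the null space of Gamma^T
S_S = np.diag([0.0] * gbar + [1.0] * g)   # Eq. (28) ordering

def post_conditioner(basis):
    """Lemma 4 form R_i = S_S [U1' | U2']^T, with U1' arbitrary."""
    return S_S @ np.hstack([rng.standard_normal((n, gbar)), basis]).T

R1 = post_conditioner(U2)                               # canonical basis
R2 = post_conditioner(U2 @ rng.standard_normal((g, g))) # any other basis of the span
assert np.allclose(R1 @ Gamma, 0) and np.allclose(R2 @ Gamma, 0)
assert np.linalg.matrix_rank(R1) == g                   # rank-gamma, as in Lemma 4
```

Only the span of the null-space basis enters the construction, which is why a time-varying basis with a fixed span requires no adjustment.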
Uniform Link Interference : One may also consider the case when Γ mij = Γ ij , see Eq. (11), i.e., each interferencegain is only a function of the communication link, j → i . Subsequently, when each receiving agent, i ∈ V , knowsthe null space of Γ (cid:62) ij , a protocol similar to the uniform incoming interference can be developed. Performance : To characterize the steady-state error, denoted by e i ∞ at an agent i , define e i ∞ = x i ∞ − I S x i ∞ ,where x i ∞ is the true average, Eq. (10). Clearly, (cid:0) I S x i ∞ (cid:1) (cid:62) e i ∞ = ( x i ∞ ) (cid:62) I (cid:62)S ( I n − I S ) x i ∞ = 0 , ∀ i ∈ V , i.e. the error is orthogonal to the estimate, or the average obtained is the best estimate in S ⊆ R n of the perfectaverage. VIII. C ONCLUSIONS
In this paper, we consider three particular cases of a general interference structure over a network performing distributed (vector) average-consensus. First, we consider the case of uniform interference, when the interference subspace is uniform across all agents. Second, we consider the case when this interference subspace depends only on the interferer (transmitter), referred to as uniform outgoing interference. Third, we consider the case when the interference subspace depends only on the receiver, referred to as uniform incoming interference. For all of these cases, we show that when the nodes are aware of the complementary subspaces (null spaces) of the corresponding interference, consensus is possible in a low-dimensional subspace whose dimension is complementary to that of the largest interference subspace (across all of the agents). For all of these cases, we derive a completely local information-alignment strategy, followed by local consensus iterations, to ensure perfect subspace consensus. We further provide the conditions under which this subspace consensus recovers the exact average. The analytical results are illustrated graphically to describe the setup and the information-alignment scheme.

REFERENCES

[1] K. Sekihara and S. S. Nagarajan,
Adaptive Spatial Filters for Electromagnetic Brain Imaging, ch. 7: Effects of low-rank interference, 2008. [2] D. Gutiérrez, A. Nehorai, and A. Dogandzic,
MEG source estimation in the presence of low-rank interference using cross-spectral metrics, vol. 1, IEEE, 2004. [3] K. Sekihara, S. S. Nagarajan, D. Poeppel, and A. Marantz, “Performance of an MEG adaptive-beamformer source reconstruction technique in the presence of additive low-rank interference,”
Biomedical Engineering, IEEE Transactions on , vol. 51, no. 1, pp. 90–99, 2004.[4] M McCloud and L L Scharf, “Interference identification for detection with application to adaptive beamforming,”
Conference Record ofThirty-Second Asilomar Conference on Signals, Systems and Computers , vol. 2, pp. 1438–1442 vol.2, 1998.[5] A Dogandzic, “Minimum variance beamforming in low-rank interference,” in
Signals, Systems and Computers, 2002. Conference Recordof the Thirty-Sixth Asilomar Conference on . 2002, pp. 1293–1297, IEEE.[6] R Lupas and S Verdu, “Linear multiuser detectors for synchronous code-division multiple-access channels,”
Information Theory, IEEETransactions on , vol. 35, no. 1, pp. 123–136, Jan. 1989.[7] M K Varanasi and A Russ, “Noncoherent decorrelative detection for nonorthogonal multipulse modulation over the multiuser Gaussianchannel,”
Communications, IEEE Transactions on , vol. 46, no. 12, pp. 1675–1684, Dec. 1998.[8] L L Scharf and Benjamin Friedlander, “Matched subspace detectors,”
Signal Processing, IEEE Transactions on , vol. 42, no. 8, pp.2146–2157, Aug. 1994.[9] M McCloud and L L Scharf, “Generalized likelihood detection on multiple access channels,”
Conference Record of the Thirty-FirstAsilomar Conference on Signals, Systems and Computers (Cat. No.97CB36163) , vol. 2, pp. 1033–1037 vol.2, 1997.[10] J Scott Goldstein and Irving S Reed, “Reduced-rank adaptive filtering,”
Signal Processing, IEEE Transactions on , vol. 45, no. 2, pp.492–496, Feb. 1997.[11] Xiaodong Wang and H V Poor, “Blind multiuser detection: a subspace approach,”
Information Theory, IEEE Transactions on , vol. 44,no. 2, pp. 677–690, Mar. 1998.[12] A Dogandzic and Benhong Zhang, “Bayesian Complex Amplitude Estimation and Adaptive Matched Filter Detection in Low-RankInterference,”
Signal Processing, IEEE Transactions on , vol. 55, no. 3, pp. 1176–1182, 2007.[13] Fabian Monsees, Carsten Bockelmann, Mark Petermann, Armin Dekorsy, and Stefan Brueck, “On the Impact of Low-Rank Interferenceon the Post-Equalizer SINR in LTE,”
Communications, IEEE Transactions on , vol. 61, no. 5, pp. 1856–1867, 2013.[14] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbor rules,”
IEEETransactions on Automatic Control , vol. 48, no. 6, pp. 988–1001, Jun. 2003.[15] L. Xiao, S. Boyd, and S. Kim, “Distributed average consensus with least-mean-square deviation,”
Journal of Parallel and DistributedComputing , vol. 67, pp. 33–46, 2005.[16] A. K. Das and M. Mesbahi, “Distributed linear parameter estimation in sensor networks based on Laplacian dynamics consensus algorithm,”in , Reston, VA, Sep. 2006, vol. 2, pp. 440–449.
[17] I. D. Schizas, A. Ribeiro, and G. B. Giannakis, “Consensus in ad hoc WSNs with noisy links - part I: Distributed estimation of deterministic signals,”
IEEE Transactions on Signal Processing , vol. 56, no. 1, pp. 350–364, Jan. 2008.[18] U. A. Khan, S. Kar, and J. M. F. Moura, “Higher dimensional consensus: Learning in large-scale networks,”
IEEE Transactions on SignalProcessing , vol. 58, no. 5, pp. 2836–2849, May 2010.[19] R. Olfati-Saber, “Kalman-consensus filter : Optimality, stability, and performance,” in ,Shanghai, China, Dec. 2009, pp. 7036–7042.[20] C. G. Lopes and A. H. Sayed, “Diffusion least-mean squares over adaptive networks: Formulation and performance analysis,”
IEEETransactions on Signal Processing , vol. 56, no. 7, pp. 3122–3136, Jul. 2008.[21] S. Kar, J. Moura, and H. Poor, “Distributed linear parameter estimation: Asymptotically efficient adaptive strategies,”
SIAM Journal onControl and Optimization , vol. 51, no. 3, pp. 2200–2229, 2013.[22] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,”
Systems and Controls Letters , vol. 53, no. 1, pp. 65–78, Apr. 2004.[23] M. G. Rabbat, R. D. Nowak, and J. A. Bucklew, “Generalized consensus computation in networked systems with erasure links,” in , New York, NY, 2005, pp. 1088–1092.[24] A. Kashyap, T. Basar, and R. Srikant, “Quantized consensus,”
Automatica , vol. 43, pp. 1192–1203, Jul. 2007.[25] T. C. Aysal, M. Coates, and M. Rabbat, “Distributed average consensus using probabilistic quantization,” in
IEEE 14th Workshop onStatistical Signal Processing , Maddison, WI, Aug. 2007, pp. 640–644.[26] S. Kar and J. M. F. Moura, “Distributed consensus algorithms in sensor networks with imperfect communication: Link failures and channelnoise,”
IEEE Transactions on Signal Processing , vol. 57, no. 1, pp. 355–369, 2009.[27] Y. Chen, R. Tron, A. Terzis, and R. Vidal, “Corrective consensus with asymmetric wireless links,” in , Orlando, FL, 2011, pp. 6660–6665.[28] T. C. Aysal and K. E. Barner, “Convergence of consensus models with stochastic disturbances,”
IEEE Transactions on Information Theory ,vol. 56, no. 8, pp. 4101–4113, 2010.[29] B. Nazer, A. G. Dimakis, and M. Gastpar, “Local interference can accelerate gossip algorithms,”
IEEE Journal of Selected Topics inSignal Processing , vol. 5, no. 4, pp. 876–887, 2011.[30] S. A. Jafar, “Interference Alignment: A New Look at Signal Dimensions in a Communication Network,”
Foundations and Trends inCommunications and Information Theory , vol. 7, no. 1, pp. 1–136, 2011.[31] R. A. Horn and C. R. Johnson,
Matrix Analysis, Cambridge University Press, New York, NY, 2013.