On Stability and Convergence of Distributed Filters
Sayed Pouria Talebi, Stefan Werner, Vijay Gupta, Yih-Fang Huang
Abstract—Recent years have borne witness to the proliferation of distributed filtering techniques, where a collection of agents communicating over an ad-hoc network aim to collaboratively estimate and track the state of a system. These techniques form the enabling technology of modern multi-agent systems and have gained great importance in the engineering community. Although most distributed filtering techniques come with a set of stability and convergence criteria, the conditions imposed are found to be unnecessarily restrictive. The paradigm of stability and convergence in distributed filtering is revised in this manuscript. Accordingly, a general distributed filter is constructed and its estimation error dynamics are formulated. The conducted analysis demonstrates that the conditions for achieving stable filtering operations are the same as those required in the centralized filtering setting. Finally, the concepts are demonstrated in a Kalman filtering framework and validated using simulation examples.
Index Terms—Stability of linear systems, observers for linear systems, Kalman filtering, sensor fusion, sensor networks.
I. INTRODUCTION
In recent years, large-scale multi-agent systems that can interact over a network, such as sensor networks, have started to emerge in a wide range of engineering applications, including power system monitoring [1,2], multi-vehicle coordination [3,4], and collaborative information processing and decision making [5]–[7]. The common thread among most of these applications is the requirement for a robust underlying information processing and filtering framework. Although optimal filtering solutions are attainable through the use of a centralized coordinator, these so-called centralized approaches are vulnerable to failure of the coordinator and require complex communication protocols [8]–[10]. Therefore, one of the most fundamental problems in this area has become that of designing distributed filtering techniques, which aim to enable the agents of the network to estimate and track the state of a system, or learn a set of parameters, in a fashion that limits communication to the neighbourhood of each agent and does not require a centralized coordinator.

Seminal works in distributed filtering include [9,11]–[15], which have since created a vibrant research area (see, e.g., [5,16]–[24] and references therein). In general, these distributed filtering techniques are accompanied by stability analysis. However, the assumptions imposed for guaranteeing stable behaviour are, in the authors' opinion, restrictive. For instance,
distributed Kalman filtering techniques require the state vector process to be fully observable to each agent, either via its own observation information or in conjunction with the observation information shared by its neighbours [1,14,21]. More recent contributions in this field achieve less strict observability criteria, which nonetheless come at the cost of other requirements, mainly manifesting themselves as conditions on how information flows throughout the network [19,25,26]. Despite exhibiting less complex behaviour, distributed filters based on gradient descent, e.g., the least mean square (LMS) algorithm [13,27], are subject to similarly restrictive conditions.

In what follows, a generalized distributed filtering formulation is constructed and its behaviour is analysed. The obtained results establish that the stability criteria of the considered distributed filter are, in essence, the same as those of its centralized dual. More importantly, due to the generalized setting of the constructed framework, the concept can be extended to a wide range of filtering applications. Finally, the introduced concepts are verified in a Kalman filter setting using simulation examples, showing that in addition to modern techniques [10,15], traditional distributed Kalman filters [9,14,21] also converge under more relaxed conditions than initially reported.

Sayed Pouria Talebi and Stefan Werner are with the Department of Electronic Systems, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim NO-7491, Norway. E-mail: {pouria,stefan.werner}@ntnu.no. Vijay Gupta and Yih-Fang Huang are with the Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556 USA. E-mails: {vgupta2,huang}@nd.edu. The work of Sayed Pouria Talebi and Stefan Werner was supported in part by the Norwegian Research Council.
Mathematical Notations: Scalars, column vectors, and matrices are denoted by lowercase, bold lowercase, and bold uppercase letters, respectively. Inequality symbols refer to generalized matrix inequalities. The remainder of the nomenclature is organized as follows:

1 : column vector of appropriate size with unit entries
I : identity matrix of appropriate size
(·)^T : transpose operator
|·| : cardinality operator
E{·} : statistical expectation operator
ρ{·} : spectral radius operator
vec{·} : vectorization operator
diag{·} : constructs a block-diagonal matrix from its entries
col{·} : constructs a block-column matrix from its entries
⊗ : Kronecker product
δ(·) : Kronecker delta function

II. PRELIMINARIES & BACKGROUND
A. Network Model
Following the conventional setting [11,14], the networked agents are modelled as the connected graph G = {N, E}, with the node set N representing the agents and the edge set E representing their bidirectional communication links. The neighbourhood of node l, denoted by N_l, is the set of agents that agent l receives information from, which includes agent l itself. Finally, the cardinality of a node set is the number of nodes that it contains; for example, |N_l| denotes the number of nodes in the neighbourhood of node l.

B. Distributed Filtering Paradigm
Consider the state vector sequence {x_n : n = 1, 2, ...} described through the linear dynamics

x_{n+1} = A x_n + v_n    (1)

with A denoting the state evolution matrix and v_n representing the state evolution noise at time instant n. The state vector is observed at node l, so that

y_{l,n} = H_{l,n} x_n + w_{l,n}    (2)

where, at time instant n and node l, y_{l,n} is the observation and H_{l,n} is the observation matrix, while w_{l,n} denotes the observation noise.

Assumption 1. In keeping with the conventional distributed filtering framework [9]–[11,14], all noise sequences are assumed to be stationary zero-mean Gaussian processes, with covariance matrix

E{ [v_n ; w_{l,n}] [v_m^T , w_{i,m}^T] } = [ Σ_v , 0 ; 0 , Σ_{w_l} δ(l−i) ] δ(n−m).

A centralized filtering operation can be achieved through the following recursion [10,11]:

x̂_{n+1} = (I − G H_col) A x̂_n + G y_{col,n}    (3)

where G is an adaptation gain, whereas H_col = col{H_l : ∀l ∈ N} and y_{col,n} = col{y_{l,n} : ∀l ∈ N}. Importantly, if {A, H_col} is detectable and {A, Σ_v} is stabilizable, then the algebraic Riccati recursion

M^{−1}_{n+1} = (A M_n A^T + Σ_v)^{−1} + H_col^T R^{−1} H_col    (4)

where R is a block-diagonal weighting matrix given as R = diag{R_l : l = 1, ..., |N|} with R_l > 0 for all l, converges to a unique stabilizing solution M, in the sense that lim_{n→∞} M_n = M for any initial condition M_0 > 0, for which the adaptation gain

G = M H_col^T R^{−1}

makes the estimation error of the filtering operations in (3) into a globally stable linear system [28,29].

The aim in distributed filtering is to enable each node in the network to track the state vector sequence in (1) via the observations available to the agent, as described in (2), and cooperation achievable through communication only with its neighbouring nodes. In the sequel, a general framework for achieving this goal is introduced. Then, its behaviour is rigorously analysed, establishing its stable operating criteria. Finally, the results are shown to be applicable to a number of seminal distributed filtering approaches.

III. DISTRIBUTED FILTERING FRAMEWORK
A. Distributed Filtering Operations
To achieve a distributed filtering operation, and without loss of generality, consider the following operations at each node l ∈ N:

φ_{l,n+1} = (I − G_l H_l) A x̂_{l,n} + G_l y_{l,n}
x̂_{l,n+1} = Σ_{∀i∈N_l} c_{l,i} φ_{i,n+1}    (5)

where node l implements a local filtering operation using the adaptation gain G_l and then combines its intermediate estimate, φ_{l,n}, with those of its neighbours using the combination weights {c_{l,i} : ∀l,i ∈ N} to arrive at the final estimate x̂_{l,n}.

Remark 1. The filtering operation in (5) is designed to encompass the seminal works in distributed Kalman filtering [10,14,18]. In addition, it is straightforward to simplify the operation in (5) to describe distributed filters based on gradient descent [13,21,27].
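The adapt-then-combine recursion in (5) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function and variable names are hypothetical, and neighbourhoods are encoded implicitly by the zero pattern of the combination matrix C.

```python
import numpy as np

def distributed_filter_step(x_hat, A, H, G, C, y):
    """One step of the adapt-then-combine recursion in (5).

    x_hat : (N, d) array of current estimates, one row per node
    A     : (d, d) state evolution matrix
    H     : list of N observation matrices H_l
    G     : list of N adaptation gains G_l
    C     : (N, N) right-stochastic combination matrix
    y     : list of N observations y_{l,n}
    """
    N, d = x_hat.shape
    # Adapt: each node forms its intermediate estimate phi_l.
    phi = np.zeros((N, d))
    for l in range(N):
        phi[l] = (np.eye(d) - G[l] @ H[l]) @ A @ x_hat[l] + G[l] @ y[l]
    # Combine: convex combination over each neighbourhood (zeros in C
    # for non-neighbours make this a local operation in practice).
    return C @ phi
```

Each row of the returned array is one node's final estimate x̂_{l,n+1}; iterating this step over time reproduces the recursion in (5).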
Assumption 2. In keeping with the diffusion and consensus information fusion setting [21,30]–[32], the combination weights in (5) are chosen so that

∀l ∈ N : Σ_{∀i∈N_l} c_{l,i} = 1.

Thus, the matrix C with lth row and ith column element

C_{l,i} = { c_{l,i} if i ∈ N_l ; 0 if i ∉ N_l }

is right-stochastic and primitive [21].

B. Stability and Convergence Analysis
The filtering operations in (5) are combined to give

x̂_{l,n+1} = Σ_{∀i∈N_l} c_{l,i} ( (I − G_i H_i) A x̂_{i,n} + G_i y_{i,n} ).    (6)

Now, subtracting both sides of (6) from x_{n+1} yields

ε_{l,n+1} = x_{n+1} − x̂_{l,n+1}    (7)
          = x_{n+1} − Σ_{∀i∈N_l} c_{l,i} ( (I − G_i H_i) A x̂_{i,n} + G_i y_{i,n} ).

Substituting x_{n+1} from (1) and y_{l,n} from (2) into (7) gives the recursive error expression

ε_{l,n+1} = Σ_{∀i∈N_l} c_{l,i} (I − G_i H_i) A ε_{i,n} − Σ_{∀i∈N_l} c_{l,i} G_i w_{i,n} + Σ_{∀i∈N_l} c_{l,i} (I − G_i H_i) v_n.    (8)

The expression in (8) shows how the performance of node l is linked to that of its neighbours, which are, in turn, linked to their respective neighbours. Therefore, an overall analysis is only possible if the entire network is considered as a unified filter. To this end, the network-wide error vector is defined as

E_n = col{ε_{l,n} : ∀l ∈ N}.    (9)

After some mathematical manipulation, replacing the rows of E_n in (9) with the expressions in (8) gives

E_{n+1} = 𝒞 ( F E_n + P (1 ⊗ v_n) − G W_n )    (10)

where 𝒞 = C ⊗ I, G = diag{G_l : ∀l ∈ N}, and

F = diag{(I − G_l H_l) A : ∀l ∈ N}    (11)
P = diag{(I − G_l H_l) : ∀l ∈ N}    (12)
W_n = col{w_{l,n} : ∀l ∈ N}.    (13)

Theorem 1. If {A, H_col} is detectable and {A, Σ_v} is stabilizable, then there exists a matrix set {M_l : ∀l ∈ N} so that

∀l ∈ N : G_l = M_l H_l^T R_l^{−1}    (14)

makes the estimation error of the distributed filtering operations in (5) globally stable.

Proof of Theorem 1: Take the matrix recursions

S_{l,n+1} = (A M_{l,n} A^T + Σ_v)^{−1} + H_l^T R_l^{−1} H_l    (15a)
M^{−1}_{l,n+1} = Σ_{∀i∈N_l} c_{l,i} S_{i,n+1}    (15b)

which can be carried out by the agents in a distributed fashion. Comparing (15) to (4), and assuming, without loss of generality, that ∀l ∈ N : 0 < M_0 ≤ M_{l,0}, it can be concluded that ∀l ∈ N : 0 < M_n ≤ M_{l,n}.
Proceeding on this basis, it follows algebraically that for all l ∈ N:

( I − M_{l,n} H_l^T R_l^{−1} H_l ) ≤ ( I − M_n H_l^T R_l^{−1} H_l ).    (16)

Moreover, given the assumption that {A, H_col} is detectable and {A, Σ_v} is stabilizable, from the Riccati recursion in (4) we have

ρ{ ( I − M_n Σ_{∀l∈N} H_l^T R_l^{−1} H_l ) A } < 1.    (17)

Thus, from (17) and (16) it follows that (I − M_{l,n} H_l^T R_l^{−1} H_l) A is stable on the modes detectable at node l. Subsequently, Σ_{∀i∈N_l} c_{l,i} (I − M_{i,n} H_i^T R_i^{−1} H_i) A is stable on the modes detectable to the nodes in N_l.

From Assumption 2, recall that C is primitive, and hence 𝒞 is block primitive; that is, there exists m so that 𝒞^m = C^m ⊗ I consists of identity matrices scaled by positive non-zero real-valued numbers. Given the block-diagonal structure of F, and extending the statements in the previous paragraph, there exists k for which (𝒞F)^k is a block matrix, where each block consists of an appropriate combination of the matrix set

{ ( (I − M_{l,n} H_l^T R_l^{−1} H_l) A )^i : ∀l ∈ N, i = 1, ..., k }

so that ρ{(𝒞F)^k} < 1. Therefore, the statistical expectation of any norm, including the second-order norm, of the estimation error sequence {E_{n+k} : n = 1, 2, ...} becomes convergent, which concludes the proof.

Remark 2. Note that Theorem 1 broadens the scope of our previous results in [10] and presents the least restrictive convergence criteria for an all-inclusive class of distributed filters, essentially equating the convergence criteria of centralized and distributed filtering techniques.
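The distributed Riccati recursions in (15), the resulting gains (14), and the spectral-radius condition on 𝒞F can be illustrated numerically. The sketch below is a simplified illustration under Assumptions 1 and 2, not the authors' code; the helper names are hypothetical, and for brevity the fusion step (15b) is written as a full sum over the network, with zeros in C playing the role of non-neighbours.

```python
import numpy as np

def distributed_riccati_gains(A, Sigma_v, H, R, C, n_iter=200):
    """Iterate (15) and return {M_l} together with the gains (14)."""
    N = len(H)
    d = A.shape[0]
    M = [np.eye(d) for _ in range(N)]          # any M_{l,0} > 0
    for _ in range(n_iter):
        # (15a): local information-form Riccati update at each node.
        S = [np.linalg.inv(A @ M[l] @ A.T + Sigma_v)
             + H[l].T @ np.linalg.inv(R[l]) @ H[l] for l in range(N)]
        # (15b): fuse information matrices over each neighbourhood.
        M = [np.linalg.inv(sum(C[l, i] * S[i] for i in range(N)))
             for l in range(N)]
    # Gains (14): G_l = M_l H_l^T R_l^{-1}.
    G = [M[l] @ H[l].T @ np.linalg.inv(R[l]) for l in range(N)]
    return M, G

def network_error_matrix(A, H, G, C):
    """Build (C kron I) F from (10)-(11) to check its spectral radius."""
    d = A.shape[0]
    N = len(H)
    F = np.zeros((N * d, N * d))
    for l in range(N):
        F[l*d:(l+1)*d, l*d:(l+1)*d] = (np.eye(d) - G[l] @ H[l]) @ A
    return np.kron(C, np.eye(d)) @ F
```

For instance, on a two-node network where each node observes a different mode of an unstable A, the gains produced by (15) drive the spectral radius of 𝒞F below one even though neither node is locally observable, in line with Theorem 1.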
Theorem 2. If {A, H_col} is detectable and {A, Σ_v} is stabilizable, then the gain matrices resulting from (14) and (15) make the estimates of the introduced distributed filtering operations in (5) globally asymptotically unbiased.

Proof of Theorem 2: From the recursive expression of the estimation error in (10) and the conditions set on the state evolution and observation noise in Assumption 1, we have

E{E_n} = (𝒞F)^n E{E_0}.    (18)

Furthermore, in the proof of Theorem 1, it was demonstrated that under the made assumptions there exists a k for which (𝒞F)^k is a contracting operator, and therefore,

as n → ∞, (𝒞F)^n → 0.    (19)

As a direct result of (18) and (19), we have lim_{n→∞} E{E_n} = 0, which concludes the proof.

To provide a more practical perspective, a reformulation of (15) is considered, so that

M^{−1}_{l,n} = Σ_{∀i∈N_l} c_{l,i} S_{i,n} = Σ_{∀i∈N_l} c_{l,i} P^{−1}_{i,n} + H̄_l^T R^{−1} H̄_l    (20)

where ∀l ∈ N : P_{l,n} = A M_{l,n−1} A^T + Σ_v, while

H̄_l = [ √(C_{l,1}) H_1^T , ... , √(C_{l,|N|}) H_{|N|}^T ]^T.

Now, using (20) and some mathematical manipulation, the operations in (15) are rearranged as

P_{l,n} = A ( Z^{−1}_{l,n−1} + H̄_l^T R^{−1} H̄_l )^{−1} A^T + Σ_v    (21a)
Z^{−1}_{l,n} = Σ_{∀i∈N_l} c_{l,i} P^{−1}_{i,n}.    (21b)

From (21a), it is clear that P_{l,n} is the dual of the a posteriori estimation error covariance matrix of a Kalman filter with a priori estimation error covariance Z_{l,n−1}, using the observations {y_{i,n} : ∀i ∈ N_l}. Along the same lines, from (21b), Z_{l,n} becomes the dual of the a posteriori estimation error covariance of a Kalman filter using the observations {y_{i,n} : ∀i ∈ N_l^2}, where N_l^k denotes the k-hop neighbourhood of node l. Thus, through iteration, Z_{l,n+m} becomes the dual of the a posteriori estimation error covariance of a Kalman filter using the observations

∪_{t=0}^{m} { y_{i,n+k} : ∀i ∈ N_l^{t+1}, k = 0, ..., m−t }.    (22)
If {A, H_col} is detectable and {A, Σ_v} is stabilizable, then there exists an m for which the observation set in (22) is sufficient for tracking the state vector, and thus {M_{l,n} : ∀l ∈ N} converges to stabilizing solutions.

It is prudent to note that the Riccati-style recursions in (15) have their roots in the decentralized controllers of [10,15]. The structure in (15) is specifically designed to stabilize controllers with the dynamics in (10), which results in its versatile application in this context. However, this manuscript further eases the convergence criteria and removes the restriction of using the consensus framework for information fusion, accommodating both diffusion and consensus. Traditional distributed Kalman filters inspired by [9,14], which do not use (15) and rely on local observability conditions, will experience unstable modes in their corresponding M_l matrices. However, even in these cases, if the error dynamics follow (10) and the gains follow (14), the overall filtering operation will converge. Note that this might take some skilful coding legerdemain to ensure that unstable modes do not result in computation errors, which falls beyond the scope of this manuscript. On a final note, the distributed filter in (5) can be simplified to the distributed filters based on the LMS [13,21,27], and even to a wide range of gradient-based approaches for nonlinear filters, in which case the concepts introduced in this manuscript still hold true.

IV. SIMULATION EXAMPLES
The problem of tracking a target in two dimensions is considered [10,14]. In this case, the state evolution follows the constant-velocity model

x_n = [ 1 0 ∆T 0 ; 0 1 0 ∆T ; 0 0 1 0 ; 0 0 0 1 ] x_{n−1} + [ (∆T)²/2 0 ; 0 (∆T)²/2 ; ∆T 0 ; 0 ∆T ] v_n

where ∆T represents the sampling interval and the covariance of the state evolution noise, E{v_n v_n^T}, is a scaled identity matrix. The network shown in Fig. 1 was used for the simulations. Excluding the node with one link sitting on the top left, all nodes observed movements of the target on the horizontal axis only. The mentioned node, on the other hand, could only observe movements on the vertical axis, presenting a worst-case scenario for distributed filtering. The observation noise covariance, identical for all nodes, was used as the weighting factor R_l.

Fig. 1. Network of nodes and links used in simulations.

The distributed Kalman filter in [14], as a representative of classical techniques, and the distributed Kalman filter in [10], as a representative of modern techniques, were used to track the state vector. The estimation error dynamics for both distributed filters are presented in Fig. 2. Note that, as suggested by Theorems 1 and 2, both distributed Kalman filtering approaches exhibited error dynamics that converge to zero. This result is especially intriguing in the case of traditional distributed Kalman filtering techniques, as the scenario considered here does not meet their original convergence criteria (see [14,21]).

The range of eigenvalues of the matrix set {M_{l,n} : ∀l ∈ N} obtained through the recursions in (15) is presented in Fig. 3. In addition, Fig. 3 includes the range of eigenvalues of the comparable set of matrices obtained through the framework of [9,14,21]. Note that, as proven in Theorem 1, the eigenvalues of the matrices in (15) converge and stabilize to a final value, whereas their counterparts from the classical filtering frameworks diverge. It can be shown through straightforward algebraic manipulations that the divergent modes at each node correspond to the modes unobservable at that node, which, in principle, results in these modes only being updated through the state evolution model.
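The two-dimensional tracking model above can be simulated as follows. This is a minimal NumPy sketch in which the sampling interval and noise levels are placeholder values (assumptions, not the constants used in the paper), with the state ordered as horizontal position, vertical position, horizontal velocity, vertical velocity.

```python
import numpy as np

# Placeholder constants; the exact simulation values are assumptions.
dT = 0.1                     # sampling interval (s)
A = np.array([[1, 0, dT, 0],
              [0, 1, 0, dT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
# Noise input matrix mapping 2-D acceleration noise into the state.
B = np.array([[dT**2 / 2, 0],
              [0, dT**2 / 2],
              [dT,        0],
              [0,        dT]])

# Observation matrices: most nodes see the horizontal position only,
# while a single node sees the vertical position only.
H_horizontal = np.array([[1.0, 0.0, 0.0, 0.0]])
H_vertical   = np.array([[0.0, 1.0, 0.0, 0.0]])

# Generate a target trajectory driven by Gaussian acceleration noise.
rng = np.random.default_rng(0)
x = np.zeros(4)
trajectory = []
for _ in range(200):
    x = A @ x + B @ rng.normal(scale=0.1, size=2)
    trajectory.append(x.copy())
trajectory = np.array(trajectory)
```

Feeding each node's observations H_l x_n plus measurement noise into the recursion (5) with the gains from (15) reproduces the experiment's setting, in which no single node can observe both coordinates.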
Such divergence does not compromise the filtering operation, however, as information regarding the modes unobservable to one node is made available through the diffusion/consensus structure as long as there exists another node in the network to which these modes are observable, which encapsulates the spirit of this manuscript in a more intuitive manner. It should be noted that when using classical approaches, the presence of the unobservable modes at each node should be considered in the implementation phase, as the infinitely growing eigenvalues can cause computational problems. Modern approaches, such as [10,15] and that presented in this manuscript, deal with this matter through rigorous modelling and skilful use of mathematics.

Fig. 2. Error dynamics for both modern and classical distributed Kalman filtering techniques.

Fig. 3. Range of eigenvalues of the matrix set {M_{l,n} : ∀l ∈ N} for both modern and classical distributed Kalman filtering techniques.

V. CONCLUSION
The performance of a general class of distributed filters has been considered. The analysis culminated in two theorems, which indicate that distributed filters and their centralized duals essentially require the same convergence criteria. The distributed filters considered were formulated so as to make the obtained results applicable to a wide range of distributed filtering frameworks. The concepts were verified using simulation examples on both modern and classical distributed Kalman filters, showing that convergence is possible under much looser conditions than previously thought.

REFERENCES

[1] S. P. Talebi, S. Kanna, and D. P. Mandic, "A distributed quaternion Kalman filter with applications to smart grid and target tracking," IEEE Transactions on Signal and Information Processing over Networks, vol. 2, no. 4, pp. 477–488, December 2016.
[2] N. Kashyap, S. Werner, and Y. Huang, "Decentralized PMU-assisted power system state estimation with reduced interarea communication," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 4, pp. 607–616, 2018.
[3] R. Olfati-Saber, "Flocking for multi-agent dynamic systems: Algorithms and theory," IEEE Transactions on Automatic Control, vol. 51, no. 3, pp. 401–420, March 2006.
[4] W. Ren, R. W. Beard, and E. M. Atkins, "Information consensus in multivehicle cooperative control," IEEE Control Systems Magazine, vol. 27, no. 2, pp. 71–82, 2007.
[5] J. S. Shamma, Cooperative Control of Distributed Multi-Agent Systems. John Wiley & Sons, 2007.
[6] S. Fattahi, G. Fazelnia, J. Lavaei, and M. Arcak, "Transformation of optimal centralized controllers into near-globally optimal static distributed controllers," IEEE Transactions on Automatic Control, vol. 64, no. 1, pp. 66–80, January 2019.
[7] W. Han, H. L. Trentelman, Z. Wang, and Y. Shen, "A simple approach to distributed observer design for linear systems," IEEE Transactions on Automatic Control, vol. 64, no. 1, pp. 329–336, January 2019.
[8] J. Speyer, "Computation and transmission requirements for a decentralized linear-quadratic-Gaussian control problem," IEEE Transactions on Automatic Control, vol. 24, no. 2, pp. 266–269, April 1979.
[9] R. Olfati-Saber, "Distributed Kalman filtering for sensor networks," in Proceedings of the IEEE Conference on Decision and Control, pp. 5492–5498, December 2007.
[10] S. P. Talebi and S. Werner, "Distributed Kalman filtering and control through embedded average consensus information fusion," IEEE Transactions on Automatic Control, vol. 64, no. 10, pp. 4396–4403, 2019.
[11] R. Olfati-Saber, "Distributed Kalman filter with embedded consensus filters," in Proceedings of the IEEE Conference on Decision and Control, pp. 8179–8184, December 2005.
[12] F. S. Cattivelli, C. G. Lopes, and A. H. Sayed, "A diffusion RLS scheme for distributed estimation over adaptive networks," in Proceedings of the IEEE 8th Workshop on Signal Processing Advances in Wireless Communications, pp. 1–5, 2007.
[13] F. S. Cattivelli and A. H. Sayed, "Diffusion LMS strategies for distributed estimation," IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1035–1048, 2010.
[14] F. S. Cattivelli and A. H. Sayed, "Diffusion strategies for distributed Kalman filtering and smoothing," IEEE Transactions on Automatic Control, vol. 55, no. 9, pp. 2069–2084, September 2010.
[15] S. P. Talebi, S. Werner, and D. Mandic, "Quaternion-valued distributed filtering and control," IEEE Transactions on Automatic Control, vol. 65, no. 10, pp. 4246–4257, 2020.
[16] E. Agarwal, S. Sivaranjani, V. Gupta, and P. J. Antsaklis, "Distributed synthesis of local controllers for networked systems with arbitrary interconnection topologies," IEEE Transactions on Automatic Control, preprint, 2020.
[17] Y. Chen, S. Kar, and J. M. F. Moura, "Resilient distributed estimation: Sensor attacks," IEEE Transactions on Automatic Control, vol. 64, no. 9, pp. 3772–3779, 2019.
[18] R. Olfati-Saber, "Kalman-consensus filter: Optimality, stability, and performance," in Proceedings of the IEEE Conference on Decision and Control, pp. 7036–7042, December 2009.
[19] L. Wang and A. S. Morse, "A distributed observer for a time-invariant linear system," in Proceedings of the American Control Conference, pp. 2020–2025, May 2017.
[20] O. Hlinka, F. Hlawatsch, and P. M. Djuric, "Distributed particle filtering in agent networks: A survey, classification, and comparison," IEEE Signal Processing Magazine, vol. 30, no. 1, pp. 61–81, January 2013.
[21] A. H. Sayed, "Adaptation, learning, and optimization over networks," Foundations and Trends in Machine Learning, vol. 7, no. 4–5, pp. 311–801, 2014.
[22] S. Werner, M. Mohammed, Y.-F. Huang, and V. Koivunen, "Decentralized set-membership adaptive estimation for clustered sensor networks," pp. 3573–3576, 2008.
[23] S. Werner, Y. Huang, M. L. R. de Campos, and V. Koivunen, "Distributed parameter estimation with selective cooperation," pp. 2849–2852, 2009.
[24] D. Zhang, P. Shi, W. Zhang, and L. Yu, "Energy-efficient distributed filtering in sensor networks: A unified switched system approach," IEEE Transactions on Cybernetics, vol. 47, no. 7, pp. 1618–1629, 2017.
[25] Q. Liu, Z. Wang, X. He, and D. H. Zhou, "On Kalman-consensus filtering with random link failures over sensor networks," IEEE Transactions on Automatic Control, vol. 63, no. 8, pp. 2701–2708, 2018.
[26] G. Battistelli, L. Chisci, G. Mugnai, A. Farina, and A. Graziano, "Consensus-based linear and nonlinear filtering," IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1410–1415, 2015.
[27] C. G. Lopes and A. H. Sayed, "Diffusion least-mean squares over adaptive networks: Formulation and performance analysis," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3122–3136, 2008.
[28] T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation. Prentice Hall, 2000.
[29] P. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification, and Adaptive Control. Prentice Hall, 1986.
[30] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.
[31] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in Proceedings of the Fourth International Symposium on Information Processing in Sensor Networks, pp. 63–70, 2005.
[32] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.