Distributed robust estimation over randomly switching networks using H ∞ consensus
V. Ugrinovskii

School of Engineering and IT, University of NSW at the Australian Defence Force Academy, Canberra, ACT, 2600, Australia
Abstract
The paper considers a distributed robust estimation problem over a network with Markovian randomly varying topology. The objective is to deal with network variations locally, by switching observer gains at affected nodes only. We propose sufficient conditions which guarantee a suboptimal $H_\infty$ level of relative disagreement of estimates in such observer networks. When the status of the network is known globally, these sufficient conditions enable the network gains to be computed by solving certain LMIs. When the nodes are to rely on locally available information about the network topology, additional rank constraints are used to condition the gains, given this information. The results are complemented by necessary conditions which relate properties of the interconnection graph Laplacian to the mean-square detectability of the plant through measurement and interconnection channels.

Key words:
Large-scale systems, distributed robust estimation, worst-case transient consensus, vector Lyapunov functions.
1 Introduction

One of the motivations for using distributed multisensor networks is to make the network resilient to loss of communication. This has led to extensive research into distributed filtering over networks with time-varying, randomly switching topology. In particular, the Markovian approach to the analysis and synthesis of estimator networks has received significant attention in relation to problems involving random data loss in channels with memory which are governed by a Markov switching rule [17,8]. In addition to capturing memory properties of physical communication channels, Markovian models allow other random events in the network, such as sensor failures and recovery, to be considered in a systematic manner within the Markov jump systems framework. However, the Markov jump systems theory usually assumes the complete state of the underlying Markov chain to be known to every controller or filter [7]. In the context of distributed estimation and control, this requires each node of the network to know the complete instantaneous state of the network to be able to deploy suitable gains. To circumvent such an unrealistic assumption, the literature focuses on networks whose communication state is governed by a random process decomposable into independent two-state Markov processes describing the status of individual links [6,17], even though this typically leads to design conditions whose complexity grows exponentially [6]. Also, the assumption of independence between communication links may not always be practical, e.g., when dealing with congestions.

The objective of this paper is to develop a distributed filtering technique which overcomes the need for broadcast of the global communication topology and does not require Markovian segmentation of the network. Our main contribution is a methodology of robust distributed observer design which enables the node observers to be implemented in a truly distributed fashion, by utilizing only locally available information about the system's connectivity, and without assuming the independence of communication links. This information structure constraint is a key distinction of this work, compared with the existing results, e.g., [17,6]. In addition, the proposed methodology allows other random events, such as sensor failures and recoveries, to be incorporated.

The paper focuses on the case where the plant to be observed, as well as the sensing and communication models, are not known perfectly. To deal with uncertain perturbations in the plant, sensors and communications, we employ the distributed $H_\infty$ filtering framework which has received a significant deal of attention in the recent literature [15,18,19]. The motivation for considering $H_\infty$ observers in this paper, instead of Kalman filters [17], is to obtain observers that have guaranteed robustness properties. It is well known that the standard Kalman filter is sensitive to modelling errors [12], and consensus Kalman filters may potentially suffer from the same shortcomings.

⋆ This work was supported by the Australian Research Council.
Email address: [email protected] (V. Ugrinovskii).
Preprint submitted to Automatica 18 October 2018
This explains our interest in robust performance guarantees in the presence of uncertainty.

In contrast to [17,15], in this paper the node estimators are sought to reach relative $H_\infty$ consensus about the estimate of the reference plant. As an extension of the consensus estimation methodology [9], our approach responds to the challenge posed by the presence of uncertain perturbations in the plant, measurements and interconnections. Typically, a perfect consensus between sensors-agents is not possible due to perturbations. To address this challenge, we employ the approach based on optimization of the transient relative $H_\infty$ consensus performance metric, originally proposed in [18]. We approach the robust consensus-based estimation problem from the dissipativity viewpoint, using vector storage functions and vector supply rates [4]. This allows us to establish both mean-square robust convergence and robust convergence with probability 1 of the distributed filters under consideration, and to guarantee a prespecified level of $H_\infty$ mean-square disagreement between node estimates in the presence of perturbations and random topology changes.

The information structure constraint, where the filters must rely on local knowledge of the network topology, poses the main challenge in the derivation of the above-mentioned results. The standard framework of Markov jump systems is not directly applicable to the problem of designing locally constrained filters whose information about the network status is non-Markovian. To overcome this difficulty, we adopt the approach recently proposed for decentralized control of jump parameter systems [20]. It involves a two-step design procedure. First, an auxiliary distributed estimation problem is solved under the simplifying assumption that the complete Markovian network topology is instantaneously available at each node. However, we seek a solution to this problem using a network of non-fragile estimators subject to uncertainty [5].
Resilience of the auxiliary estimator to uncertain perturbations is the key property that allows this auxiliary uncertain estimator network to be modified, at the second step, into an estimator network which satisfies the information structure constraint and retains the robust performance of the auxiliary design.

An important question in connection with our distributed observer architecture concerns the requirements on the communication topology under which consensus of node observers is achievable. For networks of one- or two-dimensional agents, and networks consisting of identical agents, conditions for consensus are tightly related to properties of the graph Laplacian matrix [10,13,22]. In a more general situation involving nonidentical node observers, the role of the interconnection graph is often hidden behind the design conditions, e.g., see [17,15]. Our second contribution is to show that for the distributed estimation problem under consideration to have a solution, the standard requirement for the graph Laplacian to have a simple zero eigenvalue must be complemented by detectability properties of certain matrix pairs formed by parameters of the observers and interconnections.

The paper is organized as follows. The problem formulation is given in Section 2. Section 3 studies an auxiliary distributed estimation problem without the information structure constraints. The results of this section are then used in Section 4, where the main results of the paper are given. Section 5 discusses requirements on the observer communication topology. Section 6 presents an illustrating example.

Notation. $\mathbf{R}^n$ is the real Euclidean $n$-dimensional vector space, with the norm $\|x\| \triangleq (x'x)^{1/2}$; $'$ denotes the transpose of a matrix or a vector. Also, for a given $P = P'$, $\|x\|_P = \sqrt{x'Px}$. $\mathbf{1}_k \triangleq [1 \ldots 1]' \in \mathbf{R}^k$, and $I_k$ is the identity matrix in $\mathbf{R}^{k\times k}$; we will omit the subscript $k$ when this causes no ambiguity.
For $X = X'$, $Y = Y'$, we write $Y > X$ ($Y \ge X$) when $Y - X$ is positive definite (positive semidefinite). $\otimes$ denotes the Kronecker product of matrices. $\mathrm{diag}[P_1, \ldots, P_N]$ is the block-diagonal matrix whose diagonal blocks are $P_1, \ldots, P_N$. The symbol $\star$ in position $(k,l)$ of a block-partitioned matrix denotes the transpose of the $(l,k)$ block of the matrix. $L_2[0,\infty)$ is the Lebesgue space of $\mathbf{R}^k$-valued vector-functions $z(\cdot)$, defined on $[0,\infty)$, with the norm $\|z\|_2 \triangleq \left(\int_0^\infty \|z(t)\|^2\, dt\right)^{1/2}$.

2 The problem formulation

Consider a directed weakly connected graph $G = (V, E)$, where $V = \{1, \ldots, N\}$ is the set of nodes, and $E \subseteq V \times V$ is the set of edges. The edge $(j,i)$, originating at node $j$ and ending at node $i$, represents the event "$j$ transmits information to $i$". In accordance with a common convention, we consider graphs without self-loops, i.e., $(i,i) \notin E$. However, each node is assumed to have complete information about its filter, measurements and the status of incoming communication links.

We consider two types of random events at each node. Firstly, node neighbourhoods change randomly as a result of random link dropouts and recovery. Also, to account for sensor adjustments in response to these changes, as well as sensor failures/recoveries, we allow for random variations of the sensing regime at each node. Letting $x(t)$, $y_i(t)$ denote an observed process and its measurement taken at node $i$ at time $t$, and using a standard linear relation between these quantities,
$$y_i = \tilde C_i x + \tilde D_i \xi + \tilde{\bar D}_i \xi_i, \qquad y_i \in \mathbf{R}^{r_i}, \eqno(1)$$
such adjustments are associated with randomly varying coefficients $\tilde C_i$, $\tilde D_i$, $\tilde{\bar D}_i$. These random events are additional to link dropouts. This leads us to consider the combined evolution of each node's neighbourhood and sensing regime.

Definition 1 For a node $i$, let $V_i$, $(\tilde C_i, \tilde D_i, \tilde{\bar D}_i)$ be its neighbourhood set and the measurement matrix triplet, respectively, at a certain time $t$.
The pair $\{V_i, (\tilde C_i, \tilde D_i, \tilde{\bar D}_i)\}$ is said to represent the local communication and sensing state (or simply the local state) of node $i$ at time $t$. Two states of $i$ at times $t_1$, $t_2$, $\{V_i^1, (\tilde C_i^1, \tilde D_i^1, \tilde{\bar D}_i^1)\}$, $\{V_i^2, (\tilde C_i^2, \tilde D_i^2, \tilde{\bar D}_i^2)\}$, are distinct if $V_i^1 \ne V_i^2$, or $(\tilde C_i^1, \tilde D_i^1, \tilde{\bar D}_i^1) \ne (\tilde C_i^2, \tilde D_i^2, \tilde{\bar D}_i^2)$.

From now on, we associate with every node $i$ the ordered collection of all its feasible distinct local states and denote the corresponding index set $I_i \triangleq \{1, \ldots, M_i\}$. The time evolution of each local state will be represented by a random mapping $\eta_i : [0,\infty) \to I_i$.

The global configuration and sensing pattern of the network at any time can be uniquely determined from its local states. This leads us to define the global state of the network as an $N$-tuple $(k_1, \ldots, k_N)$, where $k_i \in I_i$. Consider the ordered collection of all feasible global states of the network, and let $I = \{1, \ldots, M\}$ denote its index set. In general, not all combinations of local states correspond to feasible global states. Owing to dependencies between network links and/or sensing regimes, the number of feasible global states may be substantially smaller than the cardinality of the set $I_1 \times \ldots \times I_N$ of all combinations of local states. The one-to-one mapping between the set of feasible global states $\{(k_1, \ldots, k_N)\}$ and its index set $I$ will be denoted $\Phi$, i.e., $(k_1, \ldots, k_N) = \Phi(m)$, where $m$ is the index of the $N$-tuple $(k_1, \ldots, k_N)$. Also, we write $k_i = \Phi_i(m)$ whenever $(k_1, \ldots, k_N) = \Phi(m)$.

Using the one-to-one mapping $\Phi$, define the global process $\eta(t) = \Phi^{-1}(\eta_1(t), \ldots, \eta_N(t))$ to describe the evolution of the network global state. The local state processes $\eta_i(t)$ are related to it as $\eta_i(t) = \Phi_i(\eta(t))$ $\forall t \ge 0$.
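To make this bookkeeping concrete, the following sketch implements the maps $\Phi$, $\Phi_i$ and $\Phi^{-1}$ over an ordered list of feasible global states. The three-node network and its feasible states are hypothetical; all names are illustrative.

```python
# Ordered list of feasible global states (k_1, ..., k_N); a hypothetical
# 3-node example in which only some combinations of local states occur.
FEASIBLE = [
    (1, 1, 1),   # global state index m = 0
    (1, 2, 1),   # m = 1: only node 2 changes its local state
    (2, 2, 2),   # m = 2
]

def Phi(m):
    """Global state index m -> N-tuple of local states."""
    return FEASIBLE[m]

def Phi_i(i, m):
    """Local state of node i (1-based, i = 1..N) in global state m."""
    return FEASIBLE[m][i - 1]

def Phi_inv(local_states):
    """Feasible N-tuple of local states -> global state index."""
    return FEASIBLE.index(tuple(local_states))
```

Note that the index set of feasible global states (3 here) can be far smaller than the full product of local state sets (2 × 2 × 2 = 8), which is the point made above.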
Throughout the paper, we assume that $\{\eta(t), t \ge 0\}$ is a stationary Markov random process $[0,\infty) \to I$, defined in a filtered probability space $(\Omega, \mathcal F, \{\mathcal F_t\}, \mathbf P)$, where $\mathcal F_t$ denotes a right-continuous filtration with respect to which $\{\eta(t), t \ge 0\}$ is adapted [1]. The $\sigma$-algebra $\mathcal F$ is the minimal $\sigma$-algebra which contains all measurable sets from the filtration $\{\mathcal F_t, t \ge 0\}$. (In the sequel, we will consider the filtration generated by a composite Markov process consisting of $\eta$ and the error dynamics of the estimator introduced in the next section.) The transition probability rate matrix of the Markov chain $\{\eta(t), t \ge 0\}$ will be denoted $\Lambda = [\lambda_{kl}]_{k,l=1}^M$, with $\lambda_{kl} \ge 0$, $k \ne l$, and $\lambda_{kk} = -\sum_{l \ne k} \lambda_{kl} \le 0$, $\forall k \in I$ [1].

Using the global state process $\eta(t)$, the time evolution of the communication graph can be represented by a random graph-valued process $G_{\eta(t)}$, whose value at every time instance is a directed subgraph of $G$. It is assumed that for all $t$, $G_{\eta(t)}$ is weakly connected and has the same vertex set as $G$. When $\eta(t) = k \in I$, $A_k = [a_{ij}^k]_{i,j=1,\ldots,N}$ will denote the adjacency matrix of the digraph $G_k = G_{\eta(t)}$. Note that $a_{ij}^k = 1$ if and only if $j \in V_i^{\Phi_i(k)}$. Here and hereafter, the symbol $V_i^{k_i}$ describes the neighbourhood of node $i$ when this node is in local state $k_i$. In accordance with this notation, $V_i^{\Phi_i(k)}$ is the neighbourhood of node $i$ when the network is in global state $k$. Also, $p_i^k = \sum_{j=1}^N a_{ij}^k$ and $q_i^k = \sum_{j=1}^N a_{ji}^k$ denote the in- and out-degrees of node $i$, and $L_k$ denotes the Laplacian matrix of the corresponding graph $G_k$.

We will use the notation $(\eta, G, \Phi)$ to refer to the switching network described above. Since $\eta(t)$ is stationary, each process $\eta_i(t)$ is also stationary. However, in general the local state processes $\eta_i(t)$ are not Markov, and the components of the multivariate process $(\eta_1(t), \ldots, \eta_N(t))$ may statistically depend on each other.
Hence our network model allows for dependencies between links within the network.

$H_\infty$ consensus

Consider a plant described by the equation
$$\dot x = Ax + B\xi(t), \qquad x(0) = x_0. \eqno(2)$$
Here $x \in \mathbf{R}^n$ is the state, and $\xi(t) \in \mathbf{R}^l$ is a deterministic disturbance. We assume that $\xi(\cdot) \in L_2[0,\infty)$, and that the solution of (2) exists on any finite interval $[0,T]$ and is $L_2$-integrable on $[0,T]$.

Also, consider an observer network $\{\eta, G, \Phi\}$ whose nodes take measurements of the plant (2) as follows:
$$y_i = \tilde C_i^{\eta_i(t)} x + \tilde D_i^{\eta_i(t)} \xi + \tilde{\bar D}_i^{\eta_i(t)} \xi_i, \qquad y_i \in \mathbf{R}^{r_i}, \eqno(3)$$
where $\xi_i(t) \in \mathbf{R}^{l_i}$ represents the deterministic measurement uncertainty at sensing node $i$, $\xi_i(\cdot) \in L_2[0,\infty)$. The coefficients of equation (3) take values in given sets of constant matrices of compatible dimensions, $(\tilde C_i^{\eta_i(t)}, \tilde D_i^{\eta_i(t)}, \tilde{\bar D}_i^{\eta_i(t)}) \in \{(\tilde C_i^k, \tilde D_i^k, \tilde{\bar D}_i^k),\ k \in I_i\}$. It will be assumed throughout the paper that $\tilde E_i^k = \tilde D_i^k (\tilde D_i^k)' + \tilde{\bar D}_i^k (\tilde{\bar D}_i^k)' > 0$ for all $i$ and $k \in I_i$.

The measurements $y_i$ are processed at node $i$ according to the following estimation algorithm (cf. [17,18,19]):
$$\dot{\hat x}_i = A\hat x_i + \tilde L_i^{\eta_i(t)} \big(y_i(t) - \tilde C_i^{\eta_i(t)} \hat x_i\big) + \sum_{j \in V_i^{\eta_i(t)}} \tilde K_{ij}^{\eta_i(t)} (v_{ij} - H_{ij}\hat x_i), \qquad \hat x_i(0) = 0, \eqno(4)$$
where $v_{ij}$ is the signal received at node $i$ from node $j$,
$$v_{ij} = H_{ij}\hat x_j + G_{ij} w_{ij}, \qquad v_{ij} \in \mathbf{R}^{r_{ij}}, \eqno(5)$$
and $w_{ij} \in \mathbf{R}^{s_{ij}}$ describes the channel uncertainty affecting the information transmission from node $j$ to node $i$. It is assumed that $w_{ij}$ belongs to the class of mean-square $L_2$-integrable random disturbances, adapted to the filtration $\{\mathcal F_t, t \ge 0\}$. It will be further assumed that $F_{ij} = G_{ij}G_{ij}' > 0$ for all $i$ and $j \in V_i^{k_i}$, $k_i \in I_i$. Also in (4), $\tilde L_i^{\eta_i(\cdot)}$, $\tilde K_{ij}^{\eta_i(\cdot)}$ are matrix-valued functions of the local state process $\eta_i(t)$.
These functions are the design parameters of the algorithm, describing the innovation and interconnection gains of the observer (4). Note that the coupling and observer gains $\tilde K_{ij}^{(\cdot)}$, $\tilde L_i^{(\cdot)}$ are required to be functions of the local state (i.e., functions of $\eta_i$), rather than the global state. This 'locality' information structure constraint is additional to the assumption about the Markov nature of the communication graph; cf. [17], where the complete communication graph was assumed to be known at each node. The problem in this paper is to determine these functions so as to satisfy certain robust performance criteria, to be presented in Definition 2 below.

Remark 1
In equation (5), the matrices $H_{ij} \in \mathbf{R}^{r_{ij} \times n}$ and $G_{ij} \in \mathbf{R}^{r_{ij} \times s_{ij}}$ do not depend on $\eta_i(t)$. This is to reflect a situation where node $j$ always broadcasts its information to node $i$, but node $i$ randomly fails to receive this information, or chooses not to accept it, e.g., due to random congestion. It is possible to consider a more general situation where the matrices $H_{ij}$ and $G_{ij}$ also depend on $\eta_i(t)$. Technically, this more general case is no different from the one pursued here.

Associated with the system (2) and the set of filters (4) is the disagreement function (cf. [10])
$$\Psi_k(\hat x) = \frac{1}{N} \sum_{i=1}^N \sum_{j \in V_i^{\Phi_i(k)}} \|\hat x_j - \hat x_i\|^2, \qquad k \in I, \eqno(6)$$
$\hat x \triangleq [\hat x_1' \ldots \hat x_N']'$. It represents the average (over the set of all nodes) of the total disagreement between the estimate at each node and the estimates computed at the neighbouring nodes, when the network is in state $k$. Following [18], we adopt $\Psi_k(\hat x)$ to define the transient consensus performance metric in the distributed estimation problem defined below.

Let $\mathbf P_{x_0,m}$, $\mathbf E_{x_0,m}$ denote the conditional probability and conditional expectation, given $x(0) - \hat x_i(0) = x_0$ $\forall i$, $\eta(0) = m$. Also, given a matrix $P = P' > 0$, let
$$\mu_P(x_0, \xi, [\xi_i, w_{ij}]_{i,j=1,\ldots,N}) \triangleq \|x_0\|_P^2 + \|\xi\|_2^2 + \frac{1}{N} \sum_{i=1}^N \Big( \|\xi_i\|_2^2 + \sum_{j=1}^N \mathbf E_{x_0,m} \big\|a_{ij}^{\eta(\cdot)} w_{ij}\big\|_2^2 \Big).$$

Definition 2
The distributed estimation problem under consideration is to determine switching observer gains $\tilde L_i^k$ and interconnection coupling gains $\tilde K_{ij}^k$, $k \in I_i$, for the filters (4) which ensure that the following conditions are satisfied:

(i) In the absence of the uncertainty, all node estimators converge exponentially in the mean-square sense and converge asymptotically with probability 1:
$$\mathbf E_{x_0,m} \|\hat x_i(t) - x(t)\|^2 \le c e^{-\epsilon t} \quad (\exists c, \epsilon > 0), \qquad \mathbf P_{x_0,m}\Big( \lim_{t\to\infty} \|\hat x_i(t) - x(t)\| = 0 \Big) = 1.$$

(ii) Given a constant $\gamma > 0$, the following mean-square $H_\infty$ consensus performance is guaranteed:
$$\sup_{x_0,\ (\xi,\xi_i,w_{ij}) \ne 0} \frac{\mathbf E_{x_0,m} \int_0^\infty \Psi_{\eta(t)}(\hat x(t))\, dt}{\mu_P(x_0, \xi, [\xi_i, w_{ij}]_{i,j=1,\ldots,N})} \le \gamma^2. \eqno(7)$$

(iii) All estimators converge in the mean-square sense and with probability 1:
$$\mathbf E_{x_0,m} \int_0^\infty \|x(t) - \hat x_i(t)\|^2\, dt < \infty, \eqno(8)$$
$$\mathbf P_{x_0,m}\Big( \lim_{t\to\infty} \|x(t) - \hat x_i(t)\| = 0 \Big) = 1. \eqno(9)$$

Properties (8) and (9) refer to different types of asymptotic behaviour of the estimation errors. Condition (8) states that $\hat x_i(t)$ converges to $x(t)$ in the mean-square $L_2$ sense. From the Chebyshev inequality, this also implies that $\lim_{R\to\infty} \mathbf P_{x_0,m}\big( \int_0^\infty \|x(t) - \hat x_i(t)\|^2\, dt > R \big) = 0$; that is, almost all estimator trajectories converge in the $L_2$ sense. Property (9) states that $\|x(t) - \hat x_i(t)\|$ converges to zero asymptotically for almost all realizations of the global state process $\eta(t)$. This is a stronger property; in general, it does not follow from the a.s. $L_2$ convergence. For that reason, both convergence properties are considered in Definition 2.

In this section, we temporarily lift the locality information structure constraint and assume the global communication and sensing state process $\eta(t)$ to be available at every node. For every $k \in I$, define $C_i^k = \tilde C_i^{\Phi_i(k)}$, $D_i^k = \tilde D_i^{\Phi_i(k)}$, $\bar D_i^k = \tilde{\bar D}_i^{\Phi_i(k)}$. Note that $E_i^k \triangleq D_i^k (D_i^k)' + \bar D_i^k (\bar D_i^k)' = \tilde E_i^{\Phi_i(k)} > 0$.
Then, the measurements taken at node $i$ can be rewritten in terms of the global state process $\eta(t)$:
$$y_i = C_i^{\eta(t)} x + D_i^{\eta(t)} \xi + \bar D_i^{\eta(t)} \xi_i. \eqno(10)$$
The auxiliary problem in this section is concerned with estimation of the state of the uncertain plant (2), (10) using a network of estimators subject to uncertainty, as follows:
$$\dot{\hat x}_i = A\hat x_i + L_i^{\eta(t)} \big(y_i(t) - C_i^{\eta(t)} \hat x_i\big) + \sum_{j \in V_i^{\Phi_i(\eta(t))}} K_{ij}^{\eta(t)} (v_{ij} - H_{ij}\hat x_i) + \sum_{j \in V_i^{\Phi_i(\eta(t))}} \big(\omega_{ij}^{(1)} + \omega_{ij}^{(2)}\big) + \omega_i, \qquad \hat x_i(0) = 0. \eqno(11)$$
Here, $L_i^{(\cdot)}$, $K_{ij}^{(\cdot)}$ are matrix-valued functions of the state of the global Markov chain $\eta$ to be found, and $\omega_{ij}^{(1)}$, $\omega_{ij}^{(2)}$, and $\omega_i$ are estimator perturbations. It is assumed that these perturbations are random processes adapted to the filtration $\{\mathcal F_t, t \ge 0\}$ and such that the multivariate process $(\hat x_1, \ldots, \hat x_N, \eta)$ is Markov with respect to that filtration. Also in this section, it will be assumed that these uncertainties satisfy the following norm-bound conditions:
$$\|\omega_i(t)\| \le \alpha_i \big\| C_i^{\eta(t)} e_i(t) + D_i^{\eta(t)} \xi(t) + \bar D_i^{\eta(t)} \xi_i(t) \big\|,$$
$$\|\omega_{ij}^{(1)}(t)\| \le \beta_{ij} \|H_{ij} e_i(t) + G_{ij} w_{ij}\|, \qquad \|\omega_{ij}^{(2)}(t)\| \le \beta_{ij} \|H_{ij} e_j(t)\| \quad \text{a.s. } \forall t \ge 0, \eqno(12)$$
where $\alpha_i$, $\beta_{ij}$ are given constants, and $e_i = x - \hat x_i$ is the estimation error of the auxiliary estimator at node $i$, which evolves according to the equations
$$\dot e_i = \big(A - L_i^{\eta(t)} C_i^{\eta(t)}\big) e_i + \sum_{j \in V_i^{\Phi_i(\eta(t))}} K_{ij}^{\eta(t)} \big(H_{ij}(e_j - e_i) - G_{ij} w_{ij}\big) + \big(B - L_i^{\eta(t)} D_i^{\eta(t)}\big) \xi - L_i^{\eta(t)} \bar D_i^{\eta(t)} \xi_i - \sum_{j \in V_i^{\Phi_i(\eta(t))}} \big(\omega_{ij}^{(1)} + \omega_{ij}^{(2)}\big) - \omega_i, \qquad e_i(0) = x_0. \eqno(13)$$
It will be shown in Section 4 that when the locality information structure constraints are imposed, this results in an uncertainty due to the mismatch between the filter error dynamics in the network subject to these constraints and the errors which would arise in the same network if its communication state were known globally. It will be shown in the proof of Theorem 2 that this uncertainty satisfies conditions (12). The resilience of the constraint-free auxiliary network (11) to this uncertainty will be used in the next section to show that the network (4), constructed from the auxiliary solution, maintains the same convergence and robust $H_\infty$ consensus performance properties when the information structure constraints are enforced.

Definition 3
The auxiliary distributed consensus estimation problem is to determine sets of gains $L_i^k$, $K_{ij}^k$, $k \in I$, for the filters (11) which ensure the following:

(i) When $(\xi, \xi_i, w_{ij}) \equiv 0$, the interconnected system consisting of the subsystems (13) must be exponentially stable in the mean-square sense and asymptotically stable with probability 1 for all estimator perturbations $\omega_{ij}^{(1)}$, $\omega_{ij}^{(2)}$, and $\omega_i$ for which the correspondingly modified constraints (12) hold.

(ii) In the presence of exogenous disturbances $\xi$, $\xi_i$, $w_{ij}$, the mean-square consensus performance condition in (7) is satisfied for all admissible estimator perturbations $\omega_{ij}^{(1)}$, $\omega_{ij}^{(2)}$, and $\omega_i$ subject to (12).

(iii) All estimators converge in the mean-square sense and with probability 1; i.e., conditions (8), (9) hold.

A solution to this auxiliary problem is given in Theorem 1 below. The conditions of the theorem involve the following linear matrix inequalities in the variables $\tau_i^k > 0$, $\theta_{ij}^k > 0$, $\vartheta_{ij}^k > 0$, $X_i^k = (X_i^k)' > 0$, $i = 1, \ldots, N$, $k \in I$, $j \in V_i^{\Phi_i(k)}$:
$$\gamma^2 I - \tau_i^k \alpha_i^2 E_i^k > 0, \qquad \gamma^2 I - \theta_{ij}^k \beta_{ij}^2 F_{ij} > 0, \eqno(14)$$
$$\begin{bmatrix} Q_i^k & \star & \star & \star & \star \\ N_i^k & -\gamma^2 I & \star & \star & \star \\ S_i^k & 0 & -\gamma^2 I & \star & \star \\ M_i \otimes X_i^k & 0 & 0 & -T_i^k & \star \\ \Xi_i' & 0 & 0 & 0 & -Z_i \end{bmatrix} < 0, \eqno(15)$$
where
$$N_i^k \triangleq \big( I - (D_i^k)'(E_i^k)^{-1} D_i^k \big) B' X_i^k, \qquad S_i^k \triangleq -(\bar D_i^k)'(E_i^k)^{-1} D_i^k B' X_i^k,$$
$$T_i^k \triangleq \mathrm{diag}\big[ \tau_i^k,\ \theta_{i,j_1}^k, \ldots, \theta_{i,j_{p_i^k}}^k,\ \vartheta_{i,j_1}^k, \ldots, \vartheta_{i,j_{p_i^k}}^k \big],$$
$$Q_i^k \triangleq X_i^k \big(A + \delta_i I - B(D_i^k)'(E_i^k)^{-1} C_i^k\big) + \big(A + \delta_i I - B(D_i^k)'(E_i^k)^{-1} C_i^k\big)' X_i^k + (p_i^k + q_i^k) I + \sum_{j:\, i \in V_j^{\Phi_j(k)}} \vartheta_{ji}^k \beta_{ji}^2 H_{ji}' H_{ji} + \sum_{l=1}^M \lambda_{kl} X_i^l - \gamma^2 (C_i^k)'(E_i^k)^{-1} C_i^k - \gamma^2 \sum_{j \in V_i^{\Phi_i(k)}} H_{ij}' F_{ij}^{-1} H_{ij},$$
$$\Xi_i = \big[\, \gamma^2 H_{ij_1}' F_{ij_1}^{-1} H_{ij_1} - I \ \ \ldots \ \ \gamma^2 H_{ij_{p_i^k}}' F_{ij_{p_i^k}}^{-1} H_{ij_{p_i^k}} - I \,\big], \qquad Z_i = \mathrm{diag}\Big[ \frac{\delta_{j_1}}{q_{j_1}^k + 1} X_{j_1}^k, \ldots, \frac{\delta_{j_{p_i^k}}}{q_{j_{p_i^k}}^k + 1} X_{j_{p_i^k}}^k \Big].$$

Theorem 1
Suppose the network $(\eta, G, \Phi)$ and the constants $\gamma > 0$, $\alpha_i$, $\beta_{ij}$ and $\delta_i > 0$ are such that the coupled LMIs (14) and (15) in the variables $\tau_i^k > 0$, $\theta_{ij}^k > 0$, $\vartheta_{ij}^k > 0$, $X_i^k = (X_i^k)' > 0$, $j \in V_i^{\Phi_i(k)}$, $i = 1, \ldots, N$, $k \in I$, are feasible. Then the network of observers (11) with
$$K_{ij}^k = \gamma^2 (X_i^k)^{-1} H_{ij}' F_{ij}^{-1}, \qquad L_i^k = \big[ \gamma^2 (X_i^k)^{-1} (C_i^k)' + B(D_i^k)' \big] (E_i^k)^{-1} \eqno(16)$$
solves the auxiliary estimation problem in Definition 3. The matrix $P$ in condition (7) corresponding to this solution is $P = \frac{\gamma^2}{N} \sum_{i=1}^N X_i^m$, where $m = \eta(0)$.

The proof of Theorem 1 is given in the Appendix.

Special case: Broadcast of the global state
When the global Markov state of the network is available at every node, the solution to the distributed $H_\infty$ consensus estimation problem can be obtained from Theorem 1 by letting $\omega_i = \omega_{ij}^{(1)} = \omega_{ij}^{(2)} = 0$ and taking $\alpha_i = \beta_{ij} = 0$.

Corollary 1
Suppose the network $(\eta, G, \Phi)$ and the constants $\gamma > 0$ and $\delta_i > 0$ are such that the following coupled LMIs in the variables $X_i^k = (X_i^k)' > 0$, $j \in V_i^{\Phi_i(k)}$, $i = 1, \ldots, N$, $k \in I$, are feasible:
$$\begin{bmatrix} \bar Q_i^k & \star & \star & \star \\ N_i^k & -\gamma^2 I & \star & \star \\ S_i^k & 0 & -\gamma^2 I & \star \\ \Xi_i' & 0 & 0 & -Z_i \end{bmatrix} < 0, \eqno(17)$$
$$\bar Q_i^k \triangleq X_i^k \big(A + \delta_i I - B(D_i^k)'(E_i^k)^{-1} C_i^k\big) + \big(A + \delta_i I - B(D_i^k)'(E_i^k)^{-1} C_i^k\big)' X_i^k + (p_i^k + q_i^k) I + \sum_{l=1}^M \lambda_{kl} X_i^l - \gamma^2 (C_i^k)'(E_i^k)^{-1} C_i^k - \gamma^2 \sum_{j \in V_i^{\Phi_i(k)}} H_{ij}' F_{ij}^{-1} H_{ij}.$$
Then the network of observers (11) with $\omega_i = \omega_{ij}^{(1)} = \omega_{ij}^{(2)} = 0$ and $K_{ij}^k$, $L_i^k$ defined in (16) solves the estimation problem in Definition 3. The matrix $P$ in condition (7) corresponding to this solution is $P = \frac{\gamma^2}{N} \sum_{i=1}^N X_i^m$, where $m = \eta(0)$.

In this section, the solution to the auxiliary distributed estimation problem developed in Section 3 will be used to obtain a distributed estimator whose nodes utilize only locally available information. This will be achieved by taking the asymptotic conditional expectation of the auxiliary gains, given a local state. Our method is based on the following technical result of [20].
Lemma 1
Suppose the Markov process $\eta(t)$ is irreducible and has a unique invariant distribution $\bar\lambda$. Given a matrix-valued function $K^{(\cdot)} : I \to \{K^1, \ldots, K^M\} \subset \mathbf{R}^{n\times s}$, for every node $i$ and for all $k_i \in I_i$ we have
$$\lim_{t\to\infty} \mathbf E\big( K^{\eta(t)} \,\big|\, \eta_i(t) = k_i \big) = \frac{\sum_{l:\, \Phi_i(l) = k_i} \bar\lambda_l K^l}{\sum_{l:\, \Phi_i(l) = k_i} \bar\lambda_l}. \eqno(18)$$

Now let $K_{ij}^k$, $L_i^k$, $k \in I$, be the coefficients of the auxiliary distributed estimator obtained in Theorem 1. Using Lemma 1, for each $i = 1, \ldots, N$ and $k_i \in I_i$ we define
$$\tilde K_{ij}^{k_i} = \frac{\sum_{l:\, \Phi_i(l) = k_i} \bar\lambda_l K_{ij}^l}{\sum_{l:\, \Phi_i(l) = k_i} \bar\lambda_l}, \qquad \tilde L_i^{k_i} = \frac{\sum_{l:\, \Phi_i(l) = k_i} \bar\lambda_l L_i^l}{\sum_{l:\, \Phi_i(l) = k_i} \bar\lambda_l}. \eqno(19)$$

By Lemma 1, the processes $\tilde K_{ij}^{\eta_i(t)}$, $\tilde L_i^{\eta_i(t)}$ are then the asymptotic minimum-variance approximations of the corresponding processes $K_{ij}^{\eta(t)}$, $L_i^{\eta(t)}$. However, unlike $K_{ij}^{\eta(t)}$, $L_i^{\eta(t)}$, the evolution of $\tilde K_{ij}^{\eta_i(t)}$, $\tilde L_i^{\eta_i(t)}$ is governed by the local communication and sensing state process $\eta_i$.

To formulate the main result of this paper, consider the collection of matrix inequalities in the variables $\tau_i^k$, $\theta_{ij}^k$, $\vartheta_{ij}^k$, $X_i^k$ and $Y_i^k$, consisting of the LMIs (14), (15), and the following additional LMIs:
$$\begin{bmatrix} \alpha_i^2 I & \Delta_i^{L,k} \\ (\Delta_i^{L,k})' & I \end{bmatrix} > 0, \qquad \begin{bmatrix} \beta_{ij}^2 I & \Delta_{ij}^{K,k} \\ (\Delta_{ij}^{K,k})' & I \end{bmatrix} > 0, \eqno(20)$$
where $\alpha_i$, $\beta_{ij}$ are the same constants as those employed in the LMIs (14), (15), and
$$\Delta_i^{L,k} \triangleq \frac{\sum_{l:\, l \ne k,\, \Phi_i(l) = \Phi_i(k)} \gamma^2 \bar\lambda_l \big[ Y_i^k (C_i^k)'(E_i^k)^{-1} - Y_i^l (C_i^l)'(E_i^l)^{-1} \big]}{\sum_{l:\, \Phi_i(l) = \Phi_i(k)} \bar\lambda_l}, \qquad \Delta_{ij}^{K,k} \triangleq \frac{\sum_{l:\, l \ne k,\, \Phi_i(l) = \Phi_i(k)} \gamma^2 \bar\lambda_l \big[ Y_i^k - Y_i^l \big] H_{ij}' F_{ij}^{-1}}{\sum_{l:\, \Phi_i(l) = \Phi_i(k)} \bar\lambda_l}.$$
Also, consider the rank constraints
$$\mathrm{rank} \begin{bmatrix} Y_i^k & I \\ I & X_i^k \end{bmatrix} \le n. \eqno(21)$$

Theorem 2
Given a Markovian switching network $(\eta, G, \Phi)$ and a collection of constants $\gamma$, $\alpha_i$, $\beta_{ij}$ and $\delta_i > 0$, $i = 1, \ldots, N$, associated with each node, suppose there exist matrices $X_i^k = (X_i^k)' > 0$, $Y_i^k = (Y_i^k)' > 0$, and positive scalars $\tau_i^k$, $\theta_{ij}^k$, $\vartheta_{ij}^k$, $i = 1, \ldots, N$, $k \in I$, $j \in V_i^{\Phi_i(k)}$, which satisfy the matrix inequalities (14), (15), (20), and the rank constraint (21). Using the solution matrices $Y_i^k$, define the auxiliary gains
$$K_{ij}^k = \gamma^2 Y_i^k H_{ij}' F_{ij}^{-1}, \qquad L_i^k = \big[ \gamma^2 Y_i^k (C_i^k)' + B(D_i^k)' \big] (E_i^k)^{-1}. \eqno(22)$$
Next, using (19) and (22), construct the estimator network (4). The resulting distributed estimator network solves the distributed robust estimation problem in Definition 2.

Proof
The result follows from Theorem 1, in a manner similar to the proof of Theorem 4 in [20].

First, we observe that the observer gains $K_{ij}^k$, $L_i^k$ constructed in this theorem also satisfy the conditions of Theorem 1, since $(X_i^k)^{-1} = Y_i^k$ in view of (21). This allows us to claim that the network of auxiliary estimators (11), (22) solves the auxiliary robust estimation problem in Definition 3.

Next, consider the observer gains defined using (19) and (22). Note that for all $i = 1, \ldots, N$, $k \in I$, and $j \in V_i^{\Phi_i(k)}$,
$$K_{ij}^k - \tilde K_{ij}^{\Phi_i(k)} = \frac{\sum_{l:\, l \ne k,\, \Phi_i(l) = \Phi_i(k)} \bar\lambda_l \big[ K_{ij}^k - K_{ij}^l \big]}{\sum_{l:\, \Phi_i(l) = \Phi_i(k)} \bar\lambda_l}, \qquad L_i^k - \tilde L_i^{\Phi_i(k)} = \frac{\sum_{l:\, l \ne k,\, \Phi_i(l) = \Phi_i(k)} \bar\lambda_l \big[ L_i^k - L_i^l \big]}{\sum_{l:\, \Phi_i(l) = \Phi_i(k)} \bar\lambda_l}.$$
It then follows from (20) that
$$\|\tilde L_i^{\Phi_i(k)} - L_i^k\| \le \alpha_i, \qquad \|\tilde K_{ij}^{\Phi_i(k)} - K_{ij}^k\| \le \beta_{ij}. \eqno(23)$$
Therefore the particular perturbations in the estimators (11),
$$\omega_i = \big(\tilde L_i^{\eta_i(t)} - L_i^{\eta(t)}\big)\big(C_i^{\eta(t)} e_i(t) + D_i^{\eta(t)} \xi + \bar D_i^{\eta(t)} \xi_i\big), \qquad \omega_{ij}^{(1)} = \big(\tilde K_{ij}^{\eta_i(t)} - K_{ij}^{\eta(t)}\big)(H_{ij} e_i + G_{ij} w_{ij}), \qquad \omega_{ij}^{(2)} = -\big(\tilde K_{ij}^{\eta_i(t)} - K_{ij}^{\eta(t)}\big) H_{ij} e_j, \eqno(24)$$
satisfy (12). This means that the estimator (4), in which the above particular set of gains $\tilde K_{ij}^{\eta_i(t)}$, $\tilde L_i^{\eta_i(t)}$ is employed, represents one instance of the auxiliary estimator (11), corresponding to the particular perturbation (24), which is admissible due to (12). Therefore, since the matrices $K_{ij}^k$, $L_i^k$, $k \in I$, solve the auxiliary $H_\infty$ consensus estimation problem in Definition 3, the distributed estimator (4) with the local gains selected above solves the robust consensus estimation problem in Definition 2. $\square$

Remark 2
Due to the rank constraints (21), the solution set of the matrix inequalities in Theorem 2 is non-convex. In general, such problems are difficult to solve. Fortunately, several numerical algorithms have been proposed for this purpose [3,11].
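Once the auxiliary gains are in hand, the averaging (19) is straightforward to carry out numerically. The sketch below is not taken from the paper; it computes the invariant distribution $\bar\lambda$ of a generic continuous-time Markov chain generator and then forms the conditional average of a family of gain matrices over the global states compatible with a given local state. All names and the example generator are illustrative.

```python
import numpy as np

def stationary_distribution(Lam):
    """Invariant distribution of a CTMC with generator Lam (rows sum to 0):
    solve bar_lam' @ Lam = 0 together with sum(bar_lam) = 1."""
    M = Lam.shape[0]
    A = np.vstack([Lam.T, np.ones((1, M))])
    b = np.zeros(M + 1)
    b[-1] = 1.0
    bar_lam, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bar_lam

def local_gain(gains, bar_lam, Phi_i, k_i):
    """Conditional average (19): mix gains[l] over the global states l with
    Phi_i(l) = k_i, weighted by the invariant probabilities bar_lam[l]."""
    idx = [l for l in range(len(gains)) if Phi_i(l) == k_i]
    w = np.array([bar_lam[l] for l in idx])
    return sum(wl * gains[l] for wl, l in zip(w, idx)) / w.sum()
```

For a node whose local state does not distinguish between several global states, the resulting gain is a probability-weighted blend of the corresponding auxiliary gains, which is exactly the mechanism behind the nonswitching gain of node 1 in the example of Section 6.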
In this section, we briefly discuss necessary requirements on the network topology. Recall that condition (i) of Definition 2 requires that, in the absence of perturbations, the estimation error dynamics must be asymptotically stabilizable via output injection in the mean-square sense. This problem belongs to the class of stochastic mean-square detectability problems for linear jump parameter systems [2]. Unfortunately, even without the locality information structure constraint, there is no easy direct algebraic test to verify this property. Some conclusions can, however, be drawn to provide insight into the connection between the graph Laplacian and the existence of stabilizing output injection gains.

To highlight such a connection, in this section we make the simplifying assumption $H_{ij} = H$, $G_{ij} = G_i$, $r_{ij} = \bar r_i$ for all $j \in V_i^{\Phi_i(k)}$. From (22), it follows that in this case $\tilde K_{ij}^{k_i}$ does not depend on $j$. Hence we will also assume $\tilde K_{ij}^{k_i} = \tilde K_i^{k_i}$. Define $\bar A = I_N \otimes A$, $A_k \triangleq A + \lambda_{kk} I$, $\bar A_k \triangleq \bar A + \lambda_{kk} I_{nN}$, $\bar H_k = L_k \otimes H$. Let $\mathcal C_i^k$, $\bar{\mathcal O}_k$, $\mathcal O_H$ denote the undetectable subspace of $(C_i^k, A_k)$ and the unobservable subspaces of $(\bar H_k, \bar A_k)$ and $(H, A)$, respectively. The following theorem shows that for the problem in Definition 2 to have a solution, every combination of undetectable states of the pairs $(C_i^k, A_k)$ must necessarily form an observable vector of $(\bar H_k, \bar A_k)$. The proofs of this and subsequent results are omitted for the sake of brevity.

Theorem 3
Suppose there exist output injection gains $\tilde L^{k_i}_i$, $\tilde K^{k_i}_{ij} = \tilde K^{k_i}_i$, $j \in \mathbf{V}_i^{k_i}$, $k_i \in \mathcal{I}_i$, $i = 1, \ldots, N$, such that the first condition in Definition 2(i) holds. Then
$$
\bar{\mathcal{O}}^k \cap \prod_{i=1}^N \mathcal{C}^k_i = \{0\} \quad \forall k \in \mathcal{I}. \qquad (25)
$$
We now present a necessary and a sufficient condition for (25) to hold. The sufficient condition is explicitly expressed in terms of the network Laplacian matrices $\mathcal{L}^k$.

Theorem 4 (a) If (25) holds, then for every $k \in \mathcal{I}$:
(i) $\bigcap_{i=1}^N \mathcal{C}^k_i = \{0\}$, and
(ii) $\mathcal{O}_H \cap \mathcal{C}^k_i = \{0\}$ for all $i = 1, \ldots, N$.
(b) Conversely, suppose the geometric multiplicity of the zero eigenvalue of $\mathcal{L}^k$ is equal to 1. If properties (i) and (ii) above hold for every $k$, then (25) is satisfied.

One can further specialize the sufficient conditions in Theorem 4, e.g., for the cases of a balanced graph or a graph containing a spanning tree. Also, note that the information structure constraint is not used in the proofs. Therefore, the results in this section apply to more general distributed estimation problems, such as the auxiliary problem considered in Section 3.
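The subspace conditions in Theorems 3 and 4 can be checked numerically with standard linear algebra. The sketch below uses illustrative matrices, not those of the paper's example; for brevity it computes unobservable subspaces as null spaces of observability matrices, whereas the undetectable subspaces $\mathcal{C}^k_i$ in the theorems are the restrictions of such subspaces to non-asymptotically-stable modes.

```python
import numpy as np

def unobs_subspace(C, A):
    # Columns span the unobservable subspace: the null space of the
    # observability matrix [C; CA; ...; CA^(n-1)]
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, p) for p in range(n)])
    _, s, Vt = np.linalg.svd(O)
    rank = int((s > 1e-9 * s.max()).sum())
    return Vt[rank:].T

def intersection_trivial(bases):
    # col(B1) ∩ ... ∩ col(Bm) = {0} iff the sum of the complementary
    # projectors (I - B B^+) is nonsingular
    n = bases[0].shape[0]
    P = sum(np.eye(n) - B @ np.linalg.pinv(B) for B in bases)
    return np.linalg.matrix_rank(P) == n

A = np.array([[0.0, 1.0], [0.0, 0.0]])
H = np.array([[1.0, 0.0]])    # (H, A) observable, so O_H = {0}
C1 = np.array([[0.0, 1.0]])   # unobservable subspace of (C1, A) is span{e1}
C2 = np.array([[1.0, 0.0]])   # (C2, A) observable

O_H = unobs_subspace(H, A)
C1u, C2u = unobs_subspace(C1, A), unobs_subspace(C2, A)
print(O_H.shape[1], intersection_trivial([C1u, C2u]))  # → 0 True
```

In this toy instance, condition (i) of Theorem 4 holds (the two unobservable subspaces intersect trivially), and condition (ii) is immediate since $\mathcal{O}_H = \{0\}$.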
Consider a plant of the form (2). The nominal part of the plant describes one of the regimes of the so-called Chua electronic circuit. The Chua circuit is an example of a system which switches between three regimes of operation in a chaotic fashion; for the sake of simplicity, here we consider only one regime. The plant is observed by a 5-node switching observer network which operates intermittently in two regimes. Its graph topologies are shown in Figure 1. The evolution of the network is modelled as a two-state Markov chain with transition probability rate matrix $\Lambda$.

Fig. 1. Switching graph topology for the example.

Table 1. Coefficients $C^k_i$ for the example; each $C^k_i$ takes one of the two row-vector values $C^{*1}$, $C^{*2}$.

Note that the graph corresponding to the state $k = 2$ was used in [18,21] to demonstrate synchronization of Chua systems. Indeed, the filters share the same matrix $A$ as the plant, and can be interpreted as 'slave' Chua systems operating in the same regime as the master. Accordingly, the convergence of the filters in our example can be interpreted as observer-based synchronization between the slaves and the master; see [18] for further details. However, unlike [18], in this example the graph topology is time-varying, as explained below.

From Figure 1, nodes 3, 4, and 5 have varying neighbourhoods. Also, in this example we suppose that node 2 changes its sensor parameters when the network switches between the two configurations. As a result, each local state process, except for that of node 1, has two states and always takes the same value as the global state process. On the other hand, node 1 always maintains the same local state, and its local process is constant. Therefore, we seek to obtain nonswitching observer gains for node 1 only.
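As an aside, the two-state Markovian switching of the topology can be illustrated by sampling the chain directly. The generator entries below are placeholders (the paper's numerical transition rates are not reproduced here); a sampled path also shows how the chain's long-run occupation frequencies emerge from the transition rates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder generator for the two-state topology process eta(t)
Lam = np.array([[-0.5, 0.5],
                [ 0.2, -0.2]])

def simulate_ctmc(Lam, T, state=0):
    """Sample one path of a two-state continuous-time Markov chain on [0, T]."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.exponential(1.0 / -Lam[state, state])  # exponential holding time
        if t >= T:
            return path
        state = 1 - state       # with two states, every jump goes to the other one
        path.append((t, state))

T = 1000.0
path = simulate_ctmc(Lam, T)
times = [t for t, _ in path] + [T]
occ0 = sum(times[i + 1] - times[i] for i, (_, s) in enumerate(path) if s == 0) / T
# occ0 approximates the stationary probability 0.2 / (0.5 + 0.2) ≈ 0.286
print(round(occ0, 3))
```

For a two-state chain with rates $q_{12} = 0.5$ and $q_{21} = 0.2$, the stationary probability of the first state is $q_{21}/(q_{12} + q_{21}) \approx 0.286$, which the empirical occupation time approaches as $T$ grows.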
According to this description, in this example $\mathcal{I} = \mathcal{I}_2 = \mathcal{I}_3 = \mathcal{I}_4 = \mathcal{I}_5 = \{1, 2\}$, $\mathcal{I}_1 = \{1\}$, and the mapping $\Phi$ is as follows: $\Phi(1) = (1,1,1,1,1)$, $\Phi(2) = (1,2,2,2,2)$. Numerical values of the matrices $C^k_i$, $k = 1, 2$, for this example are given in Table 1; they are assumed to take one of the two values $C^{*1}$, $C^{*2}$ shown in the table. These values were chosen so that the pairs $(C^k_1, A + \lambda_{kk} I)$, $(C^k_4, A + \lambda_{kk} I)$, $k = 1, 2$, corresponding to nodes 1 and 4, had undetectable modes, while node 2 was allowed to switch between detectable and undetectable coefficient pairs. Therefore, for estimation these nodes were to rely on communication with their neighbours. Also, we let $D^k_i = 0$ for all nodes and all $k$, set $H_{ij} = I_{3\times 3}$, and chose $\bar D^k_i$ and $G_{ij}$ to be small constant multiples of the identity, identical across nodes and modes.

Note that both instances of the network have spanning trees with roots at nodes 3 and 5, respectively. These nodes have detectable matrix pairs $(C^k_3, A + \lambda_{kk} I)$, $(C^k_5, A + \lambda_{kk} I)$, $k = 1, 2$. Also, $(H, A)$ is observable. It follows from these properties that the conditions in part (b) of Theorem 4 are satisfied. Hence, the necessary condition for global detectability stated in Theorem 3 holds.

Fig. 2. One path of $\eta(t)$ (a) and estimation errors for the first coordinate at nodes 1, 2 and 5 (b).

The design of the observer network was carried out using Matlab and the LMI solver LMIRank based on [11]. To obtain a set of non-switching gains for node 1, norm-bounded uncertainty constraints of the form (23) were defined for the communication link $(3, 1)$ at node 1. The bounds $\alpha_1$, $\beta_{13}$, as well as the constants $\delta_i$, were chosen by trial and error, to ensure that the corresponding rank-constrained LMIs in Theorem 2 were feasible. The feasibility was achieved with a disagreement attenuation level $\gamma < 1$.
This allowed us to compute the nonswitching gains $\tilde K_{1j}$ and $\tilde L_1$ for node 1 using (22).

To validate the design, the system and the designed filters were simulated numerically, with a random initial condition $x_0$. All uncertain perturbations were chosen to be of the form $\sin(a\pi t + \varphi)e^{-bt}$, with different coefficients $a$, $\varphi$ and $b$ for each perturbation. Also we let $w_{ij}(t) = w_{ji}(t)$, reflecting the undirected nature of the channels in this example. The graphs of one realization of the global state process $\eta(t)$ and the corresponding estimation errors at nodes 1 (the nonswitching filter), 2 (the filter with the switching sensing regime) and 5 (the filter with the varying neighbourhood) are shown in Figures 2(a) and 2(b), respectively. The graph in Figure 2(b) confirms the ability of the proposed node estimators to successfully mitigate the changes in the graph topology and sensing regimes, as well as uncertain perturbations in the plant, measurements and interconnections.

The paper has presented sufficient conditions for the synthesis of robust distributed consensus estimators connected over a Markovian network. The proposed estimator provides a guaranteed suboptimal $H_\infty$ level of disagreement of estimates, while using only locally available information about the communication and sensing state of the network. Our conditions allow a robust filter network to be constructed by solving an LMI feasibility problem. The LMIs are partitioned in a way which opens a possibility for solving them in a decentralized manner. When the network's global state is available at every node, this feasibility problem is convex, and the corresponding LMIs are solvable, e.g., using the decentralized gradient descent algorithm in [18]. However, the elimination of the network state broadcast has led to the introduction of rank constraints additional to the LMI conditions. Therefore, new numerical algorithms need to be developed to exploit the proposed partition of the LMIs and rank constraints.
This problem is left for future research. Other possible directions for future research concern the integration of our approach with other distributed $H_\infty$ filtering techniques, such as, for example, techniques involving randomly sampled measurements [16].

Acknowledgement
Discussions with C. Langbort are gratefully acknowledged.
The following continuous-time counterpart of the Robbins-Siegmund convergence theorem [14] will be used in the proof of Theorem 1. Its proof is similar to that of the discrete-time result in [14].
Lemma 2
Consider nonnegative random processes $v(t)$, $\varphi(t)$ and $\psi(t)$ adapted to a filtration $\{\bar{\mathcal{F}}_t, t \ge 0\}$, with the following properties:
(a) $v(t)$ is right-continuous on $[0, \infty)$;
(b) $\psi(t)$ is locally Lebesgue-integrable on $[0, \infty)$ with probability 1, i.e., almost all realizations of $\psi(t)$ have the property $\int_s^t \psi(\theta)\,d\theta < \infty$ for all $t \ge s \ge 0$;
(c) $\mathbf{E}\int_0^\infty \varphi(s)\,ds < \infty$;
(d) The following inequality holds a.s. for all $t \ge s \ge 0$:
$$
\mathbf{E}\Big[ v(t) + \int_s^t \psi(\theta)\,d\theta \,\Big|\, \bar{\mathcal{F}}_s \Big] \le v(s) + \mathbf{E}\Big[ \int_s^t \varphi(\theta)\,d\theta \,\Big|\, \bar{\mathcal{F}}_s \Big]. \qquad (26)
$$
Then the limit $\lim_{t\to\infty} v(t)$ exists and is finite with probability 1. Also, $\int_0^\infty \psi(s)\,ds < \infty$ a.s.

Proof of Theorem 1
We will use the notation $k_i = \Phi_i(k)$, $k_j = \Phi_j(k)$, where $k \in \mathcal{I}$, $k_i \in \mathcal{I}_i$, $k_j \in \mathcal{I}_j$. Also, $\hat D^k_i = [D^k_i\ \ \bar D^k_i]$, $\hat B = [B\ \ 0]$, $\hat\xi_i = [\xi'\ \ \xi_i']'$.

Let $\mathcal{L}$ denote the infinitesimal generator of the interconnected system consisting of the subsystems (13) [1]. Consider the vector Lyapunov candidate for this system, $[V_1(e_1, k)\ \ldots\ V_N(e_N, k)]'$, with quadratic components $V_i(e_i, k) = e_i' X^k_i e_i$. Also, define $V(e, k) = \sum_{i=1}^N V_i(e_i, k)$. Since $\mathcal{L}$ is a linear operator and $\partial V_i/\partial e_j = 0$ for $j \neq i$, we have $[\mathcal{L}V](e, k) = \sum_{i=1}^N [\mathcal{L}_i V_i](e, k)$, where
$$
[\mathcal{L}_i V_i](e, k) \triangleq \sum_{l=1}^M \lambda_{kl} V_i(e_i, l) + \Big( \frac{\partial V_i}{\partial e_i} \Big)' \Big( (A - L^k_i C^k_i) e_i + \sum_{j \in \mathbf{V}_i^{k_i}} K^k_{ij} \big( H_{ij}(e_j - e_i) - G_{ij} w_{ij} \big) + (\hat B - L^k_i \hat D^k_i)\hat\xi_i - \omega_i - \sum_{j \in \mathbf{V}_i^{k_i}} \big( \omega^{(1)}_{ij} + \omega^{(2)}_{ij} \big) \Big).
$$
For arbitrary $\tau^k_i, \theta^k_{ij}, \vartheta^k_{ij} > 0$, consider the expression
$$
[\mathcal{L}V](e, k) + \sum_{i=1}^N \Big[ \tau^k_i \big( \alpha_i^2 \| C^k_i e_i + \hat D^k_i \hat\xi_i \|^2 - \|\omega_i\|^2 \big) + \sum_{j \in \mathbf{V}_i^{k_i}} \theta^k_{ij} \big( \beta_{ij}^2 \| H_{ij} e_i + G_{ij} w_{ij} \|^2 - \|\omega^{(1)}_{ij}\|^2 \big) + \sum_{j \in \mathbf{V}_i^{k_i}} \vartheta^k_{ij} \big( \beta_{ij}^2 \| H_{ij} e_j \|^2 - \|\omega^{(2)}_{ij}\|^2 \big) \Big] = \sum_{i=1}^N R_i(e, k),
$$
where we let
$$
\begin{aligned}
R_i(e, k) \triangleq{}& [\mathcal{L}_i V_i](e, k) + \tau^k_i \big( \alpha_i^2 \| C^k_i e_i + \hat D^k_i \hat\xi_i \|^2 - \|\omega_i\|^2 \big) \\
&+ \sum_{j \in \mathbf{V}_i^{k_i}} \theta^k_{ij} \big( \beta_{ij}^2 \| H_{ij} e_i + G_{ij} w_{ij} \|^2 - \|\omega^{(1)}_{ij}\|^2 \big) \\
&+ e_i' \Big( \sum_{j\colon i \in \mathbf{V}_j^{k_j}} \vartheta^k_{ji} \beta_{ji}^2 H_{ji}' H_{ji} \Big) e_i - \sum_{j \in \mathbf{V}_i^{k_i}} \vartheta^k_{ij} \|\omega^{(2)}_{ij}\|^2. \qquad (27)
\end{aligned}
$$
By completing the squares, one can establish that
$$
R_i(e, k) \le e_i' U_i e_i + 2 e_i' X^k_i \sum_{j \in \mathbf{V}_i^{k_i}} K^k_{ij} H_{ij} e_j + \gamma^2 \big( \|\xi\|^2 + \|\xi_i\|^2 \big) + \gamma^2 \sum_{j \in \mathbf{V}_i^{k_i}} \|w_{ij}\|^2, \qquad (28)
$$
where
$$
\begin{aligned}
U_i ={}& X^k_i \big( A - \hat B (\hat D^k_i)' (E^k_i)^{-1} C^k_i \big) + \big( A - \hat B (\hat D^k_i)' (E^k_i)^{-1} C^k_i \big)' X^k_i + \sum_{l=1}^M \lambda_{kl} X^l_i \\
&+ \Big( \tau^k_i + \sum_{j \in \mathbf{V}_i^{k_i}} \Big( \theta^k_{ij} + \frac{1}{\theta^k_{ij}} \Big) \Big) X^k_i X^k_i + \sum_{j\colon i \in \mathbf{V}_j^{k_j}} \vartheta^k_{ji} \beta_{ji}^2 H_{ji}' H_{ji} \\
&+ \frac{1}{\gamma^2} X^k_i \hat B \big( I - (\hat D^k_i)' (E^k_i)^{-1} \hat D^k_i \big) \hat B' X^k_i - \gamma^2 (C^k_i)' (E^k_i)^{-1} C^k_i - \gamma^2 \sum_{j \in \mathbf{V}_i^{k_i}} H_{ij}' F_{ij}^{-1} H_{ij}.
\end{aligned}
$$
We now observe that it follows from the LMI (15) that for any nonzero collection of vectors $e_i, e_j \in \mathbb{R}^n$,
$$
e_i' U_i e_i + 2 e_i' X^k_i \sum_{j \in \mathbf{V}_i^{k_i}} K^k_{ij} H_{ij} e_j + (p^k_i + q^k_i) \|e_i\|^2 - e_i' \sum_{j \in \mathbf{V}_i^{k_i}} e_j < \sum_{j=1}^N \pi^k_{ij} \big( e_j' X^k_j e_j \big), \qquad (29)
$$
where $\pi^k_{ij}$ are the elements of the $N \times N$ matrix $\Pi^k$, defined as
$$
\pi^k_{ij} = \begin{cases} -\delta_i, & j = i, \\ \dfrac{\delta_j}{q^k_j + 1}, & j \in \mathbf{V}_i^{k_i}, \\ 0, & \text{otherwise.} \end{cases} \qquad (30)
$$
Together with (28), the latter inequality leads to
$$
R_i(e, k) + (p^k_i + q^k_i) \|e_i\|^2 - e_i' \sum_{j \in \mathbf{V}_i^{k_i}} e_j < \gamma^2 \|\hat\xi_i\|^2 + \gamma^2 \sum_{j \in \mathbf{V}_i^{k_i}} \|w_{ij}\|^2 + \sum_{j \in \mathbf{V}_i^{k_i} \cup \{i\}} \pi^k_{ij} V_j(e_j, k). \qquad (31)
$$
It is easy to verify using (30) that all components of the vector $\mathbf{1}_N' \Pi^k$ are negative and do not exceed $-\epsilon$, where $\epsilon = \min_{i,k} \frac{\delta_i}{q^k_i + 1}$. Hence, it follows from (27), (31) that the following dissipation inequality holds for all $e_i$, $\xi$, $\xi_i$, $w_{ij}$, and for all uncertainty signals $\omega_i(t)$, $\omega^{(1)}_{ij}(t)$, $\omega^{(2)}_{ij}(t)$ satisfying the constraints (12):
$$
N\Psi^k(e) + [\mathcal{L}V](e, k) \le -\epsilon V(e, k) + \gamma^2 \sum_{i=1}^N \Big( \|\xi_i\|^2 + \|\xi\|^2 + \sum_{j \in \mathbf{V}_i^{k_i}} \|w_{ij}\|^2 \Big). \qquad (32)
$$
The statement of Theorem 1 now follows from (32). This can be shown using the same argument as that used to derive the statement of Theorem 1 in [18] from a similar dissipation inequality. Indeed, let $\xi, \xi_i \in L_2[0, \infty)$, $i = 1, \ldots, N$. Since equation (13) defines $(e(t), \eta(t))$ to be a Markov process, we obtain from (32), using the Dynkin formula, that
$$
\begin{aligned}
&\mathbf{E}\big[ V(e(t), \eta(t)) \,\big|\, e(s), \eta(s) \big] - V(e(s), \eta(s)) + \mathbf{E}\Big[ \int_s^t \big( \epsilon V(e(t), \eta(t)) + N\Psi^{\eta(t)}(e(t)) \big)\,dt \,\Big|\, e(s), \eta(s) \Big] \\
&\qquad \le \gamma^2 \mathbf{E}\Bigg[ \int_s^t \sum_{i=1}^N \Bigg( \|\xi_i(t)\|^2 + \|\xi(t)\|^2 + \sum_{j=1}^N a^{\eta(t)}_{ij} \|w_{ij}(t)\|^2 \Bigg) dt \,\Bigg|\, e(s), \eta(s) \Bigg]. \qquad (33)
\end{aligned}
$$
Here $\mathbf{E}[\,\cdot \,|\, e(s), \eta(s)]$ is the expectation conditioned on the $\sigma$-field generated by $(e(t), \eta(t))$, $t \le s$.
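The dissipation inequality (32) rests on the mode-coupled Lyapunov structure typical of Markov jump systems. The following is a minimal numerical sketch with toy matrices, not the paper's data: with quadratic certificates $X^k$, internal mean-square stability is certified when $A_k' X^k + X^k A_k + \sum_l \lambda_{kl} X^l < 0$ for every mode $k$; choosing identical certificates makes the coupling term vanish, because the rows of a generator sum to zero.

```python
import numpy as np

# Toy two-mode Markov jump linear system (all values illustrative)
A = [np.array([[-1.0, 0.5], [0.0, -2.0]]),
     np.array([[-3.0, 0.0], [1.0, -1.0]])]
Lam = np.array([[-0.5, 0.5],
                [ 0.3, -0.3]])   # generator: rows sum to zero
X = [np.eye(2), np.eye(2)]       # identical certificates decouple the modes

for k in range(2):
    # coupled Lyapunov test: A_k' X_k + X_k A_k + sum_l lambda_kl X_l < 0
    M = A[k].T @ X[k] + X[k] @ A[k] + sum(Lam[k, l] * X[l] for l in range(2))
    assert np.all(np.linalg.eigvalsh(M) < 0)
print("coupled Lyapunov inequalities hold")
```

In the paper the certificates $X^k_i$ are mode- and node-dependent and are produced by the LMIs of Theorems 1 and 2, so the coupling term does not vanish; the sketch only isolates the mechanism by which the generator $\Lambda$ enters the Lyapunov conditions.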
We now observe that the processes
$$
v(t) \triangleq V(e(t), \eta(t)), \qquad \varphi(t) \triangleq \gamma^2 \sum_{i=1}^N \Big( \|\xi_i(t)\|^2 + \|\xi(t)\|^2 + \sum_{j=1}^N a^{\eta(t)}_{ij} \|w_{ij}(t)\|^2 \Big), \qquad \psi(t) \triangleq N\Psi^{\eta(t)}(e(t)) + \epsilon V(e(t), \eta(t))
$$
satisfy the conditions of Lemma 2. This leads to the conclusion that $\int_0^\infty \big( N\Psi^{\eta(t)}(e(t)) + \epsilon V(e(t), \eta(t)) \big)\,dt < \infty$ a.s., and also $\lim_{t\to\infty} V(e(t), \eta(t)) < \infty$ a.s. Due to the condition $X^k_i > 0$ for all $i$, we conclude that $\lim_{t\to\infty} \|e_i(t)\|^2$ exists and $\int_0^\infty \|e_i(t)\|^2\,dt < \infty$ a.s. This implies $\lim_{t\to\infty} e_i(t) = 0$ with probability 1 for all $i$ and arbitrary disturbances $\xi, \xi_i, w_{ij} \in L_2[0, \infty)$; i.e., (9) holds. In the case where $\xi_i = 0$, $w_{ij} = 0$, $\xi = 0$, the above observation immediately yields the statement of the theorem about internal stability of the system (13), (16) with probability 1. The claim of internal exponential mean-square stability follows directly from (32), since $\Psi^k \ge 0$ by definition.

Also, by taking the expectation conditioned on $e_i(0) = x_0$, $\eta(0) = m$ on both sides of (33) and then letting $t \to \infty$, we obtain condition (7), in which $P = N\gamma^2 \sum_{i=1}^N X^m_i$. Condition (8) follows from (33) in a similar manner. Taking the expectation conditioned on $e_i(0) = x_0$, $\eta(0) = m$ on both sides of (33), then dropping the nonnegative term $\int_0^t N\Psi^{\eta(t)}(e(t))\,dt$ and letting $t \to \infty$, we establish that $\mathbf{E}_{x_0,m} \int_0^\infty V(e(t), \eta(t))\,dt < \infty$. Hence $\mathbf{E}_{x_0,m} \int_0^\infty \|e(t)\|^2\,dt < \infty$. ✷

References

[1] A. Bain and D. Crisan.
Fundamentals of Stochastic Filtering. Springer, NY, 2009.
[2] E. F. Costa and J. B. R. do Val. On the observability and detectability of continuous-time Markov jump linear systems. SIAM J. Contr. Optim., 41:1295-1314, 2002.
[3] L. El Ghaoui, F. Oustry, and M. Ait Rami. A cone complementarity linearization algorithm for static output-feedback and related problems. IEEE Trans. Automat. Contr., 42:1171-1176, 1997.
[4] W. M. Haddad, V. Chellaboina, and S. G. Nersesov. Vector dissipativity theory and stability of feedback interconnections for large-scale non-linear dynamical systems. Int. J. Contr., 77:907-919, 2004.
[5] W. M. Haddad and J. R. Corrado. Robust resilient dynamic controllers for systems with parametric uncertainty and controller gain variations. Int. J. Contr., 73:1405-1423, 2000.
[6] C. Langbort, V. Gupta, and R. Murray. Distributed control over failing channels. Networked Embedded Sensing and Control, pp. 325-342. Springer, 2006.
[7] L. Li, V. Ugrinovskii, and R. Orsi. Decentralized robust control of uncertain Markov jump parameter systems via output feedback. Automatica, 43:1932-1944, 2007.
[8] I. Matei, N. C. Martins, and J. S. Baras. Consensus problems with directed Markovian communication patterns. In Proc. 2009 ACC, 2009.
[9] R. Olfati-Saber. Distributed Kalman filtering for sensor networks. In Proc. 46th IEEE CDC, 2007.
[10] R. Olfati-Saber and R. M. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Automat. Contr., 49:1520-1533, 2004.
[11] R. Orsi, U. Helmke, and J. B. Moore. A Newton-like method for solving rank constrained linear matrix inequalities. Automatica, 42:1875-1882, 2006.
[12] I. R. Petersen and A. V. Savkin. Robust Kalman Filtering for Signals and Systems with Large Uncertainties. Birkhäuser Boston, 1999.
[13] W. Ren and R. W. Beard. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Automat. Contr., 50:655-661, 2005.
[14] H. Robbins and D. Siegmund. A convergence theorem for nonnegative almost supermartingales and some applications. In J. S. Rustagi, Ed., Optimizing Methods in Statistics. Academic Press, 1971.
[15] B. Shen, Z. Wang, and Y. S. Hung. Distributed H∞-consensus filtering in sensor networks with multiple missing measurements: The finite-horizon case. Automatica, 46:1682-1688, 2010.
[16] B. Shen, Z. Wang, and X. Liu. A stochastic sampled-data approach to distributed H∞ filtering in sensor networks. IEEE Trans. Circuits Syst. I: Regular Papers, 58(9):2237-2246, 2011.
[17] M. V. Subbotin and R. S. Smith. Design of distributed decentralized estimators for formations with fixed and stochastic communication topologies. Automatica, 45:2491-2501, 2009.
[18] V. Ugrinovskii. Distributed robust filtering with H∞ consensus of estimates. Automatica, 47:1-13, 2011.
[19] V. Ugrinovskii and C. Langbort. Distributed H∞ consensus-based estimation of uncertain systems via dissipativity theory. IET Control Theory & App., 5:1458-1469, 2010.
[20] J. Xiong, V. A. Ugrinovskii, and I. R. Petersen. Local mode dependent decentralized stabilization of uncertain Markovian jump large-scale systems. IEEE Trans. Automat. Contr., 54:2632-2637, 2009.
[21] J. Yao, Z.-H. Guan, and D. J. Hill. Passivity-based control and synchronization of general complex dynamical networks. Automatica, 45:2107-2113, 2009.
[22] H. Zhang, F. L. Lewis, and A. Das. Optimal design for synchronization of cooperative systems: State feedback, observer and output feedback. IEEE Trans. Automat. Contr., 56:1948-1952, 2011.