From Global Linear Computations to Local Interaction Rules
Zak Costello and Magnus Egerstedt
School of Electrical and Computer Engineering
Georgia Institute of Technology
{zak.costello, magnus}@gatech.edu

Abstract— A network of locally interacting agents can be thought of as performing a distributed computation. But not all computations can be faithfully distributed. This paper investigates which global, linear transformations can be computed using local rules, i.e., rules which rely solely on information from adjacent nodes in a network. The main result states that a linear transformation is computable in finite time using local rules if and only if the transformation has positive determinant. An optimal control problem is solved for finding the local interaction rules, and simulations are performed to elucidate how optimal solutions can be obtained.
I. INTRODUCTION
One common theme when designing control and coordination mechanisms for distributed, multi-agent systems is that the information, on which decisions are based, is restricted to be shared among agents that are adjacent in the underlying information-exchange network, e.g., [1], [2], [3], [4]. As a result, local rules are needed for processing the information and coordinating the agents in the network in such a way that some global objective is achieved. Problems that fit this description can be found in a variety of applications, including power systems [5], [6], [7], formation control [8], [9], [10], [11], [12], distributed sensor networks [13], [14], smart textiles [15], and distributed optimization [16], [17]. In this paper we take initial steps towards developing a general theory of local implementability/computability of such global behaviors.

As such, one key aspect of algorithm design is the definition of local interaction rules that produce desired global behaviors. An example of this are consensus algorithms for computing averages in a distributed manner. In fact, consensus plays a role in many different applications, including multi-agent robotics, distributed sensor fusion, and power network control, e.g., [3], [6], [18]. To this end, let the scalar state of each node in a network be x_i ∈ R, with initial condition x_i(t_0) = ξ_i, i = 1, ..., n, where n is the number of nodes in the network. By stacking the states together in x ∈ R^n, we implicitly perform an asymptotic, global computation through the so-called consensus equation

  ẋ_i = − Σ_{j ∈ N_i} (x_i − x_j),   (1)

where N_i encodes a neighborhood relationship in the underlying information-exchange network. And, as long as the network is connected and undirected, all node values will converge to the same value, namely the average of the initial conditions, e.g., [2]. In other words,

  lim_{t→∞} x(t) = (1/n) 𝟙𝟙^T ξ,   (2)

where 𝟙 is the vector of length n with all entries equal to one, and ξ is the vector containing all the initial node values. As such, the consensus equation is asymptotically computing the average, which is a global property since it relies on the state of every node in the network.

In this work, we are interested in problems where networks are tasked with computing arbitrary linear transformations of the initial node states. In particular, we answer two fundamental questions: What global, linear transformations can be computed using local rules? How do we find the local rules that would compute a given linear transformation?
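The consensus computation in Equations 1 and 2 is easy to reproduce numerically. The sketch below is our own illustration, not code from the paper: it Euler-integrates ẋ = −Lx on a 5-node path graph, whose Laplacian L encodes the neighborhoods N_i, and checks that every node approaches the average of the (arbitrarily chosen) initial condition.

```python
import numpy as np

# Euler-integrate the consensus equation x_i' = -sum_{j in N_i}(x_i - x_j),
# written compactly as x' = -L x, on a 5-node path graph (arbitrary example).
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path adjacency
L = np.diag(A.sum(axis=1)) - A                                # graph Laplacian

xi = np.array([3.0, -1.0, 4.0, 1.0, 5.0])  # initial node values (xi)
x = xi.copy()
dt = 0.01
for _ in range(10_000):                    # integrate out to t = 100
    x -= dt * (L @ x)

# All nodes converge to the average of the initial condition (here 2.4),
# matching the limit (1/n) * ones * ones^T * xi in Equation 2.
print(np.round(x, 3))
```

Because 𝟙^T L = 0, the average of the states is invariant along the flow, which is why the consensus value is exactly the initial average.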
This paper answers both questions and presents existence guarantees of local rules for a given computation.

Some work has been done in the general area of obtaining global information with local interactions. In [19], a fixed weighting scheme was used to compute linear transformations on networks. That work focused on cases where each agent computes the same linear transformation, as is the case with the consensus computation where each node computes the average, while we, in this paper, do not wish to restrict ourselves to this special case. In a certain sense, the investigation in [20] follows this line of inquiry as well. There, quadratic invariance was used to establish whether or not a convex optimization problem exists whose solution is a decentralized implementation of a centralized feedback controller. [21] further expounds on this idea and provides a practical, graph-theoretic method for finding this distributed controller. Our work distinguishes itself from this body of work by using a time-varying weighting method, which admits the computation of global, linear transformations in finite time.

In fact, in this paper, we consider computations that are to be performed using local rules over a static and undirected information-exchange network. The local rules, once obtained, admit a decentralized implementation, where "decentralized" in this context means that each node in the network only needs to communicate state information among adjacent nodes in the network. In particular, we ask if it is possible to define local interaction laws such that x(t_f) = Tξ, given the linear transformation T and the initial conditions x(t_0) = ξ. Necessary and sufficient conditions are given for this to be possible, and they state that local interaction rules exist if and only if T has positive determinant.

The remainder of this paper is organized as follows: In Section II, the problem under consideration is introduced and the general class of admissible, local interaction rules is established.
In Section III, necessary and sufficient conditions are presented under which global, linear computations can be performed in a decentralized manner. In Section IV, an optimal control problem is formulated, which provides a way to find the interaction rules, i.e., the time-varying weighting functions, numerically, and in Section V two instantiations of the method are presented together with simulation results.

II. PROBLEM DEFINITION
To formalize what is meant by local interactions, we first need to discuss the information-exchange network over which the interactions are defined. To this end, let V be a vertex set with cardinality n, and E ⊂ V × V be an edge set with cardinality m, where we insist on (i,i) ∈ E, ∀i ∈ V, as well as (i,j) ∈ E ⇔ (j,i) ∈ E. Let G be the graph G = (V,E), where the assumptions on E imply that G is undirected and contains self-loops. We moreover assume that G is connected. As the main purpose with G is to encode adjacency information in the information-exchange network, we introduce the operator sparse(G) to capture these adjacencies, and we say that an n × n matrix M ∈ sparse(G) if (i,j) ∉ E ⇒ M_ij = 0.

There are a number of different ways in which local interactions can be defined. In this paper, we assume that they are given by time-varying, piecewise continuous weights associated with the edges in the network. If x_i ∈ R is the scalar state associated with node i ∈ V, we define a local interaction as a continuous-time process

  ẋ_i(t) = Σ_{j | (i,j) ∈ E} w_ij(t) x_j(t).   (3)

Note that we do not insist on w_ij = w_ji even though G is undirected, as shown in Figure 1. If we stack the states together in x = [x_1, ..., x_n]^T ∈ R^n, what we mean by local interactions is thus

  ẋ(t) = W(t) x(t),  W(t) ∈ sparse(G),   (4)

with solution

  x(t) = Φ(t, t_0) x(t_0),   (5)

where Φ is the state transition matrix associated with the system in Equation 4, e.g., [22].

Now, the purpose of the local interactions is to perform a global, linear computation. In other words, given the n × n matrix T and the initial condition x(t_0) = ξ, what we would like to do is find W(t) ∈ sparse(G), t ∈ [t_0, t_f], such that

  x(t_f) = Tξ.   (6)

Fig. 1. An example of the sparsity structure and the local interaction rules used in this paper.
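To make the sparse(G) constraint and the dynamics in Equation 4 concrete, here is a small sketch of our own (the path graph and the weight functions are arbitrary illustrative choices, not taken from the paper). It builds the sparsity mask for a path graph with self-loops, propagates the state transition matrix with Euler steps, and applies it to an initial condition as in Equation 5.

```python
import numpy as np

# Sketch of the local interaction dynamics (Equation 4): W(t) is constrained
# to the sparsity pattern of a path graph with self-loops.  The particular
# weight functions below are arbitrary illustrative choices.
n = 4
mask = np.eye(n, dtype=bool)                  # self-loops (i, i)
for i in range(n - 1):                        # undirected path edges
    mask[i, i + 1] = mask[i + 1, i] = True

def W(t):
    """A time-varying weight matrix in sparse(G): zero on every non-edge."""
    M = np.sin(t + np.arange(n * n).reshape(n, n))  # arbitrary smooth weights
    return np.where(mask, M, 0.0)

# Propagate the state transition matrix, dPhi/dt = W(t) Phi, Phi(t0) = I.
Phi, t, dt = np.eye(n), 0.0, 1e-4
while t < 1.0:
    Phi = Phi + dt * (W(t) @ Phi)
    t += dt

xi = np.array([1.0, 2.0, 3.0, 4.0])
print(Phi @ xi)                 # x(tf) = Phi(tf, t0) xi, as in Equation 5
print(np.linalg.det(Phi) > 0)   # the transition matrix is never singular
```

Note that w_ij ≠ w_ji is allowed here even though G is undirected, exactly as in Figure 1.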
But, comparing this expression to Equation 5, this simply means that what we would like is

  Φ(t_f, t_0) = T.   (7)

If this was indeed the case, then the local interactions, as defined through W(t), would indeed compute Tξ over the interval [t_0, t_f] for all possible values of ξ, i.e., one can think of the network as a black box that takes ξ as the input at time t_0 and, at time t_f, returns Tξ as the output.

As a final observation before we can formulate the general problem of performing global, linear computations using local interactions, we note that the state transition matrix satisfies the same dynamics as Equation 4, i.e.,

  dΦ(t, t_0)/dt = W(t) Φ(t, t_0),   (8)

with initial condition Φ(t_0, t_0) = I, where I is the n × n identity matrix.

Problem 1 [Local Computation]
Given a linear transformation T and a connected graph G, find W(t) ∈ sparse(G), t ∈ [t_0, t_f], such that

  Ẋ(t) = W(t) X(t),   (9)

with boundary conditions X(t_0) = I, X(t_f) = T.

III. ON THE EXISTENCE OF SOLUTIONS
The main point with this paper is an exploration of what linear transformations T admit a local implementation, i.e., for what T Problem 1 has a solution. In this section, we develop necessary and sufficient conditions for this to be the case.

We start by observing that since X(t) is really the state transition matrix Φ(t, t_0), it is always invertible,

  X(t)^{−1} = Φ(t, t_0)^{−1} = Φ(t_0, t).   (10)

As a direct consequence of this, T has to be invertible for a solution to Problem 1 to exist, i.e., we need that det(T) ≠ 0. But, as X(t_0) = I, we have that det(X(t_0)) = 1 > 0. Moreover, the determinant of a matrix depends continuously on its entries, and therefore, for det(X(τ)) < 0 to hold for some τ ∈ (t_0, t_f], there would have to exist a τ′ ∈ (t_0, τ) such that det(X(τ′)) = 0. But this cannot happen since X is always invertible. From this it directly follows that for Problem 1 to have a solution, T has to satisfy det(T) > 0.

To state this fact more compactly, let GL_n^+(R) denote the set of all n × n, real matrices with positive determinant. We have thus established the following necessary condition for the existence of a solution:

Lemma 1.
A solution to Problem 1 exists only if T ∈ GL_n^+(R).

One consequence of Lemma 1 is that it is impossible to use local rules, as understood in this paper, to achieve consensus in finite time. This follows directly from the fact that the consensus computation is given by the linear map

  T_cons = (1/n) 𝟙𝟙^T,   (11)

where 𝟙 is a vector of length n, with all entries equal to one. And, rank(T_cons) = 1, i.e., det(T_cons) = 0. We state this fact as a corollary:

Corollary 1.
There is no solution to Problem 1 which admits finite-time consensus.

Now that we have established necessary conditions for Problem 1 to have a solution, we turn our attention to sufficient conditions. And, surprisingly enough, T ∈ GL_n^+(R) turns out to be both necessary and sufficient for a solution to exist, which constitutes the main result in this paper:

Theorem 1.
A solution to Problem 1 exists if and only if T ∈ GL_n^+(R).

As we have already established necessity, what must be shown is that whenever det(T) > 0, there is a W(t) ∈ sparse(G) that drives X from I to T. The remainder of this section is devoted to the establishment of this fact. However, before we can give the proof of Theorem 1, a number of supporting results are needed, involving the controllability of nonlinear, drift-free systems, i.e., systems of the form

  ẋ = Σ_{i=1}^{p} g_i(x) u_i,   (12)

where x ∈ R^n is the state of the system, and u_1, ..., u_p ∈ R are the control inputs. For the sake of easy reference, we start by recalling Chow's Theorem, as formulated in [23], for such drift-free systems. (Note that Corollary 1 applies to any agreement across the nodes, i.e., not only to average consensus.)
Theorem 2 (Chow's Theorem, e.g., [23]). The system in Equation 12 is locally controllable about a point x_0 if and only if

  dim(Δ(x_0)) = n,   (13)

where Δ is the involutive closure of the distribution span{g_1, ..., g_p}.

The system is moreover controllable if it is locally controllable everywhere. And, the proof that T ∈ GL_n^+(R) is sufficient for Problem 1 to have a solution will hinge on showing that the dynamics, as defined through the local interaction rules in Equation 4, is indeed controllable everywhere on GL_n^+(R). To this end, we first must rewrite the dynamics in Problem 1 on the appropriate form. For this, we need the index matrix I_ij ∈ R^{n×n}, which has a one at the i-th row and j-th column, and zeros everywhere else. The index matrix allows us to rewrite Ẋ = WX as

  Ẋ = Σ_{i=1}^{n} Σ_{j=1}^{n} (W ⊙ I_ij) X,   (14)

where the ⊙ symbol represents the element-wise matrix product, i.e.,

  Ẋ = (w_11 I_11 + ⋯ + w_nn I_nn) X,   (15)

where we have suppressed the explicit dependence on t for the sake of notational ease. Rearranging the terms and letting

  g_ij(X) = I_ij X,   (16)

we get the drift-free matrix formulation

  Ẋ = Σ_{i=1}^{n} Σ_{j | (i,j) ∈ E} g_ij(X) w_ij.   (17)

To clarify, g_ij(X) is a matrix whose i-th row contains the j-th row of X, with the rest of the elements in the matrix equal to 0,

  g_ij(X) = [ 0 ⋯ 0 ; ⋮ ; X_j1 ⋯ X_jn ; ⋮ ; 0 ⋯ 0 ],   (18)

with the row (X_j1, ..., X_jn) appearing as the i-th row. As a final step towards a formulation that is amenable to Chow's Theorem, let the vectorized version of g_ij be given by ĝ_ij = vec(g_ij), resulting in the vectorized version of Equation 17,

  vec(Ẋ) = Σ_{i=1}^{n} Σ_{j | (i,j) ∈ E} ĝ_ij(X) w_ij.   (19)

The first order of business towards establishing controllability of this system is the derivation of the Lie brackets for the system in Equation 19.

Lemma 2.

  [ĝ_ij(X), ĝ_kl(X)] = −ĝ_il(X) if j = k, i ≠ l;  ĝ_kj(X) if i = l, j ≠ k;  0 otherwise.   (20)

Proof.
The Lie bracket [ĝ_ij, ĝ_kl] is given by

  (∂ĝ_kl/∂vec(X)) ĝ_ij − (∂ĝ_ij/∂vec(X)) ĝ_kl,   (21)

where we have suppressed the explicit dependence on X. Substituting Equation 16 into Equation 21, the above expression yields

  (∂vec(I_kl X)/∂vec(X)) vec(I_ij X) − (∂vec(I_ij X)/∂vec(X)) vec(I_kl X),   (22)

which can be rewritten, using the Kronecker product, as

  (∂((I ⊗ I_kl)vec(X))/∂vec(X)) (I ⊗ I_ij) vec(X) − (∂((I ⊗ I_ij)vec(X))/∂vec(X)) (I ⊗ I_kl) vec(X).   (23)

Taking the derivatives yields

  (I ⊗ I_kl)(I ⊗ I_ij) vec(X) − (I ⊗ I_ij)(I ⊗ I_kl) vec(X).   (24)

Using the mixed-product property of the Kronecker product, this can be further simplified as

  (I ⊗ I_kl I_ij) vec(X) − (I ⊗ I_ij I_kl) vec(X),   (25)

i.e., the Lie bracket in Equation 21 becomes

  [ĝ_ij(X), ĝ_kl(X)] = vec(I_kl I_ij X) − vec(I_ij I_kl X).   (26)

Now, using the fact that I_ij I_kl = I_il if j = k and I_ij I_kl = 0 otherwise, we can break down Equation 26 into three cases: First, if j = k and i ≠ l, we get [ĝ_ij, ĝ_kl] = −vec(I_il X) = −ĝ_il. The second case occurs when i = l and j ≠ k, in which case [ĝ_ij, ĝ_kl] = vec(I_kj X) = ĝ_kj. Otherwise, the Lie bracket is 0, and the lemma follows.

Now that Lie brackets can be computed in general for this problem, we must determine if the involutive closure of the distribution associated with the system in Equation 19 contains enough independent vector fields for local controllability. To help with this determination, we provide the following lemma.

Lemma 3.
If node i is path-connected to node j, then ĝ_ij(X) is in the distribution Δ(X).

Proof. That node i is path-connected to node j means that there is a path through adjacent nodes in the graph G that starts at node i and ends at node j. Assume that the path goes through the nodes N_1, ..., N_q, i.e., N_1 is adjacent to N_2, N_2 is adjacent to N_3, and so forth, while N_1 = i and N_q = j. Since these nodes are adjacent, we, by definition, have that ĝ_{N_1 N_2}, ĝ_{N_2 N_3}, ..., ĝ_{N_{q−1} N_q} ∈ Δ(X).

The involutive closure contains every possible Lie bracket that can be recursively created from elements of Δ(X), which implies that the problem is to create ĝ_ij from some combination of Lie brackets from elements in Δ(X). And, from Lemma 2, we know that [ĝ_{N_1 N_2}, ĝ_{N_2 N_3}] is equal to −ĝ_{N_1 N_3}. Applying Lemma 2 again gives [−ĝ_{N_1 N_3}, ĝ_{N_3 N_4}] = ĝ_{N_1 N_4}. This procedure can be repeated until we arrive at one of two possible cases. If q is even, the result is [−ĝ_{N_1 N_{q−1}}, ĝ_{N_{q−1} N_q}] = ĝ_{N_1 N_q}. If q is odd, we get [ĝ_{N_1 N_{q−1}}, ĝ_{N_{q−1} N_q}] = −ĝ_{N_1 N_q}. In either case, we are able to construct ±ĝ_{N_1 N_q} from previous Lie brackets, as shown in Figure 2. And, as N_1 = i and N_q = j, we have ĝ_ij ∈ Δ(X).

Fig. 2. An example of the construction in the proof of Lemma 3, with nodes i and j being the endpoints of the path.

To establish that the system is controllable on GL_n^+(R), Δ(X) must have dimension n² everywhere on this set, which is the topic of the next lemma.

Lemma 4. If G is connected, then Δ(X) has dimension n² if and only if rank(X) = n.

Proof. To prove this lemma, we need to show that the implication goes both ways. Assume first that dim(Δ(X)) = n². If G is connected then, by Lemma 3,

  Δ(X) = span{ĝ_ij, ∀(i,j) ∈ V × V}.
(27)

Since V × V has cardinality n², we can conclude that each ĝ_ij is in Δ(X). For the purpose of the proof, it is convenient to go back to the matrix formulation, and we recall that ĝ_ij = vec(g_ij). As such, we will use the matrix form g_ij to construct X. And, since the goal is to form a matrix with rank n, only n linearly independent matrices are needed. So, we arbitrarily choose to form X from the "diagonal" set {g_11, g_22, ..., g_nn}. Using the fact that g_ij = I_ij X, we can write

  Σ_{i=1}^{n} g_ii = Σ_{i=1}^{n} I_ii X,

which simplifies to

  Σ_{i=1}^{n} g_ii = X.   (28)

And, since X is a linear combination of n linearly independent matrices, rank(X) = n, and the first implication follows.

Next, we must show that

  rank(X) = n ⇒ dim(Δ(X)) = n²,   (29)

which we do by contradiction. Using the expression g_ij = I_ij X, n² matrices can be formed from X. Let us assume that they are not linearly independent. This implies that there exists a set of coefficients α_ij such that, for some (k,l),

  Σ_{(i,j) ≠ (k,l)} g_ij α_ij = g_kl.   (30)

Since X has full rank, X can be removed from Equation 30 based on the fact that g_ij = I_ij X, yielding

  Σ_{(i,j) ≠ (k,l)} I_ij α_ij = I_kl.   (31)

By definition of the index matrix, Equation 31 cannot be true, since every matrix in the sum on the left has a value of zero where I_kl has a value of 1. Therefore, we have reached a contradiction and can conclude that dim(Δ(X)) = n².

Since X is really a state transition matrix, i.e., it is indeed invertible (with rank(X) = n), the system in Equation 17 is locally controllable everywhere on GL_n^+(R) as long as the underlying graph G is connected:

Theorem 3.
The system Ẋ = WX, W ∈ sparse(G), is locally controllable everywhere on GL_n^+(R) if G is connected.

Theorem 3 and Lemma 1 give us all the ammunition needed to prove the main result in this paper, namely Theorem 1:
Proof of Theorem 1.
Lemma 1 tells us that a solution only exists if T ∈ GL_n^+(R), so what remains is to establish that this is indeed sufficient. Hence, assume that T ∈ GL_n^+(R). Since I ∈ GL_n^+(R), and GL_n^+(R) is connected [24], there is a continuous curve of matrices in GL_n^+(R) that connects I and T. And, by Theorem 3, every point along the path connecting I and T is locally controllable. The system being drift-free moreover implies that it can flow along this curve, e.g., [25]. Therefore, a solution to Problem 1 exists if T ∈ GL_n^+(R).

If we return to the consensus problem, we have already established that T_cons in Equation 11 is not computable in finite time using local rules. However, consider instead the transformation

  T̃_cons = [ 1/n 1/n ⋯ 1/n ; 0 1 ⋯ 0 ; ⋮ ⋱ ⋮ ; 0 0 ⋯ 1 ].   (32)

We have

  det(T̃_cons) = 1/n,   (33)

and, as such, it is computable using local rules. In this case, the network average is only computed by a single node (node 1 in this case), while the remaining nodes return to their initial values at the end of the computation. This can in fact be generalized to any scalar, non-zero, linear map ℓ : R^n → R through

  T_ℓ ξ = [ ℓ(ξ) ; ξ_2 ; ⋮ ; ξ_n ],

where we have assumed that ℓ(ξ) depends on ξ_1. The point with this is that it is possible to compute any scalar, non-zero, linear map as long as the computation only has to take place at a single node.
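A quick numerical illustration of the contrast between Corollary 1 and Equation 32 (our own sketch, with n = 4): the averaging map T_cons is singular, while the single-node variant T̃_cons has determinant 1/n and is therefore locally computable by Theorem 1.

```python
import numpy as np

n = 4
T_cons = np.ones((n, n)) / n      # every node computes the average (Eq. 11)
print(np.linalg.matrix_rank(T_cons), np.linalg.det(T_cons))  # rank 1, det 0

T_tilde = np.eye(n)
T_tilde[0, :] = 1.0 / n           # only node 1 computes the average (Eq. 32)
print(np.linalg.det(T_tilde))     # 1/n = 0.25 > 0, so T_tilde is computable

xi = np.array([1.0, 2.0, 3.0, 6.0])
print(T_tilde @ xi)               # node 1 holds the average 3.0, while the
                                  # other nodes keep their initial values
```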
IV. OPTIMAL LOCAL INTERACTIONS
Just because we know that a computation Tξ can be done using local rules, it does not follow that we can (easily) find these rules, encoded through W(t) ∈ sparse(G), such that Ẋ = WX, X(t_0) = I, X(t_f) = T. In this section, we address this problem in the context of optimal control. Let the cost be given by

  J(W) = (1/2) ∫_{t_0}^{t_f} ||W(t)||_F² dt,   (34)

where ||·||_F is the Frobenius norm. The resulting constrained minimization problem becomes

Problem 2 [Optimal Local Interactions]

  min_W J(W) = (1/2) ∫_{t_0}^{t_f} ||W(t)||_F² dt   (35)

  such that Ẋ = WX,  W(t) ∈ sparse(G), ∀t ∈ [t_0, t_f],  X(t_0) = I, X(t_f) = T.   (36)

The Hamiltonian associated with Problem 2 (e.g., [26]), with costate matrix λ, is given by

  H = vec(λ)^T vec(WX) + (1/2) ||W||_F².   (37)

[Footnote: If ℓ(ξ) does not depend on ξ_1, simply pick another node in the network that ℓ(ξ) does depend on, as the node where the computation takes place.]

We can rewrite the Hamiltonian as

  H = Σ_{i=1}^{n} Σ_{j | (i,j) ∈ E} Σ_{k=1}^{n} λ_ik w_ij X_jk + (1/2) Σ_{i=1}^{n} Σ_{j | (i,j) ∈ E} w_ij².   (38)

The optimality conditions are

  ∂H/∂w_ij = Σ_{k=1}^{n} λ_ik X_jk + w_ij = 0,   (39)

i.e., the optimal weights are given by

  w_ij = −Σ_{k=1}^{n} λ_ik X_jk,   (40)

which yields m + n optimality conditions. This is also the number of nonzero values in the W matrix. We get the costate equations from the derivative of the Hamiltonian with respect to X:

  λ̇_ij = −∂H/∂X_ij = −Σ_{k | (i,k) ∈ E} w_ki λ_kj.   (41)

By substituting the optimality conditions into both the state and costate equations, we get a system of differential equations with initial and final conditions on X. The resulting two-point boundary value problem becomes

  Ẋ_ij = −Σ_{k | (i,k) ∈ E} X_kj Σ_{l=1}^{n} λ_il X_kl,  X(t_0) = I, X(t_f) = T,   (42)

  λ̇_ij = Σ_{k | (i,k) ∈ E} λ_kj Σ_{l=1}^{n} λ_kl X_il,

which can be solved numerically, as will be seen in the next section.

V. SIMULATIONS AND EXAMPLES
Computing linear transformations of states can be useful in a variety of network applications. In this section we explore two concrete examples. The first involves improving the convergence rates in distributed computations, and the second involves information exchange among non-local agents.
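Both examples below solve the boundary value problem in Equation 42 by shooting. As a minimal illustration of the mechanics (our own sketch, not the authors' solver), consider a 2-node network whose graph is complete, so that the sparse(G) constraint is vacuous; the optimality condition of Equation 40 then reads W = −λX^T in matrix form, and the costate obeys λ̇ = −W^T λ. A Newton iteration on the shooting residual X(t_f; λ(t_0)) − T recovers the initial costate. The target T is an arbitrary near-identity matrix with positive determinant.

```python
import numpy as np

# Single-shooting sketch for the two-point boundary value problem in
# Equation 42, on a 2-node complete graph (so sparse(G) imposes nothing).
n, tf, steps = 2, 1.0, 400
T = np.array([[1.1, 0.2],
              [0.0, 1.0]])               # target with det(T) = 1.1 > 0

def shoot(lam0):
    """Integrate X' = W X, lam' = -W^T lam forward from X = I, lam = lam0."""
    X, lam, dt = np.eye(n), lam0.copy(), tf / steps
    for _ in range(steps):
        Wt = -lam @ X.T                  # optimal weights (Equation 40)
        X, lam = X + dt * (Wt @ X), lam - dt * (Wt.T @ lam)
    return X

# Newton iteration on the shooting residual F(lam0) = X(tf; lam0) - T.
lam0, eps = np.zeros((n, n)), 1e-6
for _ in range(15):
    F = (shoot(lam0) - T).ravel()
    J = np.empty((n * n, n * n))
    for k in range(n * n):               # finite-difference Jacobian
        d = np.zeros(n * n)
        d[k] = eps
        J[:, k] = ((shoot(lam0 + d.reshape(n, n)) - T).ravel() - F) / eps
    lam0 -= np.linalg.solve(J, F).reshape(n, n)

print(np.linalg.norm(shoot(lam0) - T))   # residual is small: both boundary
                                         # conditions are met
```

The single Newton solve suffices here because T is close to the identity; for targets far from the identity the shooting problem can become ill-conditioned, which is exactly the issue addressed in Section V-B.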
A. Improving Convergence Rates
Consider, again, the consensus equation,

  ẋ = −L_s x,   (43)

where x ∈ R^n is the state of the system and L_s is the graph Laplacian associated with a given, sparse yet connected graph G_s. These dynamics are known to converge exponentially in the algebraic connectivity of the graph G_s, e.g., [3], where the algebraic connectivity is given by the second smallest eigenvalue of L_s.

For some applications this convergence rate may not be fast enough. Instead, one might want to make the system behave as if the graph was more dense, thus improving the convergence rate. Let G_d be a dense graph, with corresponding consensus dynamics

  ż = −L_d z,   (44)

where z represents a "desired" state (how we would like x to behave) and L_d is the graph Laplacian of G_d. The desired system in Equation 44 has state transition matrix Φ_d(t, t_0), with Φ_d(t_0, t_0) = I, and

  Φ_d(t_f, t_0) = e^{−L_d(t_f − t_0)},   (45)

which we thus set as the linear transformation we would like to compute by the original, sparse network. In other words, let T = e^{−L_d(t_f − t_0)}. The goal is to compute T using local, time-varying weights on the graph G_s. And, since matrix exponentials are invertible, T ∈ GL_n^+(R), and a solution does indeed exist.

As an instantiation of this, consider a 5-node system. We can solve Problem 2 numerically, using a shooting method, for this system in order to find W such that

  Ẋ = WX,  W ∈ sparse(G_s),  X(t_0) = I, X(t_f) = e^{−L_d(t_f − t_0)}.   (46)

The solution to this problem yields a set of time-varying weights W and matrices X. The weights can be executed in a decentralized manner, once they have been obtained (using centralized computations). The result is a sparse network that acts at a higher rate, as if it was indeed dense. This is shown in Figures 3 - 5. In Figure 5, the agreement error, ||x(t) − (1/n)𝟙𝟙^T x(0)||, is shown both for the original (sparse) consensus dynamics (Equation 43) and for the optimal, "densified" version.
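The existence claim for the densification target is easy to verify numerically: since det(e^A) = e^{trace(A)} > 0 for any square A, the target T = e^{−L_d(t_f − t_0)} always lies in GL_n^+(R). The sketch below (ours; the paper does not specify which dense graph was used) checks this for the Laplacian of the complete graph K_5 as a stand-in for G_d.

```python
import numpy as np

# T = exp(-L_d * tf) for the Laplacian of the complete graph K_5 (our choice
# of dense graph G_d for illustration).
n, tf = 5, 0.1
L_d = n * np.eye(n) - np.ones((n, n))   # K_n Laplacian: degree n-1, offdiag -1

# Matrix exponential via the eigendecomposition of the symmetric Laplacian.
w, V = np.linalg.eigh(-L_d * tf)
T = V @ np.diag(np.exp(w)) @ V.T

# det(e^A) = e^{trace(A)}: strictly positive, so T is in GL_n^+(R).
print(np.linalg.det(T), np.exp(-np.trace(L_d) * tf))
```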
Fig. 3. This plot shows the evolution of each element of the state transition matrix Φ(t, t_0) over the time interval from t_0 to t_f. Since Φ(t_0, t_0) = I, all elements are either 0 or 1 initially.

B. Swapping Node Values
As another example, consider the situation when the linear transformation represents a reordering (or swapping) of states. For a 4-node case, where agents 1 and 2 and agents 3 and 4 swap values, the desired transformation is
Fig. 4. The local weight functions each agent uses in order to compute the linear transformation given by e^{−L_d(t_f − t_0)}. The weights are computed by solving Problem 2 numerically.
Fig. 5. The agreement errors for the original, sparse consensus equation (dashed) and the "densified" version (solid). As expected, the latter has a higher rate of convergence.

  T_swap = [ 0 1 0 0 ; 1 0 0 0 ; 0 0 0 1 ; 0 0 1 0 ].   (47)

However, the linear interpolation between I and T_swap contains a singular matrix, which makes the two-point boundary problem numerically ill-conditioned when using shooting methods, e.g., [27]. A way around this problem is to avoid this singular matrix by solving two sequential two-point boundary problems. As an example, in the first iteration, we let the boundary conditions be X(t_0) = I, X((t_f − t_0)/2) = T_1. For the second iteration, they are X((t_f − t_0)/2) = T_1, X(t_f) = T_swap, where T_1 ∈ GL_4^+(R) is an intermediate transformation chosen so that neither of the two linear interpolations passes through a singular matrix (Equation 48). This sequential approach avoids the numerical ill-conditioning, and the solution is shown in Figures 6 - 8.
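The ill-conditioning described above is easy to see numerically (our own sketch): T_swap itself has determinant +1, since it is the composition of two transpositions, but the straight-line interpolation (1−s)I + sT_swap is singular exactly at s = 1/2.

```python
import numpy as np

T_swap = np.array([[0, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=float)

print(np.linalg.det(T_swap))        # +1: two transpositions, even permutation

for s in (0.25, 0.5, 0.75):         # scan the linear interpolation to T_swap
    M = (1 - s) * np.eye(4) + s * T_swap
    print(s, np.linalg.det(M))      # the determinant vanishes at s = 0.5
```

This is why a single shooting solve from I to T_swap must pass near a singular transition matrix, while the two-stage formulation can route around it.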
Fig. 6. The evolution of the state transition matrix for the 4-node swapping problem, with Φ(t_0, t_0) = I, Φ((t_f − t_0)/2, t_0) = T_1, and Φ(t_f, t_0) = T_swap.
Fig. 7. The weight functions define the local interactions needed to achieve the swap in the 4-node case.
VI. CONCLUSIONS
In this paper, a step was taken towards computing arbitrary global functions on networks with local interaction rules. In particular, it presented a method which allows a networked system to compute global, linear transformations using only local rules. We derived necessary and sufficient conditions under which it is possible to use a distributed, time-varying weighting scheme to compute the transformation T for undirected, connected networks with fixed topology. Specifically, we showed that the necessary and sufficient condition for T to be locally computable is that it has positive determinant, i.e., T ∈ GL_n^+(R).

ACKNOWLEDGMENT
This work was sponsored in part by a grant from the US Air Force Office for Sponsored Research. The authors would like to thank Professor Mark Costello at the Georgia Institute of Technology for his advice regarding this work.

Fig. 8. The evolution of the node states for the swap problem. The initial state is x(t_0) = [1, 2, …]^T and the final state is x(t_f) = [2, 1, …]^T, i.e., the first and second states swapped values and the third and fourth states swapped values.

REFERENCES

[1] F. Bullo, J. Cortés, and S. Martínez,
Distributed Control of Robotic Networks: A Mathematical Approach to Motion Coordination Algorithms. Princeton University Press, 2009.
[2] M. Mesbahi and M. Egerstedt, Graph Theoretic Methods in Multiagent Networks. Princeton University Press, 2010.
[3] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[4] W. Ren and R. W. Beard, Distributed Consensus in Multi-vehicle Cooperative Control. Springer-Verlag, 2008.
[5] A. L. Dimeas and N. D. Hatziargyriou, "Operation of a multiagent system for microgrid control," IEEE Transactions on Power Systems, vol. 20, no. 3, pp. 1447–1455, 2005.
[6] T. Ramachandran, Z. Costello, P. Kingston, S. Grijalva, and M. Egerstedt, "Distributed power allocation in prosumer networks," in IFAC NecSys, 2012.
[7] S. Grijalva, M. Costley, and N. Ainsworth, "Prosumer-based control architecture for the future electricity grid," in IEEE Multi-Conference on Systems and Control, 2011.
[8] T. Balch and R. C. Arkin, "Behavior-based formation control for multirobot teams," IEEE Transactions on Robotics and Automation, vol. 14, no. 6, pp. 926–939, 1998.
[9] M. Ji and M. Egerstedt, "Distributed coordination control of multi-agent systems while preserving connectedness," IEEE Transactions on Robotics, vol. 23, no. 4, pp. 693–703, 2007.
[10] A. Jadbabaie, J. Lin, and A. S. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988–1001, 2003.
[11] H. Tanner, A. Jadbabaie, and G. Pappas, "Stable flocking of mobile agents, part II: Dynamic topology," in Proc. 42nd IEEE Conference on Decision and Control, 2003.
[12] N. Michael and V. Kumar, "Controlling shapes of ensembles of robots of finite size with nonholonomic constraints," in Robotics: Science and Systems (RSS), 2008.
[13] K. Römer and F. Mattern, "The design space of wireless sensor networks," IEEE Wireless Communications, vol. 11, no. 6, pp. 54–61, 2004.
[14] F. Zhang and N. Leonard, "Coordinated patterns of unit speed particles on a closed curve," Systems and Control Letters, vol. 56, no. 6, pp. 397–407, 2007.
[15] D. Marculescu, R. Marculescu, N. H. Zamora, P. Stanley-Marbell, P. K. Khosla, S. Park, S. Jayaraman, S. Jung, C. Lauterbach, W. Weber, et al., "Electronic textiles: A platform for pervasive computing," Proceedings of the IEEE, vol. 91, no. 12, pp. 1995–2018, 2003.
[16] J. Cortés and F. Bullo, "Coordination and geometric optimization via distributed dynamical systems," SIAM Journal on Control and Optimization, vol. 44, no. 5, pp. 1543–1574, 2005.
[17] A. Nedic, A. Ozdaglar, and A. Parrilo, "Constrained consensus and optimization in multi-agent networks," IEEE Transactions on Automatic Control, vol. 55, no. 4, pp. 922–938, 2010.
[18] R. Olfati-Saber and J. S. Shamma, "Consensus filters for sensor networks and distributed sensor fusion," in Proc. 44th IEEE Conference on Decision and Control and 2005 European Control Conference (CDC-ECC '05), 2005, pp. 6698–6703.
[19] S. Sundaram and C. N. Hadjicostis, "Distributed function calculation and consensus using linear iterative strategies," IEEE Journal on Selected Areas in Communications, vol. 26, no. 4, pp. 650–660, 2008.
[20] M. Rotkowitz and S. Lall, "A characterization of convex problems in decentralized control," IEEE Transactions on Automatic Control, vol. 51, no. 2, pp. 274–286, 2006.
[21] J. Swigart and S. Lall, "A graph-theoretic approach to distributed control over networks," in Proc. 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC 2009), 2009, pp. 5409–5414.
[22] R. Brockett, Finite Dimensional Linear Systems. John Wiley & Sons, Inc., 1970.
[23] S. Sastry, Nonlinear Systems: Analysis, Stability, and Control. Springer New York, 1999, vol. 10.
[24] G. Strang, Introduction to Linear Algebra. Wellesley-Cambridge Press, 1993.
[25] R. W. Brockett, "System theory on group manifolds and coset spaces," SIAM Journal on Control, vol. 10, no. 2, pp. 265–284, 1972.
[26] D. Liberzon, Calculus of Variations and Optimal Control Theory: A Concise Introduction. Princeton University Press, 2012.
[27] W. Press, S. Teukolsky, W. Vetterling, and B. Flannery,